00:00:00.002 Started by upstream project "autotest-nightly" build number 4130 00:00:00.002 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3492 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.002 Started by timer 00:00:00.048 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.049 The recommended git tool is: git 00:00:00.049 using credential 00000000-0000-0000-0000-000000000002 00:00:00.050 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.076 Fetching changes from the remote Git repository 00:00:00.079 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.125 Using shallow fetch with depth 1 00:00:00.125 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.125 > git --version # timeout=10 00:00:00.186 > git --version # 'git version 2.39.2' 00:00:00.186 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.248 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.248 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.097 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.111 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.125 Checking out Revision 7510e71a2b3ec6fca98e4ec196065590f900d444 (FETCH_HEAD) 00:00:04.125 > git config core.sparsecheckout # timeout=10 00:00:04.137 > git read-tree -mu HEAD # timeout=10 00:00:04.154 > git checkout -f 7510e71a2b3ec6fca98e4ec196065590f900d444 
# timeout=5 00:00:04.176 Commit message: "kid: add issue 3541" 00:00:04.176 > git rev-list --no-walk 7510e71a2b3ec6fca98e4ec196065590f900d444 # timeout=10 00:00:04.257 [Pipeline] Start of Pipeline 00:00:04.272 [Pipeline] library 00:00:04.274 Loading library shm_lib@master 00:00:04.275 Library shm_lib@master is cached. Copying from home. 00:00:04.297 [Pipeline] node 00:00:04.307 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.309 [Pipeline] { 00:00:04.323 [Pipeline] catchError 00:00:04.324 [Pipeline] { 00:00:04.337 [Pipeline] wrap 00:00:04.346 [Pipeline] { 00:00:04.354 [Pipeline] stage 00:00:04.356 [Pipeline] { (Prologue) 00:00:04.620 [Pipeline] sh 00:00:04.906 + logger -p user.info -t JENKINS-CI 00:00:04.924 [Pipeline] echo 00:00:04.926 Node: GP11 00:00:04.934 [Pipeline] sh 00:00:05.228 [Pipeline] setCustomBuildProperty 00:00:05.239 [Pipeline] echo 00:00:05.240 Cleanup processes 00:00:05.246 [Pipeline] sh 00:00:05.528 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.528 2927294 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.540 [Pipeline] sh 00:00:05.819 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.819 ++ grep -v 'sudo pgrep' 00:00:05.819 ++ awk '{print $1}' 00:00:05.819 + sudo kill -9 00:00:05.819 + true 00:00:05.833 [Pipeline] cleanWs 00:00:05.843 [WS-CLEANUP] Deleting project workspace... 00:00:05.843 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.848 [WS-CLEANUP] done 00:00:05.853 [Pipeline] setCustomBuildProperty 00:00:05.864 [Pipeline] sh 00:00:06.141 + sudo git config --global --replace-all safe.directory '*' 00:00:06.255 [Pipeline] httpRequest 00:00:06.938 [Pipeline] echo 00:00:06.939 Sorcerer 10.211.164.101 is alive 00:00:06.947 [Pipeline] retry 00:00:06.948 [Pipeline] { 00:00:06.959 [Pipeline] httpRequest 00:00:06.964 HttpMethod: GET 00:00:06.964 URL: http://10.211.164.101/packages/jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:00:06.965 Sending request to url: http://10.211.164.101/packages/jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:00:06.967 Response Code: HTTP/1.1 200 OK 00:00:06.967 Success: Status code 200 is in the accepted range: 200,404 00:00:06.967 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:00:07.706 [Pipeline] } 00:00:07.717 [Pipeline] // retry 00:00:07.724 [Pipeline] sh 00:00:08.005 + tar --no-same-owner -xf jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:00:08.021 [Pipeline] httpRequest 00:00:08.504 [Pipeline] echo 00:00:08.506 Sorcerer 10.211.164.101 is alive 00:00:08.514 [Pipeline] retry 00:00:08.516 [Pipeline] { 00:00:08.527 [Pipeline] httpRequest 00:00:08.530 HttpMethod: GET 00:00:08.531 URL: http://10.211.164.101/packages/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:08.531 Sending request to url: http://10.211.164.101/packages/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:08.550 Response Code: HTTP/1.1 200 OK 00:00:08.550 Success: Status code 200 is in the accepted range: 200,404 00:00:08.550 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:51.743 [Pipeline] } 00:00:51.760 [Pipeline] // retry 00:00:51.768 [Pipeline] sh 00:00:52.049 + tar --no-same-owner -xf spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:55.354 [Pipeline] sh 00:00:55.636 + git -C spdk log 
--oneline -n5 00:00:55.636 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut 00:00:55.636 a67b3561a dpdk: update submodule to include alarm_cancel fix 00:00:55.636 43f6d3385 nvmf: remove use of STAILQ for last_wqe events 00:00:55.636 9645421c5 nvmf: rename nvmf_rdma_qpair_process_ibv_event() 00:00:55.636 e6da32ee1 nvmf: rename nvmf_rdma_send_qpair_async_event() 00:00:55.648 [Pipeline] } 00:00:55.663 [Pipeline] // stage 00:00:55.675 [Pipeline] stage 00:00:55.677 [Pipeline] { (Prepare) 00:00:55.692 [Pipeline] writeFile 00:00:55.703 [Pipeline] sh 00:00:55.981 + logger -p user.info -t JENKINS-CI 00:00:55.994 [Pipeline] sh 00:00:56.278 + logger -p user.info -t JENKINS-CI 00:00:56.289 [Pipeline] sh 00:00:56.568 + cat autorun-spdk.conf 00:00:56.568 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:56.568 SPDK_TEST_NVMF=1 00:00:56.568 SPDK_TEST_NVME_CLI=1 00:00:56.568 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:56.568 SPDK_TEST_NVMF_NICS=e810 00:00:56.568 SPDK_RUN_ASAN=1 00:00:56.568 SPDK_RUN_UBSAN=1 00:00:56.568 NET_TYPE=phy 00:00:56.576 RUN_NIGHTLY=1 00:00:56.581 [Pipeline] readFile 00:00:56.605 [Pipeline] withEnv 00:00:56.607 [Pipeline] { 00:00:56.620 [Pipeline] sh 00:00:56.904 + set -ex 00:00:56.904 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:56.904 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:56.904 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:56.904 ++ SPDK_TEST_NVMF=1 00:00:56.904 ++ SPDK_TEST_NVME_CLI=1 00:00:56.904 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:56.904 ++ SPDK_TEST_NVMF_NICS=e810 00:00:56.904 ++ SPDK_RUN_ASAN=1 00:00:56.904 ++ SPDK_RUN_UBSAN=1 00:00:56.904 ++ NET_TYPE=phy 00:00:56.904 ++ RUN_NIGHTLY=1 00:00:56.904 + case $SPDK_TEST_NVMF_NICS in 00:00:56.904 + DRIVERS=ice 00:00:56.904 + [[ tcp == \r\d\m\a ]] 00:00:56.904 + [[ -n ice ]] 00:00:56.904 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:56.904 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:56.904 rmmod: ERROR: Module 
mlx5_ib is not currently loaded 00:00:56.904 rmmod: ERROR: Module irdma is not currently loaded 00:00:56.904 rmmod: ERROR: Module i40iw is not currently loaded 00:00:56.904 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:56.904 + true 00:00:56.904 + for D in $DRIVERS 00:00:56.904 + sudo modprobe ice 00:00:56.904 + exit 0 00:00:56.914 [Pipeline] } 00:00:56.929 [Pipeline] // withEnv 00:00:56.934 [Pipeline] } 00:00:56.948 [Pipeline] // stage 00:00:56.957 [Pipeline] catchError 00:00:56.959 [Pipeline] { 00:00:56.974 [Pipeline] timeout 00:00:56.975 Timeout set to expire in 1 hr 0 min 00:00:56.976 [Pipeline] { 00:00:56.991 [Pipeline] stage 00:00:56.993 [Pipeline] { (Tests) 00:00:57.008 [Pipeline] sh 00:00:57.291 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:57.292 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:57.292 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:57.292 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:57.292 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:57.292 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:57.292 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:57.292 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:57.292 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:57.292 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:57.292 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:57.292 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:57.292 + source /etc/os-release 00:00:57.292 ++ NAME='Fedora Linux' 00:00:57.292 ++ VERSION='39 (Cloud Edition)' 00:00:57.292 ++ ID=fedora 00:00:57.292 ++ VERSION_ID=39 00:00:57.292 ++ VERSION_CODENAME= 00:00:57.292 ++ PLATFORM_ID=platform:f39 00:00:57.292 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:00:57.292 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:57.292 ++ LOGO=fedora-logo-icon 00:00:57.292 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:00:57.292 ++ HOME_URL=https://fedoraproject.org/ 00:00:57.292 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:00:57.292 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:57.292 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:57.292 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:57.292 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:00:57.292 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:57.292 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:00:57.292 ++ SUPPORT_END=2024-11-12 00:00:57.292 ++ VARIANT='Cloud Edition' 00:00:57.292 ++ VARIANT_ID=cloud 00:00:57.292 + uname -a 00:00:57.292 Linux spdk-gp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:00:57.292 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:58.228 Hugepages 00:00:58.228 node hugesize free / total 00:00:58.228 node0 1048576kB 0 / 0 00:00:58.228 node0 2048kB 0 / 0 00:00:58.228 node1 1048576kB 0 / 0 00:00:58.228 node1 2048kB 0 / 0 00:00:58.228 00:00:58.228 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:58.228 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:00:58.228 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 
00:00:58.228 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:00:58.228 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:00:58.228 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:00:58.228 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:00:58.228 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:00:58.228 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:00:58.228 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:00:58.228 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:00:58.228 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:00:58.228 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:00:58.228 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:00:58.228 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:00:58.228 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:00:58.228 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:00:58.228 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:00:58.228 + rm -f /tmp/spdk-ld-path 00:00:58.228 + source autorun-spdk.conf 00:00:58.228 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:58.228 ++ SPDK_TEST_NVMF=1 00:00:58.228 ++ SPDK_TEST_NVME_CLI=1 00:00:58.228 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:58.228 ++ SPDK_TEST_NVMF_NICS=e810 00:00:58.228 ++ SPDK_RUN_ASAN=1 00:00:58.228 ++ SPDK_RUN_UBSAN=1 00:00:58.228 ++ NET_TYPE=phy 00:00:58.228 ++ RUN_NIGHTLY=1 00:00:58.228 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:58.228 + [[ -n '' ]] 00:00:58.228 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:58.228 + for M in /var/spdk/build-*-manifest.txt 00:00:58.228 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:00:58.228 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:58.228 + for M in /var/spdk/build-*-manifest.txt 00:00:58.228 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:58.228 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:58.228 + for M in /var/spdk/build-*-manifest.txt 00:00:58.228 + [[ -f 
/var/spdk/build-repo-manifest.txt ]] 00:00:58.228 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:58.228 ++ uname 00:00:58.228 + [[ Linux == \L\i\n\u\x ]] 00:00:58.228 + sudo dmesg -T 00:00:58.228 + sudo dmesg --clear 00:00:58.487 + dmesg_pid=2927975 00:00:58.487 + [[ Fedora Linux == FreeBSD ]] 00:00:58.487 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:58.487 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:58.487 + sudo dmesg -Tw 00:00:58.487 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:58.487 + [[ -x /usr/src/fio-static/fio ]] 00:00:58.487 + export FIO_BIN=/usr/src/fio-static/fio 00:00:58.487 + FIO_BIN=/usr/src/fio-static/fio 00:00:58.487 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:58.487 + [[ ! -v VFIO_QEMU_BIN ]] 00:00:58.487 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:58.487 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:58.487 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:58.487 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:58.487 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:58.487 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:58.487 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:58.487 Test configuration: 00:00:58.487 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:58.487 SPDK_TEST_NVMF=1 00:00:58.487 SPDK_TEST_NVME_CLI=1 00:00:58.487 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:58.487 SPDK_TEST_NVMF_NICS=e810 00:00:58.487 SPDK_RUN_ASAN=1 00:00:58.487 SPDK_RUN_UBSAN=1 00:00:58.487 NET_TYPE=phy 00:00:58.487 RUN_NIGHTLY=1 16:08:58 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:00:58.487 16:08:58 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:58.487 16:08:58 -- scripts/common.sh@15 -- $ 
shopt -s extglob 00:00:58.487 16:08:58 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:58.487 16:08:58 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:58.487 16:08:58 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:58.487 16:08:58 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:58.487 16:08:58 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:58.487 16:08:58 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:58.487 16:08:58 -- paths/export.sh@5 -- $ export PATH 00:00:58.487 16:08:58 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:58.487 16:08:58 -- common/autobuild_common.sh@478 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:58.487 16:08:58 -- common/autobuild_common.sh@479 -- $ date +%s 00:00:58.487 16:08:58 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727618938.XXXXXX 00:00:58.487 16:08:58 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727618938.WGTbbF 00:00:58.487 16:08:58 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:00:58.487 16:08:58 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']' 00:00:58.488 16:08:58 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:58.488 16:08:58 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:58.488 16:08:58 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:58.488 16:08:58 -- common/autobuild_common.sh@495 -- $ get_config_params 00:00:58.488 16:08:58 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:00:58.488 16:08:58 -- common/autotest_common.sh@10 -- $ set +x 00:00:58.488 16:08:58 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan 
--enable-asan --enable-coverage --with-ublk' 00:00:58.488 16:08:58 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:00:58.488 16:08:58 -- pm/common@17 -- $ local monitor 00:00:58.488 16:08:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:58.488 16:08:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:58.488 16:08:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:58.488 16:08:58 -- pm/common@21 -- $ date +%s 00:00:58.488 16:08:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:58.488 16:08:58 -- pm/common@21 -- $ date +%s 00:00:58.488 16:08:58 -- pm/common@25 -- $ sleep 1 00:00:58.488 16:08:58 -- pm/common@21 -- $ date +%s 00:00:58.488 16:08:58 -- pm/common@21 -- $ date +%s 00:00:58.488 16:08:58 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727618938 00:00:58.488 16:08:58 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727618938 00:00:58.488 16:08:58 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727618938 00:00:58.488 16:08:58 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727618938 00:00:58.488 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727618938_collect-vmstat.pm.log 00:00:58.488 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727618938_collect-cpu-load.pm.log 00:00:58.488 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727618938_collect-cpu-temp.pm.log 00:00:58.488 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727618938_collect-bmc-pm.bmc.pm.log 00:00:59.425 16:08:59 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:00:59.425 16:08:59 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:59.425 16:08:59 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:59.425 16:08:59 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:59.425 16:08:59 -- spdk/autobuild.sh@16 -- $ date -u 00:00:59.425 Sun Sep 29 02:08:59 PM UTC 2024 00:00:59.425 16:08:59 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:59.425 v25.01-pre-17-g09cc66129 00:00:59.425 16:08:59 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:00:59.425 16:08:59 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:00:59.425 16:08:59 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:00:59.425 16:08:59 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:00:59.425 16:08:59 -- common/autotest_common.sh@10 -- $ set +x 00:00:59.425 ************************************ 00:00:59.425 START TEST asan 00:00:59.425 ************************************ 00:00:59.425 16:08:59 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:00:59.425 using asan 00:00:59.425 00:00:59.425 real 0m0.000s 00:00:59.425 user 0m0.000s 00:00:59.425 sys 0m0.000s 00:00:59.425 16:08:59 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:00:59.425 16:08:59 asan -- common/autotest_common.sh@10 -- $ set +x 00:00:59.425 ************************************ 00:00:59.425 END TEST asan 00:00:59.425 ************************************ 00:00:59.425 16:08:59 -- spdk/autobuild.sh@23 -- $ 
'[' 1 -eq 1 ']' 00:00:59.425 16:08:59 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:59.425 16:08:59 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:00:59.425 16:08:59 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:00:59.425 16:08:59 -- common/autotest_common.sh@10 -- $ set +x 00:00:59.425 ************************************ 00:00:59.425 START TEST ubsan 00:00:59.425 ************************************ 00:00:59.425 16:08:59 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:00:59.425 using ubsan 00:00:59.425 00:00:59.425 real 0m0.000s 00:00:59.425 user 0m0.000s 00:00:59.425 sys 0m0.000s 00:00:59.425 16:08:59 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:00:59.425 16:08:59 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:59.425 ************************************ 00:00:59.425 END TEST ubsan 00:00:59.425 ************************************ 00:00:59.425 16:08:59 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:59.425 16:08:59 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:59.425 16:08:59 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:59.425 16:08:59 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:59.425 16:08:59 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:59.425 16:08:59 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:59.425 16:08:59 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:59.425 16:08:59 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:59.425 16:08:59 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared 00:00:59.685 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:59.685 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:59.943 Using 
'verbs' RDMA provider 00:01:10.485 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:20.452 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:20.452 Creating mk/config.mk...done. 00:01:20.452 Creating mk/cc.flags.mk...done. 00:01:20.452 Type 'make' to build. 00:01:20.452 16:09:20 -- spdk/autobuild.sh@70 -- $ run_test make make -j48 00:01:20.452 16:09:20 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:20.452 16:09:20 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:20.452 16:09:20 -- common/autotest_common.sh@10 -- $ set +x 00:01:20.452 ************************************ 00:01:20.452 START TEST make 00:01:20.452 ************************************ 00:01:20.452 16:09:20 make -- common/autotest_common.sh@1125 -- $ make -j48 00:01:20.452 make[1]: Nothing to be done for 'all'. 00:01:30.455 The Meson build system 00:01:30.455 Version: 1.5.0 00:01:30.455 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:30.455 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:30.455 Build type: native build 00:01:30.455 Program cat found: YES (/usr/bin/cat) 00:01:30.455 Project name: DPDK 00:01:30.455 Project version: 24.03.0 00:01:30.455 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:30.455 C linker for the host machine: cc ld.bfd 2.40-14 00:01:30.455 Host machine cpu family: x86_64 00:01:30.455 Host machine cpu: x86_64 00:01:30.455 Message: ## Building in Developer Mode ## 00:01:30.455 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:30.455 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:30.455 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 
00:01:30.455 Program python3 found: YES (/usr/bin/python3) 00:01:30.455 Program cat found: YES (/usr/bin/cat) 00:01:30.455 Compiler for C supports arguments -march=native: YES 00:01:30.455 Checking for size of "void *" : 8 00:01:30.455 Checking for size of "void *" : 8 (cached) 00:01:30.455 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:30.455 Library m found: YES 00:01:30.455 Library numa found: YES 00:01:30.455 Has header "numaif.h" : YES 00:01:30.455 Library fdt found: NO 00:01:30.455 Library execinfo found: NO 00:01:30.455 Has header "execinfo.h" : YES 00:01:30.455 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:30.455 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:30.455 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:30.455 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:30.455 Run-time dependency openssl found: YES 3.1.1 00:01:30.455 Run-time dependency libpcap found: YES 1.10.4 00:01:30.455 Has header "pcap.h" with dependency libpcap: YES 00:01:30.455 Compiler for C supports arguments -Wcast-qual: YES 00:01:30.455 Compiler for C supports arguments -Wdeprecated: YES 00:01:30.455 Compiler for C supports arguments -Wformat: YES 00:01:30.455 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:30.455 Compiler for C supports arguments -Wformat-security: NO 00:01:30.455 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:30.455 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:30.455 Compiler for C supports arguments -Wnested-externs: YES 00:01:30.455 Compiler for C supports arguments -Wold-style-definition: YES 00:01:30.455 Compiler for C supports arguments -Wpointer-arith: YES 00:01:30.455 Compiler for C supports arguments -Wsign-compare: YES 00:01:30.455 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:30.455 Compiler for C supports arguments -Wundef: YES 00:01:30.455 Compiler for C supports arguments -Wwrite-strings: YES 
00:01:30.455 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:30.455 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:30.455 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:30.455 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:30.455 Program objdump found: YES (/usr/bin/objdump) 00:01:30.455 Compiler for C supports arguments -mavx512f: YES 00:01:30.456 Checking if "AVX512 checking" compiles: YES 00:01:30.456 Fetching value of define "__SSE4_2__" : 1 00:01:30.456 Fetching value of define "__AES__" : 1 00:01:30.456 Fetching value of define "__AVX__" : 1 00:01:30.456 Fetching value of define "__AVX2__" : (undefined) 00:01:30.456 Fetching value of define "__AVX512BW__" : (undefined) 00:01:30.456 Fetching value of define "__AVX512CD__" : (undefined) 00:01:30.456 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:30.456 Fetching value of define "__AVX512F__" : (undefined) 00:01:30.456 Fetching value of define "__AVX512VL__" : (undefined) 00:01:30.456 Fetching value of define "__PCLMUL__" : 1 00:01:30.456 Fetching value of define "__RDRND__" : 1 00:01:30.456 Fetching value of define "__RDSEED__" : (undefined) 00:01:30.456 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:30.456 Fetching value of define "__znver1__" : (undefined) 00:01:30.456 Fetching value of define "__znver2__" : (undefined) 00:01:30.456 Fetching value of define "__znver3__" : (undefined) 00:01:30.456 Fetching value of define "__znver4__" : (undefined) 00:01:30.456 Library asan found: YES 00:01:30.456 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:30.456 Message: lib/log: Defining dependency "log" 00:01:30.456 Message: lib/kvargs: Defining dependency "kvargs" 00:01:30.456 Message: lib/telemetry: Defining dependency "telemetry" 00:01:30.456 Library rt found: YES 00:01:30.456 Checking for function "getentropy" : NO 00:01:30.456 Message: lib/eal: Defining dependency 
"eal" 00:01:30.456 Message: lib/ring: Defining dependency "ring" 00:01:30.456 Message: lib/rcu: Defining dependency "rcu" 00:01:30.456 Message: lib/mempool: Defining dependency "mempool" 00:01:30.456 Message: lib/mbuf: Defining dependency "mbuf" 00:01:30.456 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:30.456 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:30.456 Compiler for C supports arguments -mpclmul: YES 00:01:30.456 Compiler for C supports arguments -maes: YES 00:01:30.456 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:30.456 Compiler for C supports arguments -mavx512bw: YES 00:01:30.456 Compiler for C supports arguments -mavx512dq: YES 00:01:30.456 Compiler for C supports arguments -mavx512vl: YES 00:01:30.456 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:30.456 Compiler for C supports arguments -mavx2: YES 00:01:30.456 Compiler for C supports arguments -mavx: YES 00:01:30.456 Message: lib/net: Defining dependency "net" 00:01:30.456 Message: lib/meter: Defining dependency "meter" 00:01:30.456 Message: lib/ethdev: Defining dependency "ethdev" 00:01:30.456 Message: lib/pci: Defining dependency "pci" 00:01:30.456 Message: lib/cmdline: Defining dependency "cmdline" 00:01:30.456 Message: lib/hash: Defining dependency "hash" 00:01:30.456 Message: lib/timer: Defining dependency "timer" 00:01:30.456 Message: lib/compressdev: Defining dependency "compressdev" 00:01:30.456 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:30.456 Message: lib/dmadev: Defining dependency "dmadev" 00:01:30.456 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:30.456 Message: lib/power: Defining dependency "power" 00:01:30.456 Message: lib/reorder: Defining dependency "reorder" 00:01:30.456 Message: lib/security: Defining dependency "security" 00:01:30.456 Has header "linux/userfaultfd.h" : YES 00:01:30.456 Has header "linux/vduse.h" : YES 00:01:30.456 Message: lib/vhost: Defining dependency "vhost" 
00:01:30.456 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:30.456 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:30.456 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:30.456 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:30.456 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:30.456 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:30.456 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:30.456 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:30.456 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:30.456 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:30.456 Program doxygen found: YES (/usr/local/bin/doxygen)
00:01:30.456 Configuring doxy-api-html.conf using configuration
00:01:30.456 Configuring doxy-api-man.conf using configuration
00:01:30.456 Program mandb found: YES (/usr/bin/mandb)
00:01:30.456 Program sphinx-build found: NO
00:01:30.456 Configuring rte_build_config.h using configuration
00:01:30.456 Message:
00:01:30.456 =================
00:01:30.456 Applications Enabled
00:01:30.456 =================
00:01:30.456
00:01:30.456 apps:
00:01:30.456
00:01:30.456
00:01:30.456 Message:
00:01:30.456 =================
00:01:30.456 Libraries Enabled
00:01:30.456 =================
00:01:30.456
00:01:30.456 libs:
00:01:30.456 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:30.456 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:30.456 cryptodev, dmadev, power, reorder, security, vhost,
00:01:30.456
00:01:30.456 Message:
00:01:30.456 ===============
00:01:30.456 Drivers Enabled
00:01:30.456 ===============
00:01:30.456
00:01:30.456 common:
00:01:30.456
00:01:30.456 bus:
00:01:30.456 pci, vdev,
00:01:30.456 mempool:
00:01:30.456 ring,
00:01:30.456 dma:
00:01:30.456
00:01:30.456 net:
00:01:30.456
00:01:30.456 crypto:
00:01:30.456
00:01:30.456 compress:
00:01:30.456
00:01:30.456 vdpa:
00:01:30.456
00:01:30.456
00:01:30.456 Message:
00:01:30.456 =================
00:01:30.456 Content Skipped
00:01:30.456 =================
00:01:30.456
00:01:30.456 apps:
00:01:30.456 dumpcap: explicitly disabled via build config
00:01:30.456 graph: explicitly disabled via build config
00:01:30.456 pdump: explicitly disabled via build config
00:01:30.456 proc-info: explicitly disabled via build config
00:01:30.456 test-acl: explicitly disabled via build config
00:01:30.456 test-bbdev: explicitly disabled via build config
00:01:30.456 test-cmdline: explicitly disabled via build config
00:01:30.456 test-compress-perf: explicitly disabled via build config
00:01:30.456 test-crypto-perf: explicitly disabled via build config
00:01:30.456 test-dma-perf: explicitly disabled via build config
00:01:30.456 test-eventdev: explicitly disabled via build config
00:01:30.456 test-fib: explicitly disabled via build config
00:01:30.456 test-flow-perf: explicitly disabled via build config
00:01:30.456 test-gpudev: explicitly disabled via build config
00:01:30.456 test-mldev: explicitly disabled via build config
00:01:30.456 test-pipeline: explicitly disabled via build config
00:01:30.456 test-pmd: explicitly disabled via build config
00:01:30.456 test-regex: explicitly disabled via build config
00:01:30.456 test-sad: explicitly disabled via build config
00:01:30.456 test-security-perf: explicitly disabled via build config
00:01:30.456
00:01:30.456 libs:
00:01:30.456 argparse: explicitly disabled via build config
00:01:30.456 metrics: explicitly disabled via build config
00:01:30.456 acl: explicitly disabled via build config
00:01:30.456 bbdev: explicitly disabled via build config
00:01:30.456 bitratestats: explicitly disabled via build config
00:01:30.456 bpf: explicitly disabled via build config
00:01:30.456 cfgfile: explicitly disabled via build config
00:01:30.456 distributor: explicitly disabled via build config
00:01:30.456 efd: explicitly disabled via build config
00:01:30.456 eventdev: explicitly disabled via build config
00:01:30.456 dispatcher: explicitly disabled via build config
00:01:30.456 gpudev: explicitly disabled via build config
00:01:30.456 gro: explicitly disabled via build config
00:01:30.456 gso: explicitly disabled via build config
00:01:30.456 ip_frag: explicitly disabled via build config
00:01:30.456 jobstats: explicitly disabled via build config
00:01:30.457 latencystats: explicitly disabled via build config
00:01:30.457 lpm: explicitly disabled via build config
00:01:30.457 member: explicitly disabled via build config
00:01:30.457 pcapng: explicitly disabled via build config
00:01:30.457 rawdev: explicitly disabled via build config
00:01:30.457 regexdev: explicitly disabled via build config
00:01:30.457 mldev: explicitly disabled via build config
00:01:30.457 rib: explicitly disabled via build config
00:01:30.457 sched: explicitly disabled via build config
00:01:30.457 stack: explicitly disabled via build config
00:01:30.457 ipsec: explicitly disabled via build config
00:01:30.457 pdcp: explicitly disabled via build config
00:01:30.457 fib: explicitly disabled via build config
00:01:30.457 port: explicitly disabled via build config
00:01:30.457 pdump: explicitly disabled via build config
00:01:30.457 table: explicitly disabled via build config
00:01:30.457 pipeline: explicitly disabled via build config
00:01:30.457 graph: explicitly disabled via build config
00:01:30.457 node: explicitly disabled via build config
00:01:30.457
00:01:30.457 drivers:
00:01:30.457 common/cpt: not in enabled drivers build config
00:01:30.457 common/dpaax: not in enabled drivers build config
00:01:30.457 common/iavf: not in enabled drivers build config
00:01:30.457 common/idpf: not in enabled drivers build config
00:01:30.457 common/ionic: not in enabled drivers build config
00:01:30.457 common/mvep: not in enabled drivers build config
00:01:30.457 common/octeontx: not in enabled drivers build config
00:01:30.457 bus/auxiliary: not in enabled drivers build config
00:01:30.457 bus/cdx: not in enabled drivers build config
00:01:30.457 bus/dpaa: not in enabled drivers build config
00:01:30.457 bus/fslmc: not in enabled drivers build config
00:01:30.457 bus/ifpga: not in enabled drivers build config
00:01:30.457 bus/platform: not in enabled drivers build config
00:01:30.457 bus/uacce: not in enabled drivers build config
00:01:30.457 bus/vmbus: not in enabled drivers build config
00:01:30.457 common/cnxk: not in enabled drivers build config
00:01:30.457 common/mlx5: not in enabled drivers build config
00:01:30.457 common/nfp: not in enabled drivers build config
00:01:30.457 common/nitrox: not in enabled drivers build config
00:01:30.457 common/qat: not in enabled drivers build config
00:01:30.457 common/sfc_efx: not in enabled drivers build config
00:01:30.457 mempool/bucket: not in enabled drivers build config
00:01:30.457 mempool/cnxk: not in enabled drivers build config
00:01:30.457 mempool/dpaa: not in enabled drivers build config
00:01:30.457 mempool/dpaa2: not in enabled drivers build config
00:01:30.457 mempool/octeontx: not in enabled drivers build config
00:01:30.457 mempool/stack: not in enabled drivers build config
00:01:30.457 dma/cnxk: not in enabled drivers build config
00:01:30.457 dma/dpaa: not in enabled drivers build config
00:01:30.457 dma/dpaa2: not in enabled drivers build config
00:01:30.457 dma/hisilicon: not in enabled drivers build config
00:01:30.457 dma/idxd: not in enabled drivers build config
00:01:30.457 dma/ioat: not in enabled drivers build config
00:01:30.457 dma/skeleton: not in enabled drivers build config
00:01:30.457 net/af_packet: not in enabled drivers build config
00:01:30.457 net/af_xdp: not in enabled drivers build config
00:01:30.457 net/ark: not in enabled drivers build config
00:01:30.457 net/atlantic: not in enabled drivers build config
00:01:30.457 net/avp: not in enabled drivers build config
00:01:30.457 net/axgbe: not in enabled drivers build config
00:01:30.457 net/bnx2x: not in enabled drivers build config
00:01:30.457 net/bnxt: not in enabled drivers build config
00:01:30.457 net/bonding: not in enabled drivers build config
00:01:30.457 net/cnxk: not in enabled drivers build config
00:01:30.457 net/cpfl: not in enabled drivers build config
00:01:30.457 net/cxgbe: not in enabled drivers build config
00:01:30.457 net/dpaa: not in enabled drivers build config
00:01:30.457 net/dpaa2: not in enabled drivers build config
00:01:30.457 net/e1000: not in enabled drivers build config
00:01:30.457 net/ena: not in enabled drivers build config
00:01:30.457 net/enetc: not in enabled drivers build config
00:01:30.457 net/enetfec: not in enabled drivers build config
00:01:30.457 net/enic: not in enabled drivers build config
00:01:30.457 net/failsafe: not in enabled drivers build config
00:01:30.457 net/fm10k: not in enabled drivers build config
00:01:30.457 net/gve: not in enabled drivers build config
00:01:30.457 net/hinic: not in enabled drivers build config
00:01:30.457 net/hns3: not in enabled drivers build config
00:01:30.457 net/i40e: not in enabled drivers build config
00:01:30.457 net/iavf: not in enabled drivers build config
00:01:30.457 net/ice: not in enabled drivers build config
00:01:30.457 net/idpf: not in enabled drivers build config
00:01:30.457 net/igc: not in enabled drivers build config
00:01:30.457 net/ionic: not in enabled drivers build config
00:01:30.457 net/ipn3ke: not in enabled drivers build config
00:01:30.457 net/ixgbe: not in enabled drivers build config
00:01:30.457 net/mana: not in enabled drivers build config
00:01:30.457 net/memif: not in enabled drivers build config
00:01:30.457 net/mlx4: not in enabled drivers build config
00:01:30.457 net/mlx5: not in enabled drivers build config
00:01:30.457 net/mvneta: not in enabled drivers build config
00:01:30.457 net/mvpp2: not in enabled drivers build config
00:01:30.457 net/netvsc: not in enabled drivers build config
00:01:30.457 net/nfb: not in enabled drivers build config
00:01:30.457 net/nfp: not in enabled drivers build config
00:01:30.457 net/ngbe: not in enabled drivers build config
00:01:30.457 net/null: not in enabled drivers build config
00:01:30.457 net/octeontx: not in enabled drivers build config
00:01:30.457 net/octeon_ep: not in enabled drivers build config
00:01:30.457 net/pcap: not in enabled drivers build config
00:01:30.457 net/pfe: not in enabled drivers build config
00:01:30.457 net/qede: not in enabled drivers build config
00:01:30.457 net/ring: not in enabled drivers build config
00:01:30.457 net/sfc: not in enabled drivers build config
00:01:30.457 net/softnic: not in enabled drivers build config
00:01:30.457 net/tap: not in enabled drivers build config
00:01:30.457 net/thunderx: not in enabled drivers build config
00:01:30.457 net/txgbe: not in enabled drivers build config
00:01:30.457 net/vdev_netvsc: not in enabled drivers build config
00:01:30.457 net/vhost: not in enabled drivers build config
00:01:30.457 net/virtio: not in enabled drivers build config
00:01:30.457 net/vmxnet3: not in enabled drivers build config
00:01:30.457 raw/*: missing internal dependency, "rawdev"
00:01:30.457 crypto/armv8: not in enabled drivers build config
00:01:30.457 crypto/bcmfs: not in enabled drivers build config
00:01:30.457 crypto/caam_jr: not in enabled drivers build config
00:01:30.457 crypto/ccp: not in enabled drivers build config
00:01:30.457 crypto/cnxk: not in enabled drivers build config
00:01:30.457 crypto/dpaa_sec: not in enabled drivers build config
00:01:30.457 crypto/dpaa2_sec: not in enabled drivers build config
00:01:30.457 crypto/ipsec_mb: not in enabled drivers build config
00:01:30.457 crypto/mlx5: not in enabled drivers build config
00:01:30.457 crypto/mvsam: not in enabled drivers build config
00:01:30.457 crypto/nitrox: not in enabled drivers build config
00:01:30.457 crypto/null: not in enabled drivers build config
00:01:30.457 crypto/octeontx: not in enabled drivers build config
00:01:30.457 crypto/openssl: not in enabled drivers build config
00:01:30.457 crypto/scheduler: not in enabled drivers build config
00:01:30.457 crypto/uadk: not in enabled drivers build config
00:01:30.457 crypto/virtio: not in enabled drivers build config
00:01:30.457 compress/isal: not in enabled drivers build config
00:01:30.458 compress/mlx5: not in enabled drivers build config
00:01:30.458 compress/nitrox: not in enabled drivers build config
00:01:30.458 compress/octeontx: not in enabled drivers build config
00:01:30.458 compress/zlib: not in enabled drivers build config
00:01:30.458 regex/*: missing internal dependency, "regexdev"
00:01:30.458 ml/*: missing internal dependency, "mldev"
00:01:30.458 vdpa/ifc: not in enabled drivers build config
00:01:30.458 vdpa/mlx5: not in enabled drivers build config
00:01:30.458 vdpa/nfp: not in enabled drivers build config
00:01:30.458 vdpa/sfc: not in enabled drivers build config
00:01:30.458 event/*: missing internal dependency, "eventdev"
00:01:30.458 baseband/*: missing internal dependency, "bbdev"
00:01:30.458 gpu/*: missing internal dependency, "gpudev"
00:01:30.458
00:01:30.458
00:01:30.458 Build targets in project: 85
00:01:30.458
00:01:30.458 DPDK 24.03.0
00:01:30.458
00:01:30.458 User defined options
00:01:30.458 buildtype : debug
00:01:30.458 default_library : shared
00:01:30.458 libdir : lib
00:01:30.458 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:30.458 b_sanitize : address
00:01:30.458 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:30.458 c_link_args :
00:01:30.458 cpu_instruction_set: native
00:01:30.458 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev
00:01:30.458 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev
00:01:30.458 enable_docs : false
00:01:30.458 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:30.458 enable_kmods : false
00:01:30.458 max_lcores : 128
00:01:30.458 tests : false
00:01:30.458
00:01:30.458 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:30.458 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:30.458 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:30.458 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:30.458 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:30.458 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:30.458 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:30.458 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:30.458 [7/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:30.458 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:30.458 [9/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:30.458 [10/268] Linking static target lib/librte_kvargs.a
00:01:30.458 [11/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:30.458 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:30.458 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:30.458 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:30.458 [15/268] Linking static target lib/librte_log.a
00:01:30.458 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:31.039 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:31.039 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:31.040 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:31.040 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:31.040 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:31.040 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:31.040 [23/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:31.040 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:31.040 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:31.040 [26/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:31.040 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:31.040 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:31.040 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:31.040 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:31.040 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:31.040 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:31.040 [33/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:31.040 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:31.040 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:31.040 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:31.040 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:31.040 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:31.040 [39/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:31.040 [40/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:31.040 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:31.040 [42/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:31.040 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:31.040 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:31.040 [45/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:31.299 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:31.299 [47/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:31.299 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:31.299 [49/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:31.299 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:31.299 [51/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:31.299 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:31.299 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:31.299 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:31.299 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:31.299 [56/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:31.299 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:31.299 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:31.299 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:31.299 [60/268] Linking static target lib/librte_telemetry.a
00:01:31.299 [61/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:31.299 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:31.562 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:31.562 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:31.562 [65/268] Linking target lib/librte_log.so.24.1
00:01:31.562 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:31.824 [67/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:01:31.824 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:31.824 [69/268] Linking target lib/librte_kvargs.so.24.1
00:01:31.824 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:31.824 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:32.084 [72/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:32.084 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:32.084 [74/268] Linking static target lib/librte_pci.a
00:01:32.084 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:32.084 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:32.084 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:32.084 [78/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:32.084 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:32.084 [80/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:32.084 [81/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:32.084 [82/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:32.084 [83/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:32.084 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:32.084 [85/268] Linking static target lib/librte_meter.a
00:01:32.084 [86/268] Linking static target lib/librte_ring.a
00:01:32.084 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:32.084 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:32.084 [89/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:32.084 [90/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:32.084 [91/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:01:32.349 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:32.349 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:32.349 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:32.349 [95/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:32.349 [96/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:32.349 [97/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:32.349 [98/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:32.349 [99/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:32.349 [100/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:32.349 [101/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:32.349 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:32.349 [103/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:01:32.349 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:32.349 [105/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:32.349 [106/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:32.349 [107/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:32.349 [108/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:32.349 [109/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:32.349 [110/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:32.349 [111/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:32.613 [112/268] Linking target lib/librte_telemetry.so.24.1
00:01:32.613 [113/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:32.613 [114/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:32.613 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:32.613 [116/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:32.613 [117/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:32.613 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:32.613 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:32.613 [120/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:32.613 [121/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:32.613 [122/268] Linking static target lib/librte_mempool.a
00:01:32.613 [123/268] Linking static target lib/librte_rcu.a
00:01:32.613 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:32.872 [125/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:01:32.872 [126/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:32.872 [127/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:01:32.872 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:32.872 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:01:32.872 [130/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:32.872 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:32.872 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:01:32.872 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:32.872 [134/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:32.872 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:33.135 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:33.135 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:01:33.135 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:33.135 [139/268] Linking static target lib/librte_cmdline.a
00:01:33.135 [140/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:33.135 [141/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:33.135 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:01:33.135 [143/268] Linking static target lib/librte_eal.a
00:01:33.135 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:33.135 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:01:33.135 [146/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:01:33.398 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:01:33.398 [148/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:01:33.399 [149/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:01:33.399 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:33.399 [151/268] Linking static target lib/librte_timer.a
00:01:33.399 [152/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:01:33.399 [153/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:01:33.399 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:01:33.399 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:01:33.399 [156/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:01:33.399 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:01:33.658 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:01:33.658 [159/268] Linking static target lib/librte_dmadev.a
00:01:33.658 [160/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:01:33.916 [161/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:01:33.916 [162/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:01:33.916 [163/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:01:33.916 [164/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:01:33.916 [165/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:01:33.916 [166/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:01:33.916 [167/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:01:33.916 [168/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:33.916 [169/268] Linking static target lib/librte_net.a
00:01:34.175 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:01:34.175 [171/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:01:34.175 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:01:34.175 [173/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:01:34.175 [174/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:34.175 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:01:34.175 [176/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:01:34.175 [177/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:01:34.175 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:01:34.175 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:01:34.175 [180/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:01:34.175 [181/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:01:34.175 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:01:34.175 [183/268] Linking static target lib/librte_power.a
00:01:34.175 [184/268] Linking static target drivers/libtmp_rte_bus_vdev.a
00:01:34.175 [185/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:01:34.175 [186/268] Linking static target drivers/libtmp_rte_bus_pci.a
00:01:34.433 [187/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:01:34.433 [188/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:01:34.433 [189/268] Linking static target drivers/libtmp_rte_mempool_ring.a
00:01:34.433 [190/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:01:34.433 [191/268] Linking static target lib/librte_hash.a
00:01:34.433 [192/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:01:34.433 [193/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:01:34.433 [194/268] Linking static target lib/librte_compressdev.a
00:01:34.433 [195/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:34.433 [196/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:34.433 [197/268] Linking static target drivers/librte_bus_vdev.a
00:01:34.433 [198/268] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:01:34.433 [199/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:34.433 [200/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:34.433 [201/268] Linking static target drivers/librte_bus_pci.a
00:01:34.692 [202/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:01:34.692 [203/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:34.692 [204/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:34.692 [205/268] Linking static target drivers/librte_mempool_ring.a
00:01:34.692 [206/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:01:34.692 [207/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:34.950 [208/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:01:34.950 [209/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:01:34.950 [210/268] Linking static target lib/librte_reorder.a
00:01:34.950 [211/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:01:34.950 [212/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:01:34.950 [213/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:34.950 [214/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:35.208 [215/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:01:35.496 [216/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:01:35.496 [217/268] Linking static target lib/librte_security.a
00:01:36.062 [218/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:01:36.062 [219/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:01:36.628 [220/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:01:36.628 [221/268] Linking static target lib/librte_mbuf.a
00:01:36.886 [222/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:01:36.886 [223/268] Linking static target lib/librte_cryptodev.a
00:01:36.886 [224/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:01:37.819 [225/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:37.819 [226/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:01:38.077 [227/268] Linking static target lib/librte_ethdev.a
00:01:39.019 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:01:39.019 [229/268] Linking target lib/librte_eal.so.24.1
00:01:39.277 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:01:39.277 [231/268] Linking target lib/librte_meter.so.24.1
00:01:39.277 [232/268] Linking target lib/librte_pci.so.24.1
00:01:39.277 [233/268] Linking target lib/librte_ring.so.24.1
00:01:39.277 [234/268] Linking target lib/librte_timer.so.24.1
00:01:39.277 [235/268] Linking target drivers/librte_bus_vdev.so.24.1
00:01:39.277 [236/268] Linking target lib/librte_dmadev.so.24.1
00:01:39.534 [237/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:01:39.534 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:01:39.534 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:01:39.534 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:01:39.534 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:01:39.534 [242/268] Linking target lib/librte_rcu.so.24.1
00:01:39.534 [243/268] Linking target lib/librte_mempool.so.24.1
00:01:39.534 [244/268] Linking target drivers/librte_bus_pci.so.24.1
00:01:39.534 [245/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:01:39.534 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:01:39.534 [247/268] Linking target drivers/librte_mempool_ring.so.24.1
00:01:39.534 [248/268] Linking target lib/librte_mbuf.so.24.1
00:01:39.792 [249/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:01:39.792 [250/268] Linking target lib/librte_reorder.so.24.1
00:01:39.792 [251/268] Linking target lib/librte_compressdev.so.24.1
00:01:39.792 [252/268] Linking target lib/librte_net.so.24.1
00:01:39.792 [253/268] Linking target lib/librte_cryptodev.so.24.1
00:01:40.083 [254/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:01:40.083 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:01:40.083 [256/268] Linking target lib/librte_cmdline.so.24.1
00:01:40.083 [257/268] Linking target lib/librte_hash.so.24.1
00:01:40.083 [258/268] Linking target lib/librte_security.so.24.1
00:01:40.083 [259/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:01:41.041 [260/268] Compiling C object
lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:41.977 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.977 [262/268] Linking target lib/librte_ethdev.so.24.1 00:01:41.977 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:42.236 [264/268] Linking target lib/librte_power.so.24.1 00:02:08.770 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:08.770 [266/268] Linking static target lib/librte_vhost.a 00:02:08.770 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.770 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:08.770 INFO: autodetecting backend as ninja 00:02:08.770 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:02:08.770 CC lib/ut/ut.o 00:02:08.770 CC lib/ut_mock/mock.o 00:02:08.770 CC lib/log/log.o 00:02:08.770 CC lib/log/log_flags.o 00:02:08.770 CC lib/log/log_deprecated.o 00:02:08.770 LIB libspdk_log.a 00:02:08.770 LIB libspdk_ut.a 00:02:08.770 LIB libspdk_ut_mock.a 00:02:08.770 SO libspdk_ut.so.2.0 00:02:08.770 SO libspdk_log.so.7.0 00:02:08.770 SO libspdk_ut_mock.so.6.0 00:02:08.770 SYMLINK libspdk_ut.so 00:02:08.770 SYMLINK libspdk_ut_mock.so 00:02:08.770 SYMLINK libspdk_log.so 00:02:08.770 CXX lib/trace_parser/trace.o 00:02:08.770 CC lib/dma/dma.o 00:02:08.770 CC lib/ioat/ioat.o 00:02:08.770 CC lib/util/base64.o 00:02:08.770 CC lib/util/bit_array.o 00:02:08.770 CC lib/util/cpuset.o 00:02:08.770 CC lib/util/crc16.o 00:02:08.770 CC lib/util/crc32.o 00:02:08.770 CC lib/util/crc32c.o 00:02:08.770 CC lib/util/crc32_ieee.o 00:02:08.770 CC lib/util/crc64.o 00:02:08.770 CC lib/util/dif.o 00:02:08.770 CC lib/util/fd.o 00:02:08.770 CC lib/util/fd_group.o 00:02:08.770 CC lib/util/file.o 00:02:08.770 CC lib/util/hexlify.o 00:02:08.770 CC lib/util/iov.o 00:02:08.770 CC 
lib/util/math.o 00:02:08.770 CC lib/util/net.o 00:02:08.770 CC lib/util/pipe.o 00:02:08.770 CC lib/util/strerror_tls.o 00:02:08.770 CC lib/util/string.o 00:02:08.770 CC lib/util/xor.o 00:02:08.770 CC lib/util/uuid.o 00:02:08.770 CC lib/util/zipf.o 00:02:08.770 CC lib/util/md5.o 00:02:08.770 CC lib/vfio_user/host/vfio_user_pci.o 00:02:08.770 CC lib/vfio_user/host/vfio_user.o 00:02:08.770 LIB libspdk_dma.a 00:02:08.770 SO libspdk_dma.so.5.0 00:02:08.770 SYMLINK libspdk_dma.so 00:02:08.770 LIB libspdk_ioat.a 00:02:08.770 SO libspdk_ioat.so.7.0 00:02:08.770 SYMLINK libspdk_ioat.so 00:02:08.770 LIB libspdk_vfio_user.a 00:02:08.770 SO libspdk_vfio_user.so.5.0 00:02:08.770 SYMLINK libspdk_vfio_user.so 00:02:08.770 LIB libspdk_util.a 00:02:09.028 SO libspdk_util.so.10.0 00:02:09.028 SYMLINK libspdk_util.so 00:02:09.286 LIB libspdk_trace_parser.a 00:02:09.286 SO libspdk_trace_parser.so.6.0 00:02:09.286 CC lib/rdma_provider/common.o 00:02:09.286 CC lib/rdma_utils/rdma_utils.o 00:02:09.286 CC lib/vmd/vmd.o 00:02:09.286 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:09.286 CC lib/conf/conf.o 00:02:09.286 CC lib/vmd/led.o 00:02:09.286 CC lib/env_dpdk/env.o 00:02:09.286 CC lib/env_dpdk/memory.o 00:02:09.286 CC lib/env_dpdk/pci.o 00:02:09.286 CC lib/env_dpdk/init.o 00:02:09.286 CC lib/env_dpdk/threads.o 00:02:09.286 CC lib/idxd/idxd.o 00:02:09.286 CC lib/json/json_parse.o 00:02:09.286 CC lib/env_dpdk/pci_ioat.o 00:02:09.286 CC lib/env_dpdk/pci_virtio.o 00:02:09.286 CC lib/json/json_util.o 00:02:09.286 CC lib/idxd/idxd_user.o 00:02:09.286 CC lib/idxd/idxd_kernel.o 00:02:09.286 CC lib/json/json_write.o 00:02:09.286 CC lib/env_dpdk/pci_vmd.o 00:02:09.286 CC lib/env_dpdk/pci_idxd.o 00:02:09.286 CC lib/env_dpdk/pci_event.o 00:02:09.286 CC lib/env_dpdk/sigbus_handler.o 00:02:09.286 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:09.286 CC lib/env_dpdk/pci_dpdk.o 00:02:09.286 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:09.286 SYMLINK libspdk_trace_parser.so 00:02:09.544 LIB libspdk_rdma_provider.a 
00:02:09.544 SO libspdk_rdma_provider.so.6.0 00:02:09.544 SYMLINK libspdk_rdma_provider.so 00:02:09.544 LIB libspdk_rdma_utils.a 00:02:09.544 LIB libspdk_json.a 00:02:09.544 SO libspdk_rdma_utils.so.1.0 00:02:09.544 LIB libspdk_conf.a 00:02:09.544 SO libspdk_conf.so.6.0 00:02:09.544 SO libspdk_json.so.6.0 00:02:09.544 SYMLINK libspdk_rdma_utils.so 00:02:09.802 SYMLINK libspdk_conf.so 00:02:09.802 SYMLINK libspdk_json.so 00:02:09.802 CC lib/jsonrpc/jsonrpc_server.o 00:02:09.802 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:09.802 CC lib/jsonrpc/jsonrpc_client.o 00:02:09.802 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:10.060 LIB libspdk_idxd.a 00:02:10.060 SO libspdk_idxd.so.12.1 00:02:10.060 LIB libspdk_jsonrpc.a 00:02:10.060 SYMLINK libspdk_idxd.so 00:02:10.318 SO libspdk_jsonrpc.so.6.0 00:02:10.318 LIB libspdk_vmd.a 00:02:10.318 SYMLINK libspdk_jsonrpc.so 00:02:10.318 SO libspdk_vmd.so.6.0 00:02:10.318 SYMLINK libspdk_vmd.so 00:02:10.318 CC lib/rpc/rpc.o 00:02:10.576 LIB libspdk_rpc.a 00:02:10.576 SO libspdk_rpc.so.6.0 00:02:10.834 SYMLINK libspdk_rpc.so 00:02:10.834 CC lib/notify/notify.o 00:02:10.834 CC lib/notify/notify_rpc.o 00:02:10.834 CC lib/trace/trace.o 00:02:10.834 CC lib/trace/trace_flags.o 00:02:10.834 CC lib/keyring/keyring.o 00:02:10.834 CC lib/keyring/keyring_rpc.o 00:02:10.834 CC lib/trace/trace_rpc.o 00:02:11.091 LIB libspdk_notify.a 00:02:11.091 SO libspdk_notify.so.6.0 00:02:11.091 SYMLINK libspdk_notify.so 00:02:11.091 LIB libspdk_keyring.a 00:02:11.091 SO libspdk_keyring.so.2.0 00:02:11.091 LIB libspdk_trace.a 00:02:11.349 SO libspdk_trace.so.11.0 00:02:11.349 SYMLINK libspdk_keyring.so 00:02:11.349 SYMLINK libspdk_trace.so 00:02:11.349 CC lib/thread/thread.o 00:02:11.349 CC lib/thread/iobuf.o 00:02:11.349 CC lib/sock/sock.o 00:02:11.349 CC lib/sock/sock_rpc.o 00:02:11.916 LIB libspdk_sock.a 00:02:11.916 SO libspdk_sock.so.10.0 00:02:11.916 SYMLINK libspdk_sock.so 00:02:12.174 LIB libspdk_env_dpdk.a 00:02:12.174 CC lib/nvme/nvme_ctrlr_cmd.o 
00:02:12.174 CC lib/nvme/nvme_ctrlr.o 00:02:12.174 CC lib/nvme/nvme_fabric.o 00:02:12.174 CC lib/nvme/nvme_ns_cmd.o 00:02:12.174 CC lib/nvme/nvme_ns.o 00:02:12.174 CC lib/nvme/nvme_pcie_common.o 00:02:12.174 CC lib/nvme/nvme_pcie.o 00:02:12.174 CC lib/nvme/nvme_qpair.o 00:02:12.174 CC lib/nvme/nvme.o 00:02:12.174 CC lib/nvme/nvme_quirks.o 00:02:12.174 CC lib/nvme/nvme_transport.o 00:02:12.174 CC lib/nvme/nvme_discovery.o 00:02:12.174 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:12.174 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:12.174 CC lib/nvme/nvme_tcp.o 00:02:12.174 CC lib/nvme/nvme_opal.o 00:02:12.174 CC lib/nvme/nvme_io_msg.o 00:02:12.174 CC lib/nvme/nvme_poll_group.o 00:02:12.174 CC lib/nvme/nvme_zns.o 00:02:12.174 CC lib/nvme/nvme_stubs.o 00:02:12.174 CC lib/nvme/nvme_auth.o 00:02:12.174 CC lib/nvme/nvme_cuse.o 00:02:12.174 CC lib/nvme/nvme_rdma.o 00:02:12.174 SO libspdk_env_dpdk.so.15.0 00:02:12.433 SYMLINK libspdk_env_dpdk.so 00:02:13.808 LIB libspdk_thread.a 00:02:13.808 SO libspdk_thread.so.10.1 00:02:13.808 SYMLINK libspdk_thread.so 00:02:13.808 CC lib/init/json_config.o 00:02:13.808 CC lib/blob/blobstore.o 00:02:13.808 CC lib/virtio/virtio.o 00:02:13.808 CC lib/accel/accel.o 00:02:13.808 CC lib/fsdev/fsdev.o 00:02:13.808 CC lib/blob/request.o 00:02:13.808 CC lib/virtio/virtio_vhost_user.o 00:02:13.808 CC lib/init/subsystem.o 00:02:13.808 CC lib/fsdev/fsdev_io.o 00:02:13.808 CC lib/accel/accel_rpc.o 00:02:13.808 CC lib/virtio/virtio_vfio_user.o 00:02:13.808 CC lib/blob/zeroes.o 00:02:13.808 CC lib/init/subsystem_rpc.o 00:02:13.808 CC lib/fsdev/fsdev_rpc.o 00:02:13.808 CC lib/virtio/virtio_pci.o 00:02:13.808 CC lib/accel/accel_sw.o 00:02:13.808 CC lib/init/rpc.o 00:02:13.808 CC lib/blob/blob_bs_dev.o 00:02:14.066 LIB libspdk_init.a 00:02:14.324 SO libspdk_init.so.6.0 00:02:14.324 SYMLINK libspdk_init.so 00:02:14.324 LIB libspdk_virtio.a 00:02:14.324 SO libspdk_virtio.so.7.0 00:02:14.324 SYMLINK libspdk_virtio.so 00:02:14.324 CC lib/event/app.o 00:02:14.324 CC 
lib/event/reactor.o 00:02:14.324 CC lib/event/log_rpc.o 00:02:14.324 CC lib/event/app_rpc.o 00:02:14.324 CC lib/event/scheduler_static.o 00:02:14.891 LIB libspdk_fsdev.a 00:02:14.891 SO libspdk_fsdev.so.1.0 00:02:14.891 SYMLINK libspdk_fsdev.so 00:02:14.891 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:15.149 LIB libspdk_event.a 00:02:15.149 SO libspdk_event.so.14.0 00:02:15.149 SYMLINK libspdk_event.so 00:02:15.149 LIB libspdk_nvme.a 00:02:15.409 SO libspdk_nvme.so.14.0 00:02:15.409 LIB libspdk_accel.a 00:02:15.409 SO libspdk_accel.so.16.0 00:02:15.409 SYMLINK libspdk_accel.so 00:02:15.669 SYMLINK libspdk_nvme.so 00:02:15.669 CC lib/bdev/bdev.o 00:02:15.669 CC lib/bdev/bdev_rpc.o 00:02:15.669 CC lib/bdev/bdev_zone.o 00:02:15.669 CC lib/bdev/part.o 00:02:15.669 CC lib/bdev/scsi_nvme.o 00:02:15.928 LIB libspdk_fuse_dispatcher.a 00:02:15.928 SO libspdk_fuse_dispatcher.so.1.0 00:02:15.928 SYMLINK libspdk_fuse_dispatcher.so 00:02:18.461 LIB libspdk_blob.a 00:02:18.461 SO libspdk_blob.so.11.0 00:02:18.461 SYMLINK libspdk_blob.so 00:02:18.461 CC lib/blobfs/blobfs.o 00:02:18.461 CC lib/lvol/lvol.o 00:02:18.461 CC lib/blobfs/tree.o 00:02:19.394 LIB libspdk_bdev.a 00:02:19.394 SO libspdk_bdev.so.16.0 00:02:19.394 SYMLINK libspdk_bdev.so 00:02:19.660 LIB libspdk_blobfs.a 00:02:19.660 CC lib/scsi/dev.o 00:02:19.660 CC lib/nbd/nbd.o 00:02:19.660 CC lib/ublk/ublk.o 00:02:19.660 CC lib/nvmf/ctrlr.o 00:02:19.660 CC lib/ftl/ftl_core.o 00:02:19.660 CC lib/ublk/ublk_rpc.o 00:02:19.660 CC lib/nbd/nbd_rpc.o 00:02:19.660 CC lib/scsi/lun.o 00:02:19.660 CC lib/nvmf/ctrlr_discovery.o 00:02:19.660 SO libspdk_blobfs.so.10.0 00:02:19.660 CC lib/ftl/ftl_init.o 00:02:19.660 CC lib/nvmf/ctrlr_bdev.o 00:02:19.660 CC lib/scsi/port.o 00:02:19.660 CC lib/ftl/ftl_layout.o 00:02:19.660 CC lib/nvmf/subsystem.o 00:02:19.660 CC lib/scsi/scsi.o 00:02:19.660 CC lib/nvmf/nvmf.o 00:02:19.660 CC lib/ftl/ftl_debug.o 00:02:19.660 CC lib/scsi/scsi_bdev.o 00:02:19.660 CC lib/nvmf/nvmf_rpc.o 00:02:19.660 CC 
lib/ftl/ftl_io.o 00:02:19.660 CC lib/scsi/scsi_pr.o 00:02:19.660 CC lib/ftl/ftl_sb.o 00:02:19.660 CC lib/ftl/ftl_l2p.o 00:02:19.660 CC lib/scsi/task.o 00:02:19.660 CC lib/nvmf/transport.o 00:02:19.660 CC lib/scsi/scsi_rpc.o 00:02:19.660 CC lib/ftl/ftl_l2p_flat.o 00:02:19.660 CC lib/nvmf/tcp.o 00:02:19.660 CC lib/ftl/ftl_nv_cache.o 00:02:19.660 CC lib/ftl/ftl_band.o 00:02:19.660 CC lib/nvmf/stubs.o 00:02:19.660 CC lib/nvmf/mdns_server.o 00:02:19.660 CC lib/ftl/ftl_band_ops.o 00:02:19.660 CC lib/ftl/ftl_writer.o 00:02:19.660 CC lib/nvmf/rdma.o 00:02:19.660 CC lib/ftl/ftl_rq.o 00:02:19.660 CC lib/nvmf/auth.o 00:02:19.660 CC lib/ftl/ftl_reloc.o 00:02:19.660 CC lib/ftl/ftl_l2p_cache.o 00:02:19.660 CC lib/ftl/ftl_p2l.o 00:02:19.660 CC lib/ftl/ftl_p2l_log.o 00:02:19.660 CC lib/ftl/mngt/ftl_mngt.o 00:02:19.660 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:19.660 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:19.660 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:19.660 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:19.660 SYMLINK libspdk_blobfs.so 00:02:19.660 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:19.922 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:19.922 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:19.922 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:19.922 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:19.922 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:19.922 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:19.922 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:19.922 CC lib/ftl/utils/ftl_conf.o 00:02:20.187 CC lib/ftl/utils/ftl_md.o 00:02:20.187 CC lib/ftl/utils/ftl_mempool.o 00:02:20.187 CC lib/ftl/utils/ftl_bitmap.o 00:02:20.187 CC lib/ftl/utils/ftl_property.o 00:02:20.187 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:20.187 LIB libspdk_lvol.a 00:02:20.187 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:20.187 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:20.187 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:20.187 SO libspdk_lvol.so.10.0 00:02:20.187 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:20.187 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:20.187 
SYMLINK libspdk_lvol.so 00:02:20.187 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:20.445 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:20.445 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:20.445 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:20.445 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:20.445 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:20.445 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:20.445 CC lib/ftl/base/ftl_base_dev.o 00:02:20.445 CC lib/ftl/base/ftl_base_bdev.o 00:02:20.445 CC lib/ftl/ftl_trace.o 00:02:20.704 LIB libspdk_nbd.a 00:02:20.704 SO libspdk_nbd.so.7.0 00:02:20.704 SYMLINK libspdk_nbd.so 00:02:20.704 LIB libspdk_scsi.a 00:02:20.704 SO libspdk_scsi.so.9.0 00:02:20.962 SYMLINK libspdk_scsi.so 00:02:20.962 LIB libspdk_ublk.a 00:02:20.962 SO libspdk_ublk.so.3.0 00:02:20.962 SYMLINK libspdk_ublk.so 00:02:20.962 CC lib/iscsi/conn.o 00:02:20.962 CC lib/vhost/vhost.o 00:02:20.962 CC lib/iscsi/init_grp.o 00:02:20.962 CC lib/vhost/vhost_rpc.o 00:02:20.962 CC lib/iscsi/iscsi.o 00:02:20.962 CC lib/iscsi/param.o 00:02:20.962 CC lib/vhost/vhost_scsi.o 00:02:20.962 CC lib/iscsi/portal_grp.o 00:02:20.962 CC lib/vhost/vhost_blk.o 00:02:20.962 CC lib/iscsi/tgt_node.o 00:02:20.962 CC lib/iscsi/iscsi_subsystem.o 00:02:20.962 CC lib/vhost/rte_vhost_user.o 00:02:20.962 CC lib/iscsi/iscsi_rpc.o 00:02:20.962 CC lib/iscsi/task.o 00:02:21.528 LIB libspdk_ftl.a 00:02:21.528 SO libspdk_ftl.so.9.0 00:02:21.786 SYMLINK libspdk_ftl.so 00:02:22.352 LIB libspdk_vhost.a 00:02:22.610 SO libspdk_vhost.so.8.0 00:02:22.610 SYMLINK libspdk_vhost.so 00:02:22.868 LIB libspdk_iscsi.a 00:02:22.868 SO libspdk_iscsi.so.8.0 00:02:23.126 SYMLINK libspdk_iscsi.so 00:02:23.126 LIB libspdk_nvmf.a 00:02:23.126 SO libspdk_nvmf.so.19.0 00:02:23.384 SYMLINK libspdk_nvmf.so 00:02:23.643 CC module/env_dpdk/env_dpdk_rpc.o 00:02:23.901 CC module/blob/bdev/blob_bdev.o 00:02:23.901 CC module/accel/error/accel_error.o 00:02:23.901 CC module/accel/iaa/accel_iaa.o 00:02:23.901 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:23.901 CC 
module/accel/ioat/accel_ioat.o 00:02:23.901 CC module/sock/posix/posix.o 00:02:23.901 CC module/accel/error/accel_error_rpc.o 00:02:23.901 CC module/accel/iaa/accel_iaa_rpc.o 00:02:23.901 CC module/accel/ioat/accel_ioat_rpc.o 00:02:23.901 CC module/keyring/linux/keyring.o 00:02:23.901 CC module/keyring/linux/keyring_rpc.o 00:02:23.901 CC module/accel/dsa/accel_dsa.o 00:02:23.901 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:23.901 CC module/accel/dsa/accel_dsa_rpc.o 00:02:23.901 CC module/fsdev/aio/fsdev_aio.o 00:02:23.901 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:23.901 CC module/fsdev/aio/linux_aio_mgr.o 00:02:23.901 CC module/scheduler/gscheduler/gscheduler.o 00:02:23.901 CC module/keyring/file/keyring.o 00:02:23.901 CC module/keyring/file/keyring_rpc.o 00:02:23.901 LIB libspdk_env_dpdk_rpc.a 00:02:23.901 SO libspdk_env_dpdk_rpc.so.6.0 00:02:23.901 SYMLINK libspdk_env_dpdk_rpc.so 00:02:23.901 LIB libspdk_keyring_linux.a 00:02:23.901 LIB libspdk_scheduler_gscheduler.a 00:02:23.901 SO libspdk_keyring_linux.so.1.0 00:02:23.901 LIB libspdk_scheduler_dpdk_governor.a 00:02:23.901 LIB libspdk_keyring_file.a 00:02:24.158 SO libspdk_scheduler_gscheduler.so.4.0 00:02:24.158 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:24.158 SO libspdk_keyring_file.so.2.0 00:02:24.158 LIB libspdk_accel_error.a 00:02:24.158 LIB libspdk_accel_ioat.a 00:02:24.158 SO libspdk_accel_error.so.2.0 00:02:24.158 SO libspdk_accel_ioat.so.6.0 00:02:24.158 LIB libspdk_scheduler_dynamic.a 00:02:24.158 SYMLINK libspdk_keyring_linux.so 00:02:24.158 LIB libspdk_accel_iaa.a 00:02:24.158 SYMLINK libspdk_scheduler_gscheduler.so 00:02:24.158 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:24.158 SO libspdk_scheduler_dynamic.so.4.0 00:02:24.158 SYMLINK libspdk_keyring_file.so 00:02:24.158 SO libspdk_accel_iaa.so.3.0 00:02:24.158 SYMLINK libspdk_accel_error.so 00:02:24.158 SYMLINK libspdk_accel_ioat.so 00:02:24.158 SYMLINK libspdk_scheduler_dynamic.so 00:02:24.158 SYMLINK libspdk_accel_iaa.so 
00:02:24.158 LIB libspdk_blob_bdev.a 00:02:24.158 SO libspdk_blob_bdev.so.11.0 00:02:24.158 LIB libspdk_accel_dsa.a 00:02:24.158 SO libspdk_accel_dsa.so.5.0 00:02:24.158 SYMLINK libspdk_blob_bdev.so 00:02:24.419 SYMLINK libspdk_accel_dsa.so 00:02:24.419 CC module/bdev/error/vbdev_error.o 00:02:24.419 CC module/bdev/delay/vbdev_delay.o 00:02:24.419 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:24.419 CC module/bdev/error/vbdev_error_rpc.o 00:02:24.419 CC module/bdev/gpt/gpt.o 00:02:24.419 CC module/blobfs/bdev/blobfs_bdev.o 00:02:24.419 CC module/bdev/gpt/vbdev_gpt.o 00:02:24.419 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:24.419 CC module/bdev/lvol/vbdev_lvol.o 00:02:24.419 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:24.419 CC module/bdev/null/bdev_null.o 00:02:24.419 CC module/bdev/passthru/vbdev_passthru.o 00:02:24.419 CC module/bdev/null/bdev_null_rpc.o 00:02:24.419 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:24.419 CC module/bdev/ftl/bdev_ftl.o 00:02:24.419 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:24.419 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:24.419 CC module/bdev/nvme/bdev_nvme.o 00:02:24.419 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:24.419 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:24.419 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:24.419 CC module/bdev/malloc/bdev_malloc.o 00:02:24.419 CC module/bdev/nvme/nvme_rpc.o 00:02:24.419 CC module/bdev/split/vbdev_split.o 00:02:24.419 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:24.419 CC module/bdev/aio/bdev_aio.o 00:02:24.419 CC module/bdev/nvme/bdev_mdns_client.o 00:02:24.419 CC module/bdev/aio/bdev_aio_rpc.o 00:02:24.419 CC module/bdev/split/vbdev_split_rpc.o 00:02:24.419 CC module/bdev/iscsi/bdev_iscsi.o 00:02:24.419 CC module/bdev/raid/bdev_raid.o 00:02:24.419 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:24.419 CC module/bdev/nvme/vbdev_opal.o 00:02:24.419 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:24.419 CC module/bdev/raid/bdev_raid_rpc.o 00:02:24.419 CC 
module/bdev/nvme/vbdev_opal_rpc.o 00:02:24.419 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:24.419 CC module/bdev/raid/bdev_raid_sb.o 00:02:24.419 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:24.419 CC module/bdev/raid/raid0.o 00:02:24.419 CC module/bdev/raid/raid1.o 00:02:24.419 CC module/bdev/raid/concat.o 00:02:24.989 LIB libspdk_fsdev_aio.a 00:02:24.989 SO libspdk_fsdev_aio.so.1.0 00:02:24.989 LIB libspdk_blobfs_bdev.a 00:02:24.989 SO libspdk_blobfs_bdev.so.6.0 00:02:24.989 LIB libspdk_bdev_split.a 00:02:24.989 SYMLINK libspdk_fsdev_aio.so 00:02:24.989 SO libspdk_bdev_split.so.6.0 00:02:24.989 SYMLINK libspdk_blobfs_bdev.so 00:02:24.989 LIB libspdk_bdev_gpt.a 00:02:24.989 LIB libspdk_bdev_null.a 00:02:24.989 SYMLINK libspdk_bdev_split.so 00:02:24.989 SO libspdk_bdev_gpt.so.6.0 00:02:24.989 SO libspdk_bdev_null.so.6.0 00:02:24.989 LIB libspdk_sock_posix.a 00:02:24.989 LIB libspdk_bdev_error.a 00:02:24.989 LIB libspdk_bdev_delay.a 00:02:24.989 LIB libspdk_bdev_passthru.a 00:02:24.989 SO libspdk_sock_posix.so.6.0 00:02:24.989 SO libspdk_bdev_delay.so.6.0 00:02:24.989 SO libspdk_bdev_error.so.6.0 00:02:24.989 SO libspdk_bdev_passthru.so.6.0 00:02:25.245 SYMLINK libspdk_bdev_gpt.so 00:02:25.245 LIB libspdk_bdev_aio.a 00:02:25.245 SYMLINK libspdk_bdev_null.so 00:02:25.245 LIB libspdk_bdev_zone_block.a 00:02:25.245 LIB libspdk_bdev_ftl.a 00:02:25.245 SO libspdk_bdev_aio.so.6.0 00:02:25.245 LIB libspdk_bdev_iscsi.a 00:02:25.245 SO libspdk_bdev_zone_block.so.6.0 00:02:25.245 SYMLINK libspdk_bdev_error.so 00:02:25.245 SYMLINK libspdk_bdev_delay.so 00:02:25.245 SO libspdk_bdev_ftl.so.6.0 00:02:25.245 SYMLINK libspdk_bdev_passthru.so 00:02:25.245 SYMLINK libspdk_sock_posix.so 00:02:25.245 SO libspdk_bdev_iscsi.so.6.0 00:02:25.245 SYMLINK libspdk_bdev_aio.so 00:02:25.245 SYMLINK libspdk_bdev_zone_block.so 00:02:25.245 SYMLINK libspdk_bdev_ftl.so 00:02:25.245 SYMLINK libspdk_bdev_iscsi.so 00:02:25.245 LIB libspdk_bdev_malloc.a 00:02:25.245 SO 
libspdk_bdev_malloc.so.6.0 00:02:25.245 SYMLINK libspdk_bdev_malloc.so 00:02:25.503 LIB libspdk_bdev_lvol.a 00:02:25.503 LIB libspdk_bdev_virtio.a 00:02:25.503 SO libspdk_bdev_lvol.so.6.0 00:02:25.503 SO libspdk_bdev_virtio.so.6.0 00:02:25.503 SYMLINK libspdk_bdev_lvol.so 00:02:25.503 SYMLINK libspdk_bdev_virtio.so 00:02:26.068 LIB libspdk_bdev_raid.a 00:02:26.068 SO libspdk_bdev_raid.so.6.0 00:02:26.327 SYMLINK libspdk_bdev_raid.so 00:02:27.706 LIB libspdk_bdev_nvme.a 00:02:27.706 SO libspdk_bdev_nvme.so.7.0 00:02:27.706 SYMLINK libspdk_bdev_nvme.so 00:02:28.272 CC module/event/subsystems/vmd/vmd.o 00:02:28.272 CC module/event/subsystems/iobuf/iobuf.o 00:02:28.272 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:28.272 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:28.272 CC module/event/subsystems/sock/sock.o 00:02:28.272 CC module/event/subsystems/scheduler/scheduler.o 00:02:28.272 CC module/event/subsystems/keyring/keyring.o 00:02:28.272 CC module/event/subsystems/fsdev/fsdev.o 00:02:28.272 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:28.272 LIB libspdk_event_keyring.a 00:02:28.272 LIB libspdk_event_sock.a 00:02:28.272 LIB libspdk_event_vhost_blk.a 00:02:28.272 LIB libspdk_event_fsdev.a 00:02:28.272 LIB libspdk_event_scheduler.a 00:02:28.272 LIB libspdk_event_vmd.a 00:02:28.272 LIB libspdk_event_iobuf.a 00:02:28.272 SO libspdk_event_keyring.so.1.0 00:02:28.272 SO libspdk_event_sock.so.5.0 00:02:28.272 SO libspdk_event_fsdev.so.1.0 00:02:28.272 SO libspdk_event_vhost_blk.so.3.0 00:02:28.272 SO libspdk_event_scheduler.so.4.0 00:02:28.272 SO libspdk_event_vmd.so.6.0 00:02:28.272 SO libspdk_event_iobuf.so.3.0 00:02:28.272 SYMLINK libspdk_event_sock.so 00:02:28.272 SYMLINK libspdk_event_keyring.so 00:02:28.272 SYMLINK libspdk_event_fsdev.so 00:02:28.272 SYMLINK libspdk_event_vhost_blk.so 00:02:28.272 SYMLINK libspdk_event_scheduler.so 00:02:28.272 SYMLINK libspdk_event_vmd.so 00:02:28.272 SYMLINK libspdk_event_iobuf.so 00:02:28.529 CC 
module/event/subsystems/accel/accel.o 00:02:28.787 LIB libspdk_event_accel.a 00:02:28.787 SO libspdk_event_accel.so.6.0 00:02:28.787 SYMLINK libspdk_event_accel.so 00:02:29.044 CC module/event/subsystems/bdev/bdev.o 00:02:29.044 LIB libspdk_event_bdev.a 00:02:29.044 SO libspdk_event_bdev.so.6.0 00:02:29.302 SYMLINK libspdk_event_bdev.so 00:02:29.302 CC module/event/subsystems/scsi/scsi.o 00:02:29.302 CC module/event/subsystems/nbd/nbd.o 00:02:29.302 CC module/event/subsystems/ublk/ublk.o 00:02:29.302 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:29.302 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:29.559 LIB libspdk_event_nbd.a 00:02:29.559 LIB libspdk_event_ublk.a 00:02:29.559 LIB libspdk_event_scsi.a 00:02:29.559 SO libspdk_event_nbd.so.6.0 00:02:29.559 SO libspdk_event_ublk.so.3.0 00:02:29.559 SO libspdk_event_scsi.so.6.0 00:02:29.559 SYMLINK libspdk_event_nbd.so 00:02:29.559 SYMLINK libspdk_event_ublk.so 00:02:29.559 SYMLINK libspdk_event_scsi.so 00:02:29.559 LIB libspdk_event_nvmf.a 00:02:29.559 SO libspdk_event_nvmf.so.6.0 00:02:29.818 SYMLINK libspdk_event_nvmf.so 00:02:29.818 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:29.818 CC module/event/subsystems/iscsi/iscsi.o 00:02:29.818 LIB libspdk_event_vhost_scsi.a 00:02:30.075 LIB libspdk_event_iscsi.a 00:02:30.075 SO libspdk_event_vhost_scsi.so.3.0 00:02:30.075 SO libspdk_event_iscsi.so.6.0 00:02:30.075 SYMLINK libspdk_event_vhost_scsi.so 00:02:30.075 SYMLINK libspdk_event_iscsi.so 00:02:30.075 SO libspdk.so.6.0 00:02:30.075 SYMLINK libspdk.so 00:02:30.335 CXX app/trace/trace.o 00:02:30.335 CC app/trace_record/trace_record.o 00:02:30.335 CC test/rpc_client/rpc_client_test.o 00:02:30.335 CC app/spdk_nvme_perf/perf.o 00:02:30.335 CC app/spdk_top/spdk_top.o 00:02:30.335 CC app/spdk_nvme_discover/discovery_aer.o 00:02:30.335 CC app/spdk_nvme_identify/identify.o 00:02:30.335 TEST_HEADER include/spdk/accel.h 00:02:30.335 CC app/spdk_lspci/spdk_lspci.o 00:02:30.335 TEST_HEADER 
include/spdk/accel_module.h 00:02:30.335 TEST_HEADER include/spdk/assert.h 00:02:30.335 TEST_HEADER include/spdk/barrier.h 00:02:30.335 TEST_HEADER include/spdk/base64.h 00:02:30.335 TEST_HEADER include/spdk/bdev.h 00:02:30.335 TEST_HEADER include/spdk/bdev_module.h 00:02:30.335 TEST_HEADER include/spdk/bdev_zone.h 00:02:30.335 TEST_HEADER include/spdk/bit_array.h 00:02:30.335 TEST_HEADER include/spdk/bit_pool.h 00:02:30.335 TEST_HEADER include/spdk/blob_bdev.h 00:02:30.335 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:30.335 TEST_HEADER include/spdk/blob.h 00:02:30.335 TEST_HEADER include/spdk/blobfs.h 00:02:30.335 TEST_HEADER include/spdk/conf.h 00:02:30.335 TEST_HEADER include/spdk/config.h 00:02:30.335 TEST_HEADER include/spdk/cpuset.h 00:02:30.335 TEST_HEADER include/spdk/crc16.h 00:02:30.335 TEST_HEADER include/spdk/crc32.h 00:02:30.335 TEST_HEADER include/spdk/crc64.h 00:02:30.335 TEST_HEADER include/spdk/dif.h 00:02:30.335 TEST_HEADER include/spdk/dma.h 00:02:30.335 TEST_HEADER include/spdk/endian.h 00:02:30.335 TEST_HEADER include/spdk/env_dpdk.h 00:02:30.335 TEST_HEADER include/spdk/env.h 00:02:30.335 TEST_HEADER include/spdk/fd_group.h 00:02:30.335 TEST_HEADER include/spdk/event.h 00:02:30.335 TEST_HEADER include/spdk/fd.h 00:02:30.335 TEST_HEADER include/spdk/fsdev.h 00:02:30.335 TEST_HEADER include/spdk/file.h 00:02:30.335 TEST_HEADER include/spdk/fsdev_module.h 00:02:30.335 TEST_HEADER include/spdk/ftl.h 00:02:30.335 TEST_HEADER include/spdk/gpt_spec.h 00:02:30.335 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:30.335 TEST_HEADER include/spdk/hexlify.h 00:02:30.335 TEST_HEADER include/spdk/histogram_data.h 00:02:30.335 TEST_HEADER include/spdk/idxd.h 00:02:30.335 TEST_HEADER include/spdk/idxd_spec.h 00:02:30.335 TEST_HEADER include/spdk/init.h 00:02:30.335 TEST_HEADER include/spdk/ioat.h 00:02:30.335 TEST_HEADER include/spdk/ioat_spec.h 00:02:30.335 TEST_HEADER include/spdk/iscsi_spec.h 00:02:30.335 TEST_HEADER include/spdk/json.h 00:02:30.335 
TEST_HEADER include/spdk/keyring.h 00:02:30.335 TEST_HEADER include/spdk/jsonrpc.h 00:02:30.335 TEST_HEADER include/spdk/keyring_module.h 00:02:30.335 TEST_HEADER include/spdk/likely.h 00:02:30.336 TEST_HEADER include/spdk/log.h 00:02:30.336 TEST_HEADER include/spdk/lvol.h 00:02:30.336 TEST_HEADER include/spdk/md5.h 00:02:30.336 TEST_HEADER include/spdk/mmio.h 00:02:30.336 TEST_HEADER include/spdk/memory.h 00:02:30.336 TEST_HEADER include/spdk/nbd.h 00:02:30.336 TEST_HEADER include/spdk/net.h 00:02:30.336 TEST_HEADER include/spdk/notify.h 00:02:30.336 TEST_HEADER include/spdk/nvme.h 00:02:30.336 TEST_HEADER include/spdk/nvme_intel.h 00:02:30.336 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:30.336 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:30.336 TEST_HEADER include/spdk/nvme_spec.h 00:02:30.336 TEST_HEADER include/spdk/nvme_zns.h 00:02:30.336 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:30.336 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:30.336 TEST_HEADER include/spdk/nvmf.h 00:02:30.336 TEST_HEADER include/spdk/nvmf_spec.h 00:02:30.336 TEST_HEADER include/spdk/nvmf_transport.h 00:02:30.336 TEST_HEADER include/spdk/opal.h 00:02:30.336 TEST_HEADER include/spdk/opal_spec.h 00:02:30.336 TEST_HEADER include/spdk/pci_ids.h 00:02:30.336 TEST_HEADER include/spdk/pipe.h 00:02:30.336 TEST_HEADER include/spdk/queue.h 00:02:30.336 TEST_HEADER include/spdk/rpc.h 00:02:30.336 TEST_HEADER include/spdk/reduce.h 00:02:30.336 TEST_HEADER include/spdk/scheduler.h 00:02:30.336 TEST_HEADER include/spdk/scsi.h 00:02:30.336 TEST_HEADER include/spdk/scsi_spec.h 00:02:30.336 TEST_HEADER include/spdk/sock.h 00:02:30.336 TEST_HEADER include/spdk/stdinc.h 00:02:30.336 TEST_HEADER include/spdk/string.h 00:02:30.336 TEST_HEADER include/spdk/thread.h 00:02:30.336 TEST_HEADER include/spdk/trace.h 00:02:30.336 TEST_HEADER include/spdk/trace_parser.h 00:02:30.336 TEST_HEADER include/spdk/tree.h 00:02:30.336 TEST_HEADER include/spdk/ublk.h 00:02:30.336 TEST_HEADER include/spdk/util.h 
00:02:30.336 TEST_HEADER include/spdk/uuid.h 00:02:30.336 TEST_HEADER include/spdk/version.h 00:02:30.336 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:30.336 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:30.336 TEST_HEADER include/spdk/vmd.h 00:02:30.336 TEST_HEADER include/spdk/vhost.h 00:02:30.336 TEST_HEADER include/spdk/xor.h 00:02:30.336 TEST_HEADER include/spdk/zipf.h 00:02:30.336 CXX test/cpp_headers/accel.o 00:02:30.336 CXX test/cpp_headers/accel_module.o 00:02:30.336 CXX test/cpp_headers/assert.o 00:02:30.336 CXX test/cpp_headers/barrier.o 00:02:30.336 CXX test/cpp_headers/base64.o 00:02:30.336 CXX test/cpp_headers/bdev.o 00:02:30.336 CXX test/cpp_headers/bdev_module.o 00:02:30.336 CXX test/cpp_headers/bdev_zone.o 00:02:30.336 CXX test/cpp_headers/bit_array.o 00:02:30.336 CXX test/cpp_headers/bit_pool.o 00:02:30.336 CXX test/cpp_headers/blob_bdev.o 00:02:30.336 CXX test/cpp_headers/blobfs_bdev.o 00:02:30.336 CXX test/cpp_headers/blobfs.o 00:02:30.336 CXX test/cpp_headers/blob.o 00:02:30.336 CXX test/cpp_headers/conf.o 00:02:30.336 CXX test/cpp_headers/config.o 00:02:30.336 CXX test/cpp_headers/cpuset.o 00:02:30.336 CC app/iscsi_tgt/iscsi_tgt.o 00:02:30.336 CXX test/cpp_headers/crc16.o 00:02:30.336 CC app/nvmf_tgt/nvmf_main.o 00:02:30.336 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:30.336 CC app/spdk_dd/spdk_dd.o 00:02:30.336 CXX test/cpp_headers/crc32.o 00:02:30.336 CC test/env/pci/pci_ut.o 00:02:30.336 CC examples/ioat/perf/perf.o 00:02:30.336 CC test/thread/poller_perf/poller_perf.o 00:02:30.336 CC test/env/vtophys/vtophys.o 00:02:30.336 CC examples/ioat/verify/verify.o 00:02:30.336 CC test/env/memory/memory_ut.o 00:02:30.336 CC test/app/jsoncat/jsoncat.o 00:02:30.336 CC examples/util/zipf/zipf.o 00:02:30.336 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:30.336 CC app/spdk_tgt/spdk_tgt.o 00:02:30.597 CC test/app/histogram_perf/histogram_perf.o 00:02:30.597 CC test/app/stub/stub.o 00:02:30.597 CC app/fio/nvme/fio_plugin.o 00:02:30.597 
CC test/dma/test_dma/test_dma.o 00:02:30.597 CC test/app/bdev_svc/bdev_svc.o 00:02:30.597 CC app/fio/bdev/fio_plugin.o 00:02:30.597 LINK spdk_lspci 00:02:30.597 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:30.597 CC test/env/mem_callbacks/mem_callbacks.o 00:02:30.860 LINK rpc_client_test 00:02:30.860 LINK vtophys 00:02:30.860 LINK poller_perf 00:02:30.860 LINK nvmf_tgt 00:02:30.860 LINK jsoncat 00:02:30.860 LINK spdk_nvme_discover 00:02:30.860 LINK histogram_perf 00:02:30.860 CXX test/cpp_headers/crc64.o 00:02:30.860 LINK zipf 00:02:30.860 CXX test/cpp_headers/dif.o 00:02:30.860 CXX test/cpp_headers/dma.o 00:02:30.860 CXX test/cpp_headers/endian.o 00:02:30.860 CXX test/cpp_headers/env_dpdk.o 00:02:30.860 LINK env_dpdk_post_init 00:02:30.860 CXX test/cpp_headers/env.o 00:02:30.860 CXX test/cpp_headers/event.o 00:02:30.860 LINK interrupt_tgt 00:02:30.860 CXX test/cpp_headers/fd_group.o 00:02:30.860 CXX test/cpp_headers/fd.o 00:02:30.860 LINK iscsi_tgt 00:02:30.860 CXX test/cpp_headers/file.o 00:02:30.860 CXX test/cpp_headers/fsdev.o 00:02:30.860 CXX test/cpp_headers/fsdev_module.o 00:02:30.860 CXX test/cpp_headers/ftl.o 00:02:30.860 LINK stub 00:02:30.860 CXX test/cpp_headers/fuse_dispatcher.o 00:02:30.860 CXX test/cpp_headers/gpt_spec.o 00:02:30.860 LINK spdk_trace_record 00:02:30.860 CXX test/cpp_headers/hexlify.o 00:02:30.860 LINK spdk_tgt 00:02:30.860 LINK verify 00:02:30.860 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:30.860 LINK bdev_svc 00:02:30.860 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:30.860 CXX test/cpp_headers/histogram_data.o 00:02:31.124 LINK ioat_perf 00:02:31.124 CXX test/cpp_headers/idxd.o 00:02:31.124 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:31.124 CXX test/cpp_headers/idxd_spec.o 00:02:31.124 CXX test/cpp_headers/init.o 00:02:31.124 CXX test/cpp_headers/ioat.o 00:02:31.124 CXX test/cpp_headers/ioat_spec.o 00:02:31.124 CXX test/cpp_headers/iscsi_spec.o 00:02:31.124 LINK spdk_trace 00:02:31.124 CXX test/cpp_headers/json.o 
00:02:31.393 CXX test/cpp_headers/jsonrpc.o 00:02:31.393 CXX test/cpp_headers/keyring.o 00:02:31.393 CXX test/cpp_headers/keyring_module.o 00:02:31.393 CXX test/cpp_headers/likely.o 00:02:31.393 CXX test/cpp_headers/log.o 00:02:31.393 CXX test/cpp_headers/lvol.o 00:02:31.393 CXX test/cpp_headers/md5.o 00:02:31.393 CXX test/cpp_headers/memory.o 00:02:31.393 CXX test/cpp_headers/mmio.o 00:02:31.393 CXX test/cpp_headers/nbd.o 00:02:31.393 LINK spdk_dd 00:02:31.393 CXX test/cpp_headers/net.o 00:02:31.393 CXX test/cpp_headers/notify.o 00:02:31.393 CXX test/cpp_headers/nvme.o 00:02:31.393 CXX test/cpp_headers/nvme_intel.o 00:02:31.393 CXX test/cpp_headers/nvme_ocssd.o 00:02:31.393 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:31.393 CXX test/cpp_headers/nvme_spec.o 00:02:31.393 CXX test/cpp_headers/nvme_zns.o 00:02:31.393 CXX test/cpp_headers/nvmf_cmd.o 00:02:31.393 LINK pci_ut 00:02:31.393 CC test/event/reactor/reactor.o 00:02:31.393 CC test/event/event_perf/event_perf.o 00:02:31.393 CC test/event/reactor_perf/reactor_perf.o 00:02:31.393 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:31.393 CXX test/cpp_headers/nvmf.o 00:02:31.683 CXX test/cpp_headers/nvmf_spec.o 00:02:31.683 CC test/event/app_repeat/app_repeat.o 00:02:31.683 CC examples/sock/hello_world/hello_sock.o 00:02:31.683 CXX test/cpp_headers/nvmf_transport.o 00:02:31.683 CXX test/cpp_headers/opal.o 00:02:31.683 CC examples/idxd/perf/perf.o 00:02:31.683 CC examples/vmd/lsvmd/lsvmd.o 00:02:31.683 CC test/event/scheduler/scheduler.o 00:02:31.683 CC examples/thread/thread/thread_ex.o 00:02:31.683 CC examples/vmd/led/led.o 00:02:31.683 CXX test/cpp_headers/opal_spec.o 00:02:31.683 CXX test/cpp_headers/pci_ids.o 00:02:31.683 CXX test/cpp_headers/pipe.o 00:02:31.683 LINK test_dma 00:02:31.683 CXX test/cpp_headers/queue.o 00:02:31.683 CXX test/cpp_headers/reduce.o 00:02:31.683 CXX test/cpp_headers/rpc.o 00:02:31.683 CXX test/cpp_headers/scheduler.o 00:02:31.683 CXX test/cpp_headers/scsi.o 00:02:31.683 CXX 
test/cpp_headers/scsi_spec.o 00:02:31.683 CXX test/cpp_headers/sock.o 00:02:31.683 LINK nvme_fuzz 00:02:31.683 CXX test/cpp_headers/stdinc.o 00:02:31.683 CXX test/cpp_headers/string.o 00:02:31.683 LINK spdk_bdev 00:02:31.683 CXX test/cpp_headers/thread.o 00:02:31.683 CXX test/cpp_headers/trace.o 00:02:31.683 LINK reactor 00:02:31.683 LINK reactor_perf 00:02:31.989 CXX test/cpp_headers/trace_parser.o 00:02:31.989 LINK event_perf 00:02:31.989 LINK spdk_nvme 00:02:31.989 CXX test/cpp_headers/tree.o 00:02:31.989 CXX test/cpp_headers/ublk.o 00:02:31.989 CXX test/cpp_headers/util.o 00:02:31.989 CC app/vhost/vhost.o 00:02:31.989 LINK app_repeat 00:02:31.989 CXX test/cpp_headers/uuid.o 00:02:31.989 CXX test/cpp_headers/version.o 00:02:31.989 CXX test/cpp_headers/vfio_user_pci.o 00:02:31.989 LINK mem_callbacks 00:02:31.989 CXX test/cpp_headers/vfio_user_spec.o 00:02:31.989 LINK lsvmd 00:02:31.989 CXX test/cpp_headers/vhost.o 00:02:31.989 CXX test/cpp_headers/vmd.o 00:02:31.989 CXX test/cpp_headers/xor.o 00:02:31.989 CXX test/cpp_headers/zipf.o 00:02:31.989 LINK led 00:02:32.254 LINK scheduler 00:02:32.254 LINK vhost_fuzz 00:02:32.254 LINK thread 00:02:32.254 LINK hello_sock 00:02:32.254 LINK vhost 00:02:32.254 CC test/nvme/fdp/fdp.o 00:02:32.254 CC test/nvme/reset/reset.o 00:02:32.254 CC test/nvme/connect_stress/connect_stress.o 00:02:32.254 CC test/nvme/overhead/overhead.o 00:02:32.254 CC test/nvme/reserve/reserve.o 00:02:32.254 CC test/nvme/sgl/sgl.o 00:02:32.254 CC test/nvme/aer/aer.o 00:02:32.254 CC test/nvme/cuse/cuse.o 00:02:32.254 CC test/nvme/e2edp/nvme_dp.o 00:02:32.254 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:32.254 CC test/nvme/startup/startup.o 00:02:32.254 CC test/nvme/err_injection/err_injection.o 00:02:32.254 CC test/nvme/boot_partition/boot_partition.o 00:02:32.254 CC test/nvme/compliance/nvme_compliance.o 00:02:32.254 CC test/nvme/fused_ordering/fused_ordering.o 00:02:32.254 CC test/nvme/simple_copy/simple_copy.o 00:02:32.254 LINK idxd_perf 
00:02:32.254 CC test/accel/dif/dif.o 00:02:32.513 LINK spdk_nvme_perf 00:02:32.513 CC test/blobfs/mkfs/mkfs.o 00:02:32.513 CC test/lvol/esnap/esnap.o 00:02:32.513 LINK spdk_top 00:02:32.513 LINK spdk_nvme_identify 00:02:32.513 LINK boot_partition 00:02:32.513 LINK connect_stress 00:02:32.513 LINK doorbell_aers 00:02:32.772 LINK fused_ordering 00:02:32.772 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:32.772 CC examples/nvme/arbitration/arbitration.o 00:02:32.772 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:32.772 CC examples/nvme/abort/abort.o 00:02:32.772 CC examples/nvme/hotplug/hotplug.o 00:02:32.772 LINK mkfs 00:02:32.772 CC examples/nvme/reconnect/reconnect.o 00:02:32.772 CC examples/nvme/hello_world/hello_world.o 00:02:32.772 CC examples/accel/perf/accel_perf.o 00:02:32.772 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:32.772 LINK startup 00:02:32.772 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:32.772 LINK err_injection 00:02:32.772 LINK nvme_dp 00:02:32.772 CC examples/blob/cli/blobcli.o 00:02:32.772 CC examples/blob/hello_world/hello_blob.o 00:02:32.772 LINK reserve 00:02:32.772 LINK aer 00:02:32.772 LINK memory_ut 00:02:32.772 LINK simple_copy 00:02:32.772 LINK reset 00:02:32.772 LINK fdp 00:02:32.772 LINK overhead 00:02:32.772 LINK sgl 00:02:32.772 LINK nvme_compliance 00:02:33.031 LINK cmb_copy 00:02:33.031 LINK pmr_persistence 00:02:33.031 LINK hotplug 00:02:33.031 LINK hello_world 00:02:33.309 LINK hello_fsdev 00:02:33.309 LINK abort 00:02:33.309 LINK hello_blob 00:02:33.309 LINK arbitration 00:02:33.309 LINK reconnect 00:02:33.309 LINK dif 00:02:33.567 LINK accel_perf 00:02:33.567 LINK nvme_manage 00:02:33.567 LINK blobcli 00:02:33.826 CC examples/bdev/hello_world/hello_bdev.o 00:02:33.826 CC test/bdev/bdevio/bdevio.o 00:02:33.826 CC examples/bdev/bdevperf/bdevperf.o 00:02:34.084 LINK iscsi_fuzz 00:02:34.084 LINK hello_bdev 00:02:34.342 LINK cuse 00:02:34.342 LINK bdevio 00:02:34.908 LINK bdevperf 00:02:35.166 CC 
examples/nvmf/nvmf/nvmf.o 00:02:35.425 LINK nvmf 00:02:39.608 LINK esnap 00:02:39.608 00:02:39.608 real 1m19.750s 00:02:39.608 user 13m6.793s 00:02:39.608 sys 2m33.886s 00:02:39.608 16:10:40 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:39.608 16:10:40 make -- common/autotest_common.sh@10 -- $ set +x 00:02:39.608 ************************************ 00:02:39.608 END TEST make 00:02:39.608 ************************************ 00:02:39.608 16:10:40 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:39.608 16:10:40 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:39.608 16:10:40 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:39.608 16:10:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:39.608 16:10:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:39.608 16:10:40 -- pm/common@44 -- $ pid=2928006 00:02:39.608 16:10:40 -- pm/common@50 -- $ kill -TERM 2928006 00:02:39.608 16:10:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:39.608 16:10:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:39.608 16:10:40 -- pm/common@44 -- $ pid=2928008 00:02:39.608 16:10:40 -- pm/common@50 -- $ kill -TERM 2928008 00:02:39.608 16:10:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:39.608 16:10:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:39.608 16:10:40 -- pm/common@44 -- $ pid=2928010 00:02:39.608 16:10:40 -- pm/common@50 -- $ kill -TERM 2928010 00:02:39.608 16:10:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:39.608 16:10:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:39.608 16:10:40 -- pm/common@44 -- $ pid=2928038 00:02:39.608 16:10:40 -- 
pm/common@50 -- $ sudo -E kill -TERM 2928038 00:02:39.866 16:10:40 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:02:39.866 16:10:40 -- common/autotest_common.sh@1681 -- # lcov --version 00:02:39.866 16:10:40 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:02:39.866 16:10:40 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:02:39.866 16:10:40 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:39.866 16:10:40 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:39.866 16:10:40 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:39.866 16:10:40 -- scripts/common.sh@336 -- # IFS=.-: 00:02:39.866 16:10:40 -- scripts/common.sh@336 -- # read -ra ver1 00:02:39.866 16:10:40 -- scripts/common.sh@337 -- # IFS=.-: 00:02:39.866 16:10:40 -- scripts/common.sh@337 -- # read -ra ver2 00:02:39.866 16:10:40 -- scripts/common.sh@338 -- # local 'op=<' 00:02:39.866 16:10:40 -- scripts/common.sh@340 -- # ver1_l=2 00:02:39.866 16:10:40 -- scripts/common.sh@341 -- # ver2_l=1 00:02:39.866 16:10:40 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:39.866 16:10:40 -- scripts/common.sh@344 -- # case "$op" in 00:02:39.866 16:10:40 -- scripts/common.sh@345 -- # : 1 00:02:39.866 16:10:40 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:39.866 16:10:40 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:39.866 16:10:40 -- scripts/common.sh@365 -- # decimal 1 00:02:39.866 16:10:40 -- scripts/common.sh@353 -- # local d=1 00:02:39.866 16:10:40 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:39.866 16:10:40 -- scripts/common.sh@355 -- # echo 1 00:02:39.866 16:10:40 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:39.866 16:10:40 -- scripts/common.sh@366 -- # decimal 2 00:02:39.866 16:10:40 -- scripts/common.sh@353 -- # local d=2 00:02:39.866 16:10:40 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:39.866 16:10:40 -- scripts/common.sh@355 -- # echo 2 00:02:39.866 16:10:40 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:39.866 16:10:40 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:39.866 16:10:40 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:39.866 16:10:40 -- scripts/common.sh@368 -- # return 0 00:02:39.866 16:10:40 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:39.866 16:10:40 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:02:39.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:39.866 --rc genhtml_branch_coverage=1 00:02:39.866 --rc genhtml_function_coverage=1 00:02:39.866 --rc genhtml_legend=1 00:02:39.866 --rc geninfo_all_blocks=1 00:02:39.866 --rc geninfo_unexecuted_blocks=1 00:02:39.866 00:02:39.866 ' 00:02:39.866 16:10:40 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:02:39.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:39.866 --rc genhtml_branch_coverage=1 00:02:39.866 --rc genhtml_function_coverage=1 00:02:39.866 --rc genhtml_legend=1 00:02:39.866 --rc geninfo_all_blocks=1 00:02:39.866 --rc geninfo_unexecuted_blocks=1 00:02:39.866 00:02:39.866 ' 00:02:39.866 16:10:40 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:02:39.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:39.866 --rc genhtml_branch_coverage=1 00:02:39.866 --rc 
genhtml_function_coverage=1 00:02:39.866 --rc genhtml_legend=1 00:02:39.866 --rc geninfo_all_blocks=1 00:02:39.866 --rc geninfo_unexecuted_blocks=1 00:02:39.866 00:02:39.866 ' 00:02:39.866 16:10:40 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:02:39.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:39.866 --rc genhtml_branch_coverage=1 00:02:39.866 --rc genhtml_function_coverage=1 00:02:39.866 --rc genhtml_legend=1 00:02:39.866 --rc geninfo_all_blocks=1 00:02:39.866 --rc geninfo_unexecuted_blocks=1 00:02:39.866 00:02:39.866 ' 00:02:39.866 16:10:40 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:39.866 16:10:40 -- nvmf/common.sh@7 -- # uname -s 00:02:39.866 16:10:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:39.866 16:10:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:39.866 16:10:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:39.866 16:10:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:39.866 16:10:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:39.866 16:10:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:39.866 16:10:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:39.867 16:10:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:39.867 16:10:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:39.867 16:10:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:39.867 16:10:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:02:39.867 16:10:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:02:39.867 16:10:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:39.867 16:10:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:39.867 16:10:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:39.867 16:10:40 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:39.867 16:10:40 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:39.867 16:10:40 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:39.867 16:10:40 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:39.867 16:10:40 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:39.867 16:10:40 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:39.867 16:10:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:39.867 16:10:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:39.867 16:10:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:39.867 16:10:40 -- paths/export.sh@5 -- # export PATH 00:02:39.867 16:10:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:39.867 16:10:40 -- nvmf/common.sh@51 -- # : 0 00:02:39.867 16:10:40 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:39.867 16:10:40 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:02:39.867 16:10:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:39.867 16:10:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:39.867 16:10:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:39.867 16:10:40 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:39.867 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:39.867 16:10:40 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:39.867 16:10:40 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:39.867 16:10:40 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:39.867 16:10:40 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:39.867 16:10:40 -- spdk/autotest.sh@32 -- # uname -s 00:02:39.867 16:10:40 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:39.867 16:10:40 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:39.867 16:10:40 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:39.867 16:10:40 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:39.867 16:10:40 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:39.867 16:10:40 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:39.867 16:10:40 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:39.867 16:10:40 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:39.867 16:10:40 -- spdk/autotest.sh@48 -- # udevadm_pid=2988664 00:02:39.867 16:10:40 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:39.867 16:10:40 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:39.867 16:10:40 -- pm/common@17 -- # local monitor 00:02:39.867 16:10:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:39.867 16:10:40 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:02:39.867 16:10:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:39.867 16:10:40 -- pm/common@21 -- # date +%s 00:02:39.867 16:10:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:39.867 16:10:40 -- pm/common@21 -- # date +%s 00:02:39.867 16:10:40 -- pm/common@25 -- # sleep 1 00:02:39.867 16:10:40 -- pm/common@21 -- # date +%s 00:02:39.867 16:10:40 -- pm/common@21 -- # date +%s 00:02:39.867 16:10:40 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727619040 00:02:39.867 16:10:40 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727619040 00:02:39.867 16:10:40 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727619040 00:02:39.867 16:10:40 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727619040 00:02:39.867 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727619040_collect-vmstat.pm.log 00:02:39.867 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727619040_collect-cpu-load.pm.log 00:02:39.867 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727619040_collect-cpu-temp.pm.log 00:02:39.867 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727619040_collect-bmc-pm.bmc.pm.log 00:02:40.803 
16:10:41 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:40.803 16:10:41 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:40.803 16:10:41 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:40.803 16:10:41 -- common/autotest_common.sh@10 -- # set +x 00:02:40.803 16:10:41 -- spdk/autotest.sh@59 -- # create_test_list 00:02:40.803 16:10:41 -- common/autotest_common.sh@748 -- # xtrace_disable 00:02:40.803 16:10:41 -- common/autotest_common.sh@10 -- # set +x 00:02:41.061 16:10:41 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:41.061 16:10:41 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:41.061 16:10:41 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:41.061 16:10:41 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:41.061 16:10:41 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:41.061 16:10:41 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:41.061 16:10:41 -- common/autotest_common.sh@1455 -- # uname 00:02:41.061 16:10:41 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:41.061 16:10:41 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:41.061 16:10:41 -- common/autotest_common.sh@1475 -- # uname 00:02:41.061 16:10:41 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:41.061 16:10:41 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:41.061 16:10:41 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:41.061 lcov: LCOV version 1.15 00:02:41.061 16:10:41 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:02.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:02.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:24.900 16:11:25 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:24.900 16:11:25 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:24.900 16:11:25 -- common/autotest_common.sh@10 -- # set +x 00:03:24.900 16:11:25 -- spdk/autotest.sh@78 -- # rm -f 00:03:24.900 16:11:25 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:25.833 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:03:25.833 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:25.833 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:03:25.833 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:25.833 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:25.833 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:25.833 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:25.833 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:25.833 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:25.833 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:25.833 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:25.833 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:25.833 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:25.833 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:25.833 
0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:25.833 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:03:25.833 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:26.090 16:11:26 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:26.090 16:11:26 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:26.090 16:11:26 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:26.090 16:11:26 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:26.090 16:11:26 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:26.090 16:11:26 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:26.090 16:11:26 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:26.090 16:11:26 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:26.090 16:11:26 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:26.090 16:11:26 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:26.090 16:11:26 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:26.090 16:11:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:26.090 16:11:26 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:26.090 16:11:26 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:26.090 16:11:26 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:26.090 No valid GPT data, bailing 00:03:26.090 16:11:26 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:26.090 16:11:26 -- scripts/common.sh@394 -- # pt= 00:03:26.090 16:11:26 -- scripts/common.sh@395 -- # return 1 00:03:26.090 16:11:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:26.090 1+0 records in 00:03:26.090 1+0 records out 00:03:26.090 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00236176 s, 444 MB/s 00:03:26.090 16:11:26 -- spdk/autotest.sh@105 -- # sync 00:03:26.090 16:11:26 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:26.091 16:11:26 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:26.091 16:11:26 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:27.993 16:11:28 -- spdk/autotest.sh@111 -- # uname -s 00:03:27.993 16:11:28 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:27.993 16:11:28 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:27.993 16:11:28 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:29.370 Hugepages 00:03:29.370 node hugesize free / total 00:03:29.370 node0 1048576kB 0 / 0 00:03:29.370 node0 2048kB 0 / 0 00:03:29.370 node1 1048576kB 0 / 0 00:03:29.370 node1 2048kB 0 / 0 00:03:29.370 00:03:29.370 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:29.370 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:29.370 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:29.370 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:29.370 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:29.370 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:29.370 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:29.370 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:29.370 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:29.370 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:29.370 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:29.370 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:29.370 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:29.370 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:29.370 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:29.370 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:29.370 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:29.370 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:29.370 16:11:29 -- spdk/autotest.sh@117 -- # uname -s 00:03:29.370 16:11:29 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:29.370 16:11:29 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:03:29.370 16:11:29 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:30.305 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:30.305 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:30.565 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:30.565 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:30.565 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:30.565 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:30.565 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:30.565 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:30.565 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:30.565 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:30.565 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:30.565 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:30.565 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:30.565 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:30.565 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:30.565 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:31.502 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:31.502 16:11:32 -- common/autotest_common.sh@1515 -- # sleep 1 00:03:32.883 16:11:33 -- common/autotest_common.sh@1516 -- # bdfs=() 00:03:32.883 16:11:33 -- common/autotest_common.sh@1516 -- # local bdfs 00:03:32.883 16:11:33 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:03:32.883 16:11:33 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:03:32.883 16:11:33 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:32.883 16:11:33 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:32.883 16:11:33 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:32.883 16:11:33 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:32.883 16:11:33 -- common/autotest_common.sh@1497 -- # jq -r 
'.config[].params.traddr' 00:03:32.883 16:11:33 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:32.883 16:11:33 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:88:00.0 00:03:32.883 16:11:33 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:33.818 Waiting for block devices as requested 00:03:33.818 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:03:33.818 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:34.076 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:34.076 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:34.076 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:34.335 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:34.335 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:34.335 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:34.335 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:34.593 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:34.593 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:34.593 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:34.593 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:34.851 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:34.851 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:34.851 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:34.852 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:35.110 16:11:35 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:35.110 16:11:35 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:03:35.110 16:11:35 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:03:35.110 16:11:35 -- common/autotest_common.sh@1485 -- # grep 0000:88:00.0/nvme/nvme 00:03:35.110 16:11:35 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:35.111 16:11:35 -- common/autotest_common.sh@1486 -- # [[ -z 
/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:03:35.111 16:11:35 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:35.111 16:11:35 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:03:35.111 16:11:35 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:03:35.111 16:11:35 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:03:35.111 16:11:35 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:03:35.111 16:11:35 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:35.111 16:11:35 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:35.111 16:11:35 -- common/autotest_common.sh@1529 -- # oacs=' 0xf' 00:03:35.111 16:11:35 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:35.111 16:11:35 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:35.111 16:11:35 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:03:35.111 16:11:35 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:35.111 16:11:35 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:35.111 16:11:35 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:35.111 16:11:35 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:35.111 16:11:35 -- common/autotest_common.sh@1541 -- # continue 00:03:35.111 16:11:35 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:35.111 16:11:35 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:35.111 16:11:35 -- common/autotest_common.sh@10 -- # set +x 00:03:35.111 16:11:35 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:35.111 16:11:35 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:35.111 16:11:35 -- common/autotest_common.sh@10 -- # set +x 00:03:35.111 16:11:35 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:36.487 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:36.487 0000:00:04.6 (8086 0e26): 
ioatdma -> vfio-pci 00:03:36.487 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:36.487 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:36.487 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:36.487 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:36.487 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:36.487 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:36.487 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:36.487 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:36.487 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:36.487 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:36.487 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:36.487 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:36.487 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:36.487 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:37.424 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:37.424 16:11:37 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:37.424 16:11:37 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:37.424 16:11:37 -- common/autotest_common.sh@10 -- # set +x 00:03:37.424 16:11:37 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:37.424 16:11:37 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:03:37.424 16:11:37 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:03:37.424 16:11:37 -- common/autotest_common.sh@1561 -- # bdfs=() 00:03:37.424 16:11:37 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:03:37.424 16:11:37 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:03:37.424 16:11:37 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:03:37.424 16:11:37 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:03:37.424 16:11:37 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:37.424 16:11:37 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:37.424 16:11:37 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:03:37.424 16:11:37 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:37.424 16:11:37 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:37.424 16:11:37 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:37.424 16:11:37 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:88:00.0 00:03:37.424 16:11:37 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:37.424 16:11:37 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:03:37.424 16:11:37 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:03:37.424 16:11:37 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:37.424 16:11:37 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:03:37.424 16:11:37 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:03:37.424 16:11:37 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:88:00.0 00:03:37.424 16:11:37 -- common/autotest_common.sh@1577 -- # [[ -z 0000:88:00.0 ]] 00:03:37.424 16:11:37 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=2999855 00:03:37.424 16:11:37 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:37.424 16:11:37 -- common/autotest_common.sh@1583 -- # waitforlisten 2999855 00:03:37.424 16:11:37 -- common/autotest_common.sh@831 -- # '[' -z 2999855 ']' 00:03:37.424 16:11:37 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:37.424 16:11:37 -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:37.424 16:11:37 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:37.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:03:37.425 16:11:37 -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:37.425 16:11:37 -- common/autotest_common.sh@10 -- # set +x 00:03:37.683 [2024-09-29 16:11:38.077148] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:03:37.683 [2024-09-29 16:11:38.077291] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2999855 ] 00:03:37.683 [2024-09-29 16:11:38.210082] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:37.942 [2024-09-29 16:11:38.466584] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:03:38.875 16:11:39 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:38.875 16:11:39 -- common/autotest_common.sh@864 -- # return 0 00:03:38.875 16:11:39 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:03:38.875 16:11:39 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:03:38.875 16:11:39 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:03:42.242 nvme0n1 00:03:42.242 16:11:42 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:42.500 [2024-09-29 16:11:42.812298] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:42.500 [2024-09-29 16:11:42.812380] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:42.500 request: 00:03:42.500 { 00:03:42.500 "nvme_ctrlr_name": "nvme0", 00:03:42.500 "password": "test", 00:03:42.500 "method": "bdev_nvme_opal_revert", 00:03:42.500 "req_id": 1 00:03:42.500 } 00:03:42.500 Got JSON-RPC error response 00:03:42.500 response: 00:03:42.500 { 00:03:42.500 
"code": -32603, 00:03:42.500 "message": "Internal error" 00:03:42.500 } 00:03:42.500 16:11:42 -- common/autotest_common.sh@1589 -- # true 00:03:42.500 16:11:42 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:03:42.500 16:11:42 -- common/autotest_common.sh@1593 -- # killprocess 2999855 00:03:42.500 16:11:42 -- common/autotest_common.sh@950 -- # '[' -z 2999855 ']' 00:03:42.500 16:11:42 -- common/autotest_common.sh@954 -- # kill -0 2999855 00:03:42.500 16:11:42 -- common/autotest_common.sh@955 -- # uname 00:03:42.500 16:11:42 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:42.500 16:11:42 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2999855 00:03:42.500 16:11:42 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:42.500 16:11:42 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:42.500 16:11:42 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2999855' 00:03:42.500 killing process with pid 2999855 00:03:42.500 16:11:42 -- common/autotest_common.sh@969 -- # kill 2999855 00:03:42.500 16:11:42 -- common/autotest_common.sh@974 -- # wait 2999855 00:03:46.683 16:11:46 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:46.683 16:11:46 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:46.683 16:11:46 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:46.683 16:11:46 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:46.683 16:11:46 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:46.683 16:11:46 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:46.683 16:11:46 -- common/autotest_common.sh@10 -- # set +x 00:03:46.683 16:11:46 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:46.683 16:11:46 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:46.683 16:11:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:46.683 16:11:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:46.683 16:11:46 -- 
common/autotest_common.sh@10 -- # set +x 00:03:46.683 ************************************ 00:03:46.683 START TEST env 00:03:46.683 ************************************ 00:03:46.683 16:11:46 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:46.683 * Looking for test storage... 00:03:46.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:46.683 16:11:46 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:46.683 16:11:46 env -- common/autotest_common.sh@1681 -- # lcov --version 00:03:46.683 16:11:46 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:46.683 16:11:46 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:46.683 16:11:46 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:46.683 16:11:46 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:46.683 16:11:46 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:46.683 16:11:46 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:46.683 16:11:46 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:46.683 16:11:46 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:46.683 16:11:46 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:46.683 16:11:46 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:46.683 16:11:46 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:46.683 16:11:46 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:46.683 16:11:46 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:46.683 16:11:46 env -- scripts/common.sh@344 -- # case "$op" in 00:03:46.683 16:11:46 env -- scripts/common.sh@345 -- # : 1 00:03:46.683 16:11:46 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:46.683 16:11:46 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:46.683 16:11:46 env -- scripts/common.sh@365 -- # decimal 1 00:03:46.683 16:11:46 env -- scripts/common.sh@353 -- # local d=1 00:03:46.683 16:11:46 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:46.683 16:11:46 env -- scripts/common.sh@355 -- # echo 1 00:03:46.683 16:11:46 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:46.683 16:11:46 env -- scripts/common.sh@366 -- # decimal 2 00:03:46.683 16:11:46 env -- scripts/common.sh@353 -- # local d=2 00:03:46.683 16:11:46 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:46.683 16:11:46 env -- scripts/common.sh@355 -- # echo 2 00:03:46.683 16:11:46 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:46.683 16:11:46 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:46.683 16:11:46 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:46.683 16:11:46 env -- scripts/common.sh@368 -- # return 0 00:03:46.683 16:11:46 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:46.683 16:11:46 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:46.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.683 --rc genhtml_branch_coverage=1 00:03:46.683 --rc genhtml_function_coverage=1 00:03:46.683 --rc genhtml_legend=1 00:03:46.683 --rc geninfo_all_blocks=1 00:03:46.683 --rc geninfo_unexecuted_blocks=1 00:03:46.683 00:03:46.683 ' 00:03:46.683 16:11:46 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:46.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.683 --rc genhtml_branch_coverage=1 00:03:46.683 --rc genhtml_function_coverage=1 00:03:46.683 --rc genhtml_legend=1 00:03:46.683 --rc geninfo_all_blocks=1 00:03:46.683 --rc geninfo_unexecuted_blocks=1 00:03:46.683 00:03:46.683 ' 00:03:46.683 16:11:46 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:46.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:46.683 --rc genhtml_branch_coverage=1 00:03:46.683 --rc genhtml_function_coverage=1 00:03:46.683 --rc genhtml_legend=1 00:03:46.683 --rc geninfo_all_blocks=1 00:03:46.683 --rc geninfo_unexecuted_blocks=1 00:03:46.683 00:03:46.683 ' 00:03:46.683 16:11:46 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:46.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.683 --rc genhtml_branch_coverage=1 00:03:46.683 --rc genhtml_function_coverage=1 00:03:46.683 --rc genhtml_legend=1 00:03:46.683 --rc geninfo_all_blocks=1 00:03:46.683 --rc geninfo_unexecuted_blocks=1 00:03:46.683 00:03:46.683 ' 00:03:46.683 16:11:46 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:46.683 16:11:46 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:46.683 16:11:46 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:46.683 16:11:46 env -- common/autotest_common.sh@10 -- # set +x 00:03:46.683 ************************************ 00:03:46.683 START TEST env_memory 00:03:46.683 ************************************ 00:03:46.683 16:11:46 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:46.683 00:03:46.683 00:03:46.683 CUnit - A unit testing framework for C - Version 2.1-3 00:03:46.683 http://cunit.sourceforge.net/ 00:03:46.683 00:03:46.683 00:03:46.683 Suite: memory 00:03:46.683 Test: alloc and free memory map ...[2024-09-29 16:11:46.986799] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:46.683 passed 00:03:46.683 Test: mem map translation ...[2024-09-29 16:11:47.026449] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:46.683 [2024-09-29 
16:11:47.026489] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:46.683 [2024-09-29 16:11:47.026570] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:46.683 [2024-09-29 16:11:47.026597] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:46.683 passed 00:03:46.683 Test: mem map registration ...[2024-09-29 16:11:47.090320] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:46.683 [2024-09-29 16:11:47.090360] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:46.683 passed 00:03:46.683 Test: mem map adjacent registrations ...passed 00:03:46.683 00:03:46.683 Run Summary: Type Total Ran Passed Failed Inactive 00:03:46.683 suites 1 1 n/a 0 0 00:03:46.683 tests 4 4 4 0 0 00:03:46.683 asserts 152 152 152 0 n/a 00:03:46.683 00:03:46.683 Elapsed time = 0.226 seconds 00:03:46.683 00:03:46.683 real 0m0.247s 00:03:46.683 user 0m0.234s 00:03:46.683 sys 0m0.011s 00:03:46.683 16:11:47 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:46.683 16:11:47 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:46.683 ************************************ 00:03:46.683 END TEST env_memory 00:03:46.683 ************************************ 00:03:46.683 16:11:47 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:46.683 16:11:47 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 
']' 00:03:46.683 16:11:47 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:46.683 16:11:47 env -- common/autotest_common.sh@10 -- # set +x 00:03:46.683 ************************************ 00:03:46.683 START TEST env_vtophys 00:03:46.683 ************************************ 00:03:46.683 16:11:47 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:46.940 EAL: lib.eal log level changed from notice to debug 00:03:46.940 EAL: Detected lcore 0 as core 0 on socket 0 00:03:46.940 EAL: Detected lcore 1 as core 1 on socket 0 00:03:46.940 EAL: Detected lcore 2 as core 2 on socket 0 00:03:46.940 EAL: Detected lcore 3 as core 3 on socket 0 00:03:46.940 EAL: Detected lcore 4 as core 4 on socket 0 00:03:46.940 EAL: Detected lcore 5 as core 5 on socket 0 00:03:46.940 EAL: Detected lcore 6 as core 8 on socket 0 00:03:46.940 EAL: Detected lcore 7 as core 9 on socket 0 00:03:46.940 EAL: Detected lcore 8 as core 10 on socket 0 00:03:46.940 EAL: Detected lcore 9 as core 11 on socket 0 00:03:46.941 EAL: Detected lcore 10 as core 12 on socket 0 00:03:46.941 EAL: Detected lcore 11 as core 13 on socket 0 00:03:46.941 EAL: Detected lcore 12 as core 0 on socket 1 00:03:46.941 EAL: Detected lcore 13 as core 1 on socket 1 00:03:46.941 EAL: Detected lcore 14 as core 2 on socket 1 00:03:46.941 EAL: Detected lcore 15 as core 3 on socket 1 00:03:46.941 EAL: Detected lcore 16 as core 4 on socket 1 00:03:46.941 EAL: Detected lcore 17 as core 5 on socket 1 00:03:46.941 EAL: Detected lcore 18 as core 8 on socket 1 00:03:46.941 EAL: Detected lcore 19 as core 9 on socket 1 00:03:46.941 EAL: Detected lcore 20 as core 10 on socket 1 00:03:46.941 EAL: Detected lcore 21 as core 11 on socket 1 00:03:46.941 EAL: Detected lcore 22 as core 12 on socket 1 00:03:46.941 EAL: Detected lcore 23 as core 13 on socket 1 00:03:46.941 EAL: Detected lcore 24 as core 0 on socket 0 00:03:46.941 EAL: Detected lcore 25 as core 
1 on socket 0 00:03:46.941 EAL: Detected lcore 26 as core 2 on socket 0 00:03:46.941 EAL: Detected lcore 27 as core 3 on socket 0 00:03:46.941 EAL: Detected lcore 28 as core 4 on socket 0 00:03:46.941 EAL: Detected lcore 29 as core 5 on socket 0 00:03:46.941 EAL: Detected lcore 30 as core 8 on socket 0 00:03:46.941 EAL: Detected lcore 31 as core 9 on socket 0 00:03:46.941 EAL: Detected lcore 32 as core 10 on socket 0 00:03:46.941 EAL: Detected lcore 33 as core 11 on socket 0 00:03:46.941 EAL: Detected lcore 34 as core 12 on socket 0 00:03:46.941 EAL: Detected lcore 35 as core 13 on socket 0 00:03:46.941 EAL: Detected lcore 36 as core 0 on socket 1 00:03:46.941 EAL: Detected lcore 37 as core 1 on socket 1 00:03:46.941 EAL: Detected lcore 38 as core 2 on socket 1 00:03:46.941 EAL: Detected lcore 39 as core 3 on socket 1 00:03:46.941 EAL: Detected lcore 40 as core 4 on socket 1 00:03:46.941 EAL: Detected lcore 41 as core 5 on socket 1 00:03:46.941 EAL: Detected lcore 42 as core 8 on socket 1 00:03:46.941 EAL: Detected lcore 43 as core 9 on socket 1 00:03:46.941 EAL: Detected lcore 44 as core 10 on socket 1 00:03:46.941 EAL: Detected lcore 45 as core 11 on socket 1 00:03:46.941 EAL: Detected lcore 46 as core 12 on socket 1 00:03:46.941 EAL: Detected lcore 47 as core 13 on socket 1 00:03:46.941 EAL: Maximum logical cores by configuration: 128 00:03:46.941 EAL: Detected CPU lcores: 48 00:03:46.941 EAL: Detected NUMA nodes: 2 00:03:46.941 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:46.941 EAL: Detected shared linkage of DPDK 00:03:46.941 EAL: No shared files mode enabled, IPC will be disabled 00:03:46.941 EAL: Bus pci wants IOVA as 'DC' 00:03:46.941 EAL: Buses did not request a specific IOVA mode. 00:03:46.941 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:46.941 EAL: Selected IOVA mode 'VA' 00:03:46.941 EAL: Probing VFIO support... 
00:03:46.941 EAL: IOMMU type 1 (Type 1) is supported 00:03:46.941 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:46.941 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:46.941 EAL: VFIO support initialized 00:03:46.941 EAL: Ask a virtual area of 0x2e000 bytes 00:03:46.941 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:46.941 EAL: Setting up physically contiguous memory... 00:03:46.941 EAL: Setting maximum number of open files to 524288 00:03:46.941 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:46.941 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:46.941 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:46.941 EAL: Ask a virtual area of 0x61000 bytes 00:03:46.941 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:46.941 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:46.941 EAL: Ask a virtual area of 0x400000000 bytes 00:03:46.941 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:46.941 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:46.941 EAL: Ask a virtual area of 0x61000 bytes 00:03:46.941 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:46.941 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:46.941 EAL: Ask a virtual area of 0x400000000 bytes 00:03:46.941 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:46.941 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:46.941 EAL: Ask a virtual area of 0x61000 bytes 00:03:46.941 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:46.941 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:46.941 EAL: Ask a virtual area of 0x400000000 bytes 00:03:46.941 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:46.941 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:46.941 EAL: Ask a virtual area of 0x61000 bytes 00:03:46.941 EAL: 
Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:46.941 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:46.941 EAL: Ask a virtual area of 0x400000000 bytes 00:03:46.941 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:46.941 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:46.941 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:46.941 EAL: Ask a virtual area of 0x61000 bytes 00:03:46.941 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:46.941 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:46.941 EAL: Ask a virtual area of 0x400000000 bytes 00:03:46.941 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:46.941 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:46.941 EAL: Ask a virtual area of 0x61000 bytes 00:03:46.941 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:46.941 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:46.941 EAL: Ask a virtual area of 0x400000000 bytes 00:03:46.941 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:46.941 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:46.941 EAL: Ask a virtual area of 0x61000 bytes 00:03:46.941 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:46.941 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:46.941 EAL: Ask a virtual area of 0x400000000 bytes 00:03:46.941 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:46.941 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:46.941 EAL: Ask a virtual area of 0x61000 bytes 00:03:46.941 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:46.941 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:46.941 EAL: Ask a virtual area of 0x400000000 bytes 00:03:46.941 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 
00:03:46.941 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:46.941 EAL: Hugepages will be freed exactly as allocated. 00:03:46.941 EAL: No shared files mode enabled, IPC is disabled 00:03:46.941 EAL: No shared files mode enabled, IPC is disabled 00:03:46.941 EAL: TSC frequency is ~2700000 KHz 00:03:46.941 EAL: Main lcore 0 is ready (tid=7fbd3e4a9a40;cpuset=[0]) 00:03:46.941 EAL: Trying to obtain current memory policy. 00:03:46.941 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:46.941 EAL: Restoring previous memory policy: 0 00:03:46.941 EAL: request: mp_malloc_sync 00:03:46.941 EAL: No shared files mode enabled, IPC is disabled 00:03:46.941 EAL: Heap on socket 0 was expanded by 2MB 00:03:46.941 EAL: No shared files mode enabled, IPC is disabled 00:03:46.941 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:46.941 EAL: Mem event callback 'spdk:(nil)' registered 00:03:46.941 00:03:46.941 00:03:46.941 CUnit - A unit testing framework for C - Version 2.1-3 00:03:46.941 http://cunit.sourceforge.net/ 00:03:46.941 00:03:46.941 00:03:46.941 Suite: components_suite 00:03:47.506 Test: vtophys_malloc_test ...passed 00:03:47.506 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:47.506 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:47.506 EAL: Restoring previous memory policy: 4 00:03:47.506 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.506 EAL: request: mp_malloc_sync 00:03:47.506 EAL: No shared files mode enabled, IPC is disabled 00:03:47.506 EAL: Heap on socket 0 was expanded by 4MB 00:03:47.506 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.506 EAL: request: mp_malloc_sync 00:03:47.506 EAL: No shared files mode enabled, IPC is disabled 00:03:47.506 EAL: Heap on socket 0 was shrunk by 4MB 00:03:47.506 EAL: Trying to obtain current memory policy. 
00:03:47.506 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:47.506 EAL: Restoring previous memory policy: 4 00:03:47.506 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.506 EAL: request: mp_malloc_sync 00:03:47.506 EAL: No shared files mode enabled, IPC is disabled 00:03:47.506 EAL: Heap on socket 0 was expanded by 6MB 00:03:47.506 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.506 EAL: request: mp_malloc_sync 00:03:47.506 EAL: No shared files mode enabled, IPC is disabled 00:03:47.506 EAL: Heap on socket 0 was shrunk by 6MB 00:03:47.506 EAL: Trying to obtain current memory policy. 00:03:47.506 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:47.506 EAL: Restoring previous memory policy: 4 00:03:47.506 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.506 EAL: request: mp_malloc_sync 00:03:47.506 EAL: No shared files mode enabled, IPC is disabled 00:03:47.506 EAL: Heap on socket 0 was expanded by 10MB 00:03:47.506 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.506 EAL: request: mp_malloc_sync 00:03:47.506 EAL: No shared files mode enabled, IPC is disabled 00:03:47.506 EAL: Heap on socket 0 was shrunk by 10MB 00:03:47.506 EAL: Trying to obtain current memory policy. 00:03:47.506 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:47.506 EAL: Restoring previous memory policy: 4 00:03:47.506 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.506 EAL: request: mp_malloc_sync 00:03:47.506 EAL: No shared files mode enabled, IPC is disabled 00:03:47.506 EAL: Heap on socket 0 was expanded by 18MB 00:03:47.506 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.506 EAL: request: mp_malloc_sync 00:03:47.506 EAL: No shared files mode enabled, IPC is disabled 00:03:47.506 EAL: Heap on socket 0 was shrunk by 18MB 00:03:47.506 EAL: Trying to obtain current memory policy. 
00:03:47.506 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:47.506 EAL: Restoring previous memory policy: 4
00:03:47.506 EAL: Calling mem event callback 'spdk:(nil)'
00:03:47.506 EAL: request: mp_malloc_sync
00:03:47.506 EAL: No shared files mode enabled, IPC is disabled
00:03:47.506 EAL: Heap on socket 0 was expanded by 34MB
00:03:47.506 EAL: Calling mem event callback 'spdk:(nil)'
00:03:47.506 EAL: request: mp_malloc_sync
00:03:47.506 EAL: No shared files mode enabled, IPC is disabled
00:03:47.506 EAL: Heap on socket 0 was shrunk by 34MB
00:03:47.506 EAL: Trying to obtain current memory policy.
00:03:47.506 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:47.506 EAL: Restoring previous memory policy: 4
00:03:47.506 EAL: Calling mem event callback 'spdk:(nil)'
00:03:47.506 EAL: request: mp_malloc_sync
00:03:47.506 EAL: No shared files mode enabled, IPC is disabled
00:03:47.506 EAL: Heap on socket 0 was expanded by 66MB
00:03:47.764 EAL: Calling mem event callback 'spdk:(nil)'
00:03:47.764 EAL: request: mp_malloc_sync
00:03:47.764 EAL: No shared files mode enabled, IPC is disabled
00:03:47.764 EAL: Heap on socket 0 was shrunk by 66MB
00:03:47.764 EAL: Trying to obtain current memory policy.
00:03:47.764 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:48.021 EAL: Restoring previous memory policy: 4
00:03:48.021 EAL: Calling mem event callback 'spdk:(nil)'
00:03:48.021 EAL: request: mp_malloc_sync
00:03:48.021 EAL: No shared files mode enabled, IPC is disabled
00:03:48.021 EAL: Heap on socket 0 was expanded by 130MB
00:03:48.021 EAL: Calling mem event callback 'spdk:(nil)'
00:03:48.279 EAL: request: mp_malloc_sync
00:03:48.279 EAL: No shared files mode enabled, IPC is disabled
00:03:48.279 EAL: Heap on socket 0 was shrunk by 130MB
00:03:48.279 EAL: Trying to obtain current memory policy.
00:03:48.279 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:48.536 EAL: Restoring previous memory policy: 4
00:03:48.536 EAL: Calling mem event callback 'spdk:(nil)'
00:03:48.536 EAL: request: mp_malloc_sync
00:03:48.536 EAL: No shared files mode enabled, IPC is disabled
00:03:48.536 EAL: Heap on socket 0 was expanded by 258MB
00:03:48.793 EAL: Calling mem event callback 'spdk:(nil)'
00:03:49.050 EAL: request: mp_malloc_sync
00:03:49.050 EAL: No shared files mode enabled, IPC is disabled
00:03:49.050 EAL: Heap on socket 0 was shrunk by 258MB
00:03:49.308 EAL: Trying to obtain current memory policy.
00:03:49.308 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:49.565 EAL: Restoring previous memory policy: 4
00:03:49.565 EAL: Calling mem event callback 'spdk:(nil)'
00:03:49.565 EAL: request: mp_malloc_sync
00:03:49.565 EAL: No shared files mode enabled, IPC is disabled
00:03:49.565 EAL: Heap on socket 0 was expanded by 514MB
00:03:50.500 EAL: Calling mem event callback 'spdk:(nil)'
00:03:50.500 EAL: request: mp_malloc_sync
00:03:50.500 EAL: No shared files mode enabled, IPC is disabled
00:03:50.500 EAL: Heap on socket 0 was shrunk by 514MB
00:03:51.433 EAL: Trying to obtain current memory policy.
00:03:51.433 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:51.692 EAL: Restoring previous memory policy: 4
00:03:51.692 EAL: Calling mem event callback 'spdk:(nil)'
00:03:51.692 EAL: request: mp_malloc_sync
00:03:51.692 EAL: No shared files mode enabled, IPC is disabled
00:03:51.692 EAL: Heap on socket 0 was expanded by 1026MB
00:03:53.589 EAL: Calling mem event callback 'spdk:(nil)'
00:03:53.848 EAL: request: mp_malloc_sync
00:03:53.848 EAL: No shared files mode enabled, IPC is disabled
00:03:53.848 EAL: Heap on socket 0 was shrunk by 1026MB
00:03:55.221 passed
00:03:55.221
00:03:55.221 Run Summary: Type Total Ran Passed Failed Inactive
00:03:55.221 suites 1 1 n/a 0 0
00:03:55.221 tests 2 2 2 0 0
00:03:55.221 asserts 497 497 497 0 n/a
00:03:55.221
00:03:55.221 Elapsed time = 8.295 seconds
00:03:55.221 EAL: Calling mem event callback 'spdk:(nil)'
00:03:55.221 EAL: request: mp_malloc_sync
00:03:55.221 EAL: No shared files mode enabled, IPC is disabled
00:03:55.221 EAL: Heap on socket 0 was shrunk by 2MB
00:03:55.221 EAL: No shared files mode enabled, IPC is disabled
00:03:55.221 EAL: No shared files mode enabled, IPC is disabled
00:03:55.221 EAL: No shared files mode enabled, IPC is disabled
00:03:55.480
00:03:55.480 real 0m8.566s
00:03:55.480 user 0m7.412s
00:03:55.480 sys 0m1.094s
00:03:55.480 16:11:55 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:55.480 16:11:55 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:03:55.480 ************************************
00:03:55.480 END TEST env_vtophys
00:03:55.480 ************************************
00:03:55.480 16:11:55 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:03:55.480 16:11:55 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:55.480 16:11:55 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:55.480 16:11:55 env -- common/autotest_common.sh@10 -- # set +x
00:03:55.480 ************************************
00:03:55.480 START TEST env_pci
************************************
00:03:55.480 16:11:55 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:03:55.480
00:03:55.480
00:03:55.480 CUnit - A unit testing framework for C - Version 2.1-3
00:03:55.480 http://cunit.sourceforge.net/
00:03:55.480
00:03:55.480
00:03:55.480 Suite: pci
00:03:55.480 Test: pci_hook ...[2024-09-29 16:11:55.874341] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3002054 has claimed it
00:03:55.480 EAL: Cannot find device (10000:00:01.0)
00:03:55.480 EAL: Failed to attach device on primary process
00:03:55.480 passed
00:03:55.480
00:03:55.480 Run Summary: Type Total Ran Passed Failed Inactive
00:03:55.480 suites 1 1 n/a 0 0
00:03:55.480 tests 1 1 1 0 0
00:03:55.480 asserts 25 25 25 0 n/a
00:03:55.480
00:03:55.480 Elapsed time = 0.051 seconds
00:03:55.480
00:03:55.480 real 0m0.105s
00:03:55.480 user 0m0.042s
00:03:55.480 sys 0m0.062s
00:03:55.480 16:11:55 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:55.480 16:11:55 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:03:55.480 ************************************
00:03:55.480 END TEST env_pci
00:03:55.480 ************************************
00:03:55.480 16:11:55 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:03:55.480 16:11:55 env -- env/env.sh@15 -- # uname
00:03:55.480 16:11:55 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:03:55.480 16:11:55 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:03:55.480 16:11:55 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
16:11:55 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:03:55.480 16:11:55 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:55.480 16:11:55 env -- common/autotest_common.sh@10 -- # set +x
00:03:55.480 ************************************
00:03:55.480 START TEST env_dpdk_post_init
00:03:55.480 ************************************
00:03:55.480 16:11:55 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:03:55.738 EAL: Detected CPU lcores: 48
00:03:55.738 EAL: Detected NUMA nodes: 2
00:03:55.738 EAL: Detected shared linkage of DPDK
00:03:55.738 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:03:55.738 EAL: Selected IOVA mode 'VA'
00:03:55.738 EAL: VFIO support initialized
00:03:55.738 TELEMETRY: No legacy callbacks, legacy socket not created
00:03:55.738 EAL: Using IOMMU type 1 (Type 1)
00:03:55.738 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0)
00:03:55.738 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0)
00:03:55.738 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0)
00:03:55.738 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0)
00:03:55.738 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0)
00:03:55.997 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0)
00:03:55.997 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0)
00:03:55.997 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0)
00:03:55.997 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1)
00:03:55.997 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1)
00:03:55.997 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1)
00:03:55.997 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1)
00:03:55.997 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1)
00:03:55.997 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1)
00:03:55.997 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1)
00:03:55.997 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1)
00:03:56.932 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1)
00:04:00.213 EAL: Releasing PCI mapped resource for 0000:88:00.0
00:04:00.213 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000
00:04:00.213 Starting DPDK initialization...
00:04:00.213 Starting SPDK post initialization...
00:04:00.213 SPDK NVMe probe
00:04:00.213 Attaching to 0000:88:00.0
00:04:00.213 Attached to 0000:88:00.0
00:04:00.213 Cleaning up...
00:04:00.213
00:04:00.213 real 0m4.561s
00:04:00.213 user 0m3.095s
00:04:00.213 sys 0m0.519s
00:04:00.213 16:12:00 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:00.213 16:12:00 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:04:00.213 ************************************
00:04:00.213 END TEST env_dpdk_post_init
00:04:00.213 ************************************
00:04:00.213 16:12:00 env -- env/env.sh@26 -- # uname
00:04:00.213 16:12:00 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:04:00.213 16:12:00 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:00.213 16:12:00 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:00.213 16:12:00 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:00.213 16:12:00 env -- common/autotest_common.sh@10 -- # set +x
00:04:00.213 ************************************
00:04:00.213 START TEST env_mem_callbacks
00:04:00.213 ************************************
00:04:00.213 16:12:00 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:00.213 EAL: Detected CPU lcores: 48
00:04:00.213 EAL: Detected NUMA nodes: 2
00:04:00.213 EAL: Detected shared linkage of DPDK
00:04:00.213 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:00.213 EAL: Selected IOVA mode 'VA'
00:04:00.213 EAL: VFIO support initialized
00:04:00.213 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:00.213
00:04:00.213
00:04:00.213 CUnit - A unit testing framework for C - Version 2.1-3
00:04:00.213 http://cunit.sourceforge.net/
00:04:00.213
00:04:00.213
00:04:00.213 Suite: memory
00:04:00.213 Test: test ...
00:04:00.213 register 0x200000200000 2097152
00:04:00.213 malloc 3145728
00:04:00.213 register 0x200000400000 4194304
00:04:00.213 buf 0x2000004fffc0 len 3145728 PASSED
00:04:00.213 malloc 64
00:04:00.213 buf 0x2000004ffec0 len 64 PASSED
00:04:00.213 malloc 4194304
00:04:00.213 register 0x200000800000 6291456
00:04:00.213 buf 0x2000009fffc0 len 4194304 PASSED
00:04:00.213 free 0x2000004fffc0 3145728
00:04:00.213 free 0x2000004ffec0 64
00:04:00.213 unregister 0x200000400000 4194304 PASSED
00:04:00.213 free 0x2000009fffc0 4194304
00:04:00.213 unregister 0x200000800000 6291456 PASSED
00:04:00.213 malloc 8388608
00:04:00.213 register 0x200000400000 10485760
00:04:00.213 buf 0x2000005fffc0 len 8388608 PASSED
00:04:00.213 free 0x2000005fffc0 8388608
00:04:00.213 unregister 0x200000400000 10485760 PASSED
00:04:00.471 passed
00:04:00.471
00:04:00.471 Run Summary: Type Total Ran Passed Failed Inactive
00:04:00.471 suites 1 1 n/a 0 0
00:04:00.471 tests 1 1 1 0 0
00:04:00.471 asserts 15 15 15 0 n/a
00:04:00.471
00:04:00.471 Elapsed time = 0.061 seconds
00:04:00.471
00:04:00.471 real 0m0.196s
00:04:00.471 user 0m0.103s
00:04:00.471 sys 0m0.092s
00:04:00.471 16:12:00 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:00.471 16:12:00 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:04:00.471 ************************************
00:04:00.471 END TEST env_mem_callbacks
00:04:00.471 ************************************
00:04:00.471
00:04:00.471 real 0m14.062s
00:04:00.471 user 0m11.087s
00:04:00.471 sys 0m1.987s
00:04:00.471 16:12:00 env -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:00.471 16:12:00 env -- common/autotest_common.sh@10 -- # set +x
00:04:00.471 ************************************
00:04:00.471 END TEST env
00:04:00.471 ************************************
00:04:00.471 16:12:00 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:00.471 16:12:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:00.471 16:12:00 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:00.471 16:12:00 -- common/autotest_common.sh@10 -- # set +x
00:04:00.471 ************************************
00:04:00.471 START TEST rpc
00:04:00.471 ************************************
00:04:00.471 16:12:00 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:00.471 * Looking for test storage...
00:04:00.471 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:00.471 16:12:00 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:04:00.471 16:12:00 rpc -- common/autotest_common.sh@1681 -- # lcov --version
00:04:00.471 16:12:00 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:04:00.471 16:12:01 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:04:00.471 16:12:01 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:00.472 16:12:01 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:00.472 16:12:01 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:00.472 16:12:01 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:04:00.472 16:12:01 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:04:00.472 16:12:01 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:04:00.472 16:12:01 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:04:00.472 16:12:01 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:04:00.472 16:12:01 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:04:00.472 16:12:01 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:00.472 16:12:01 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:00.472 16:12:01 rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:00.472 16:12:01 rpc -- scripts/common.sh@345 -- # : 1
00:04:00.472 16:12:01 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:00.472 16:12:01 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:00.472 16:12:01 rpc -- scripts/common.sh@365 -- # decimal 1
00:04:00.472 16:12:01 rpc -- scripts/common.sh@353 -- # local d=1
00:04:00.472 16:12:01 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:00.472 16:12:01 rpc -- scripts/common.sh@355 -- # echo 1
00:04:00.472 16:12:01 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:04:00.472 16:12:01 rpc -- scripts/common.sh@366 -- # decimal 2
00:04:00.472 16:12:01 rpc -- scripts/common.sh@353 -- # local d=2
00:04:00.472 16:12:01 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:00.472 16:12:01 rpc -- scripts/common.sh@355 -- # echo 2
00:04:00.472 16:12:01 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:04:00.472 16:12:01 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:00.472 16:12:01 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:00.472 16:12:01 rpc -- scripts/common.sh@368 -- # return 0
00:04:00.472 16:12:01 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:00.472 16:12:01 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:04:00.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:00.472 --rc genhtml_branch_coverage=1
00:04:00.472 --rc genhtml_function_coverage=1
00:04:00.472 --rc genhtml_legend=1
00:04:00.472 --rc geninfo_all_blocks=1
00:04:00.472 --rc geninfo_unexecuted_blocks=1
00:04:00.472
00:04:00.472 '
00:04:00.472 16:12:01 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:04:00.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:00.472 --rc genhtml_branch_coverage=1
00:04:00.472 --rc genhtml_function_coverage=1
00:04:00.472 --rc genhtml_legend=1
00:04:00.472 --rc geninfo_all_blocks=1
00:04:00.472 --rc geninfo_unexecuted_blocks=1
00:04:00.472
00:04:00.472 '
00:04:00.472 16:12:01 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:04:00.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:00.472 --rc genhtml_branch_coverage=1
00:04:00.472 --rc genhtml_function_coverage=1
00:04:00.472 --rc genhtml_legend=1
00:04:00.472 --rc geninfo_all_blocks=1
00:04:00.472 --rc geninfo_unexecuted_blocks=1
00:04:00.472
00:04:00.472 '
00:04:00.472 16:12:01 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:04:00.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:00.472 --rc genhtml_branch_coverage=1
00:04:00.472 --rc genhtml_function_coverage=1
00:04:00.472 --rc genhtml_legend=1
00:04:00.472 --rc geninfo_all_blocks=1
00:04:00.472 --rc geninfo_unexecuted_blocks=1
00:04:00.472
00:04:00.472 '
00:04:00.472 16:12:01 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3002835
00:04:00.472 16:12:01 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:04:00.472 16:12:01 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:00.472 16:12:01 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3002835
00:04:00.472 16:12:01 rpc -- common/autotest_common.sh@831 -- # '[' -z 3002835 ']'
00:04:00.472 16:12:01 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:00.472 16:12:01 rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:00.472 16:12:01 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:00.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:00.472 16:12:01 rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:00.472 16:12:01 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:00.730 [2024-09-29 16:12:01.105261] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:04:00.730 [2024-09-29 16:12:01.105408] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3002835 ]
00:04:00.731 [2024-09-29 16:12:01.233116] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:00.989 [2024-09-29 16:12:01.479299] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:04:00.989 [2024-09-29 16:12:01.479376] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3002835' to capture a snapshot of events at runtime.
00:04:00.989 [2024-09-29 16:12:01.479406] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:04:00.989 [2024-09-29 16:12:01.479428] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:04:00.989 [2024-09-29 16:12:01.479450] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3002835 for offline analysis/debug.
00:04:00.989 [2024-09-29 16:12:01.479503] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:04:01.923 16:12:02 rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:01.923 16:12:02 rpc -- common/autotest_common.sh@864 -- # return 0
00:04:01.923 16:12:02 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:01.923 16:12:02 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:01.923 16:12:02 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:04:01.923 16:12:02 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:04:01.923 16:12:02 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:01.923 16:12:02 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:01.923 16:12:02 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:01.923 ************************************
00:04:01.923 START TEST rpc_integrity
00:04:01.923 ************************************
00:04:01.923 16:12:02 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity
00:04:01.923 16:12:02 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:04:01.923 16:12:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:01.923 16:12:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:01.923 16:12:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:01.923 16:12:02 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:04:01.923 16:12:02 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:04:02.182 16:12:02 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:04:02.182 16:12:02 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:04:02.182 16:12:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:02.182 16:12:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:02.182 16:12:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:02.182 16:12:02 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:04:02.182 16:12:02 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:04:02.182 16:12:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:02.182 16:12:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:02.182 16:12:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:02.182 16:12:02 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:04:02.182 {
00:04:02.182 "name": "Malloc0",
00:04:02.182 "aliases": [
00:04:02.182 "3a90a9b4-9b0e-422c-8178-e43e6904b9ae"
00:04:02.182 ],
00:04:02.182 "product_name": "Malloc disk",
00:04:02.182 "block_size": 512,
00:04:02.182 "num_blocks": 16384,
00:04:02.182 "uuid": "3a90a9b4-9b0e-422c-8178-e43e6904b9ae",
00:04:02.182 "assigned_rate_limits": {
00:04:02.182 "rw_ios_per_sec": 0,
00:04:02.182 "rw_mbytes_per_sec": 0,
00:04:02.182 "r_mbytes_per_sec": 0,
00:04:02.182 "w_mbytes_per_sec": 0
00:04:02.182 },
00:04:02.182 "claimed": false,
00:04:02.182 "zoned": false,
00:04:02.182 "supported_io_types": {
00:04:02.182 "read": true,
00:04:02.182 "write": true,
00:04:02.182 "unmap": true,
00:04:02.182 "flush": true,
00:04:02.182 "reset": true,
00:04:02.182 "nvme_admin": false,
00:04:02.182 "nvme_io": false,
00:04:02.182 "nvme_io_md": false,
00:04:02.182 "write_zeroes": true,
00:04:02.182 "zcopy": true,
00:04:02.182 "get_zone_info": false,
00:04:02.182 "zone_management": false,
00:04:02.182 "zone_append": false,
00:04:02.182 "compare": false,
00:04:02.182 "compare_and_write": false,
00:04:02.182 "abort": true,
00:04:02.182 "seek_hole": false,
00:04:02.182 "seek_data": false,
00:04:02.182 "copy": true,
00:04:02.182 "nvme_iov_md": false
00:04:02.182 },
00:04:02.182 "memory_domains": [
00:04:02.182 {
00:04:02.182 "dma_device_id": "system",
00:04:02.182 "dma_device_type": 1
00:04:02.182 },
00:04:02.182 {
00:04:02.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:02.182 "dma_device_type": 2
00:04:02.182 }
00:04:02.182 ],
00:04:02.182 "driver_specific": {}
00:04:02.182 }
00:04:02.182 ]'
00:04:02.182 16:12:02 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:04:02.182 16:12:02 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:04:02.182 16:12:02 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:04:02.182 16:12:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:02.182 16:12:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:02.182 [2024-09-29 16:12:02.570766] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:04:02.182 [2024-09-29 16:12:02.570829] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:04:02.182 [2024-09-29 16:12:02.570872] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000022880
00:04:02.182 [2024-09-29 16:12:02.570896] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:04:02.182 [2024-09-29 16:12:02.573833] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:04:02.182 [2024-09-29 16:12:02.573869] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:04:02.182 Passthru0
00:04:02.182 16:12:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:02.182 16:12:02 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:04:02.182 16:12:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:02.182 16:12:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:02.182 16:12:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:02.182 16:12:02 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:04:02.182 {
00:04:02.182 "name": "Malloc0",
00:04:02.182 "aliases": [
00:04:02.182 "3a90a9b4-9b0e-422c-8178-e43e6904b9ae"
00:04:02.182 ],
00:04:02.182 "product_name": "Malloc disk",
00:04:02.182 "block_size": 512,
00:04:02.182 "num_blocks": 16384,
00:04:02.182 "uuid": "3a90a9b4-9b0e-422c-8178-e43e6904b9ae",
00:04:02.182 "assigned_rate_limits": {
00:04:02.182 "rw_ios_per_sec": 0,
00:04:02.182 "rw_mbytes_per_sec": 0,
00:04:02.182 "r_mbytes_per_sec": 0,
00:04:02.182 "w_mbytes_per_sec": 0
00:04:02.182 },
00:04:02.182 "claimed": true,
00:04:02.182 "claim_type": "exclusive_write",
00:04:02.182 "zoned": false,
00:04:02.182 "supported_io_types": {
00:04:02.182 "read": true,
00:04:02.182 "write": true,
00:04:02.182 "unmap": true,
00:04:02.182 "flush": true,
00:04:02.182 "reset": true,
00:04:02.182 "nvme_admin": false,
00:04:02.182 "nvme_io": false,
00:04:02.182 "nvme_io_md": false,
00:04:02.182 "write_zeroes": true,
00:04:02.182 "zcopy": true,
00:04:02.182 "get_zone_info": false,
00:04:02.182 "zone_management": false,
00:04:02.182 "zone_append": false,
00:04:02.182 "compare": false,
00:04:02.182 "compare_and_write": false,
00:04:02.182 "abort": true,
00:04:02.182 "seek_hole": false,
00:04:02.182 "seek_data": false,
00:04:02.182 "copy": true,
00:04:02.182 "nvme_iov_md": false
00:04:02.182 },
00:04:02.182 "memory_domains": [
00:04:02.182 {
00:04:02.182 "dma_device_id": "system",
00:04:02.182 "dma_device_type": 1
00:04:02.182 },
00:04:02.182 {
00:04:02.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:02.182 "dma_device_type": 2
00:04:02.182 }
00:04:02.182 ],
00:04:02.182 "driver_specific": {}
00:04:02.182 },
00:04:02.182 {
00:04:02.182 "name": "Passthru0",
00:04:02.182 "aliases": [
00:04:02.182 "4ffca47c-1d39-5253-bf08-23ad3b95143d"
00:04:02.182 ],
00:04:02.182 "product_name": "passthru",
00:04:02.182 "block_size": 512,
00:04:02.182 "num_blocks": 16384,
00:04:02.182 "uuid": "4ffca47c-1d39-5253-bf08-23ad3b95143d",
00:04:02.182 "assigned_rate_limits": {
00:04:02.182 "rw_ios_per_sec": 0,
00:04:02.182 "rw_mbytes_per_sec": 0,
00:04:02.182 "r_mbytes_per_sec": 0,
00:04:02.182 "w_mbytes_per_sec": 0
00:04:02.182 },
00:04:02.182 "claimed": false,
00:04:02.182 "zoned": false,
00:04:02.182 "supported_io_types": {
00:04:02.182 "read": true,
00:04:02.182 "write": true,
00:04:02.182 "unmap": true,
00:04:02.182 "flush": true,
00:04:02.182 "reset": true,
00:04:02.182 "nvme_admin": false,
00:04:02.182 "nvme_io": false,
00:04:02.182 "nvme_io_md": false,
00:04:02.182 "write_zeroes": true,
00:04:02.182 "zcopy": true,
00:04:02.182 "get_zone_info": false,
00:04:02.182 "zone_management": false,
00:04:02.182 "zone_append": false,
00:04:02.182 "compare": false,
00:04:02.182 "compare_and_write": false,
00:04:02.182 "abort": true,
00:04:02.182 "seek_hole": false,
00:04:02.182 "seek_data": false,
00:04:02.182 "copy": true,
00:04:02.182 "nvme_iov_md": false
00:04:02.182 },
00:04:02.182 "memory_domains": [
00:04:02.182 {
00:04:02.182 "dma_device_id": "system",
00:04:02.182 "dma_device_type": 1
00:04:02.182 },
00:04:02.182 {
00:04:02.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:02.182 "dma_device_type": 2
00:04:02.182 }
00:04:02.182 ],
00:04:02.182 "driver_specific": {
00:04:02.182 "passthru": {
00:04:02.182 "name": "Passthru0",
00:04:02.182 "base_bdev_name": "Malloc0"
00:04:02.182 }
00:04:02.182 }
00:04:02.182 }
00:04:02.182 ]'
00:04:02.182 16:12:02 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:04:02.182 16:12:02 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:04:02.182 16:12:02 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:04:02.182 16:12:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:02.182 16:12:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:02.182 16:12:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:02.182 16:12:02 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:04:02.182 16:12:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:02.182 16:12:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:02.182 16:12:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:02.182 16:12:02 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:04:02.182 16:12:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:02.182 16:12:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:02.182 16:12:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:02.182 16:12:02 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:04:02.182 16:12:02 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:04:02.182 16:12:02 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:04:02.182
00:04:02.182 real 0m0.266s
00:04:02.182 user 0m0.155s
00:04:02.182 sys 0m0.023s
00:04:02.182 16:12:02 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:02.182 16:12:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:02.182 ************************************
00:04:02.182 END TEST rpc_integrity
00:04:02.182 ************************************
00:04:02.182 16:12:02 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:04:02.183 16:12:02 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:02.183 16:12:02 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:02.183 16:12:02 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:02.441 ************************************
00:04:02.441 START TEST rpc_plugins
00:04:02.441 ************************************
00:04:02.441 16:12:02 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins
00:04:02.441 16:12:02 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:04:02.441 16:12:02 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:02.441 16:12:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:02.441 16:12:02 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:02.441 16:12:02 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:04:02.441 16:12:02 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:04:02.441 16:12:02 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:02.441 16:12:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:02.441 16:12:02 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:02.441 16:12:02 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:04:02.441 {
00:04:02.441 "name": "Malloc1",
00:04:02.441 "aliases": [
00:04:02.441 "44009543-fe18-4c27-a4af-3149c016a1e0"
00:04:02.441 ],
00:04:02.441 "product_name": "Malloc disk",
00:04:02.441 "block_size": 4096,
00:04:02.441 "num_blocks": 256,
00:04:02.441 "uuid": "44009543-fe18-4c27-a4af-3149c016a1e0",
00:04:02.442 "assigned_rate_limits": {
00:04:02.442 "rw_ios_per_sec": 0,
00:04:02.442 "rw_mbytes_per_sec": 0,
00:04:02.442 "r_mbytes_per_sec": 0,
00:04:02.442 "w_mbytes_per_sec": 0
00:04:02.442 },
00:04:02.442 "claimed": false,
00:04:02.442 "zoned": false,
00:04:02.442 "supported_io_types": {
00:04:02.442 "read": true,
00:04:02.442 "write": true,
00:04:02.442 "unmap": true,
00:04:02.442 "flush": true,
00:04:02.442 "reset": true,
00:04:02.442 "nvme_admin": false,
00:04:02.442 "nvme_io": false,
00:04:02.442 "nvme_io_md": false,
00:04:02.442 "write_zeroes": true,
00:04:02.442 "zcopy": true,
00:04:02.442 "get_zone_info": false,
00:04:02.442 "zone_management": false,
00:04:02.442 "zone_append": false,
00:04:02.442 "compare": false,
00:04:02.442 "compare_and_write": false,
00:04:02.442 "abort": true,
00:04:02.442 "seek_hole": false,
00:04:02.442 "seek_data": false,
00:04:02.442 "copy": true,
00:04:02.442 "nvme_iov_md": false
00:04:02.442 },
00:04:02.442 "memory_domains": [
00:04:02.442 {
00:04:02.442 "dma_device_id": "system",
00:04:02.442 "dma_device_type": 1
00:04:02.442 },
00:04:02.442 {
00:04:02.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:02.442 "dma_device_type": 2
00:04:02.442 }
00:04:02.442 ],
00:04:02.442 "driver_specific": {}
00:04:02.442 }
00:04:02.442 ]'
00:04:02.442 16:12:02 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:04:02.442 16:12:02 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:04:02.442 16:12:02 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:04:02.442 16:12:02 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:02.442 16:12:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:02.442 16:12:02 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:02.442 16:12:02 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:04:02.442 16:12:02 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:02.442 16:12:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:02.442 16:12:02 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:02.442 16:12:02 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:04:02.442 16:12:02 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:04:02.442 16:12:02 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:04:02.442
00:04:02.442 real 0m0.120s
00:04:02.442 user 0m0.073s
00:04:02.442 sys 0m0.015s
00:04:02.442 16:12:02 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:02.442 16:12:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:02.442 ************************************
00:04:02.442 END TEST rpc_plugins
************************************
00:04:02.442 16:12:02 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:04:02.442 16:12:02 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:02.442 16:12:02 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:02.442 16:12:02 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:02.442 ************************************
00:04:02.442 START TEST rpc_trace_cmd_test
00:04:02.442 ************************************
00:04:02.442 16:12:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test
00:04:02.442 16:12:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:04:02.442 16:12:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:04:02.442 16:12:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:02.442 16:12:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:04:02.442 16:12:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:02.442 16:12:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:04:02.442 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3002835",
00:04:02.442 "tpoint_group_mask": "0x8",
00:04:02.442 "iscsi_conn": {
00:04:02.442 "mask": "0x2",
00:04:02.442 "tpoint_mask": "0x0"
00:04:02.442 },
00:04:02.442 "scsi": {
00:04:02.442 "mask": "0x4",
00:04:02.442 "tpoint_mask": "0x0"
00:04:02.442 },
00:04:02.442 "bdev": {
00:04:02.442 "mask": "0x8",
00:04:02.442 "tpoint_mask": "0xffffffffffffffff"
00:04:02.442 },
00:04:02.442 "nvmf_rdma": {
00:04:02.442 "mask": "0x10",
00:04:02.442 "tpoint_mask": "0x0"
00:04:02.442 },
00:04:02.442 "nvmf_tcp": {
00:04:02.442 "mask": "0x20",
00:04:02.442 "tpoint_mask": "0x0"
00:04:02.442 },
00:04:02.442 "ftl": {
00:04:02.442 "mask": "0x40",
00:04:02.442 "tpoint_mask": "0x0"
00:04:02.442 },
00:04:02.442 "blobfs": {
00:04:02.442 "mask": "0x80",
00:04:02.442 "tpoint_mask": "0x0"
},
00:04:02.442 "dsa": {
00:04:02.442 "mask": "0x200",
00:04:02.442 "tpoint_mask": "0x0"
00:04:02.442 },
00:04:02.442 "thread": {
00:04:02.442 "mask": "0x400",
00:04:02.442 "tpoint_mask": "0x0"
00:04:02.442 },
00:04:02.442 "nvme_pcie": {
00:04:02.442 "mask": "0x800",
00:04:02.442 "tpoint_mask": "0x0"
00:04:02.442 },
00:04:02.442 "iaa": {
00:04:02.442 "mask": "0x1000",
00:04:02.442 "tpoint_mask": "0x0"
00:04:02.442 },
00:04:02.442 "nvme_tcp": {
00:04:02.442 "mask": "0x2000",
00:04:02.442 "tpoint_mask": "0x0"
00:04:02.442 },
00:04:02.442 "bdev_nvme": {
00:04:02.442 "mask": "0x4000",
00:04:02.442 "tpoint_mask": "0x0"
00:04:02.442 },
00:04:02.442 "sock": {
00:04:02.442 "mask": "0x8000",
00:04:02.442 "tpoint_mask": "0x0"
00:04:02.442 },
00:04:02.442 "blob": {
00:04:02.442 "mask": "0x10000",
00:04:02.442 "tpoint_mask": "0x0"
00:04:02.442 },
00:04:02.442 "bdev_raid": {
00:04:02.442 "mask": "0x20000",
00:04:02.442 "tpoint_mask": "0x0"
00:04:02.442 }
00:04:02.442 }'
00:04:02.442 16:12:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:04:02.442 16:12:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']'
00:04:02.442 16:12:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:04:02.701 16:12:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:04:02.701 16:12:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:04:02.701 16:12:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:04:02.701 16:12:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:04:02.701 16:12:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:04:02.701 16:12:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:04:02.701 16:12:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:04:02.701
00:04:02.701 real 0m0.195s
00:04:02.701 user 0m0.173s
00:04:02.701 sys 0m0.014s
00:04:02.701 16:12:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:02.701 16:12:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:02.701 ************************************ 00:04:02.701 END TEST rpc_trace_cmd_test 00:04:02.701 ************************************ 00:04:02.701 16:12:03 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:02.701 16:12:03 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:02.701 16:12:03 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:02.701 16:12:03 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:02.701 16:12:03 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:02.701 16:12:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.701 ************************************ 00:04:02.701 START TEST rpc_daemon_integrity 00:04:02.701 ************************************ 00:04:02.701 16:12:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:02.701 16:12:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:02.701 16:12:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:02.701 16:12:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.701 16:12:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:02.701 16:12:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:02.701 16:12:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:02.701 16:12:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:02.701 16:12:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:02.701 16:12:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:02.701 16:12:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.701 16:12:03 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:02.701 16:12:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:02.701 16:12:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:02.701 16:12:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:02.701 16:12:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.701 16:12:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:02.701 16:12:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:02.701 { 00:04:02.701 "name": "Malloc2", 00:04:02.701 "aliases": [ 00:04:02.701 "af2ca100-65b6-4a3d-b74a-17d4cb56190a" 00:04:02.701 ], 00:04:02.701 "product_name": "Malloc disk", 00:04:02.701 "block_size": 512, 00:04:02.701 "num_blocks": 16384, 00:04:02.701 "uuid": "af2ca100-65b6-4a3d-b74a-17d4cb56190a", 00:04:02.701 "assigned_rate_limits": { 00:04:02.701 "rw_ios_per_sec": 0, 00:04:02.701 "rw_mbytes_per_sec": 0, 00:04:02.701 "r_mbytes_per_sec": 0, 00:04:02.701 "w_mbytes_per_sec": 0 00:04:02.701 }, 00:04:02.701 "claimed": false, 00:04:02.701 "zoned": false, 00:04:02.701 "supported_io_types": { 00:04:02.701 "read": true, 00:04:02.701 "write": true, 00:04:02.701 "unmap": true, 00:04:02.701 "flush": true, 00:04:02.701 "reset": true, 00:04:02.701 "nvme_admin": false, 00:04:02.701 "nvme_io": false, 00:04:02.701 "nvme_io_md": false, 00:04:02.701 "write_zeroes": true, 00:04:02.701 "zcopy": true, 00:04:02.701 "get_zone_info": false, 00:04:02.701 "zone_management": false, 00:04:02.701 "zone_append": false, 00:04:02.701 "compare": false, 00:04:02.701 "compare_and_write": false, 00:04:02.701 "abort": true, 00:04:02.701 "seek_hole": false, 00:04:02.701 "seek_data": false, 00:04:02.701 "copy": true, 00:04:02.701 "nvme_iov_md": false 00:04:02.701 }, 00:04:02.701 "memory_domains": [ 00:04:02.701 { 00:04:02.701 "dma_device_id": "system", 00:04:02.701 "dma_device_type": 1 00:04:02.701 }, 
00:04:02.701 { 00:04:02.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:02.701 "dma_device_type": 2 00:04:02.701 } 00:04:02.701 ], 00:04:02.701 "driver_specific": {} 00:04:02.701 } 00:04:02.701 ]' 00:04:02.701 16:12:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:02.960 16:12:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:02.960 16:12:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:02.960 16:12:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:02.960 16:12:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.960 [2024-09-29 16:12:03.297869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:02.960 [2024-09-29 16:12:03.297930] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:02.960 [2024-09-29 16:12:03.297987] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000023a80 00:04:02.960 [2024-09-29 16:12:03.298011] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:02.960 [2024-09-29 16:12:03.300866] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:02.960 [2024-09-29 16:12:03.300900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:02.960 Passthru0 00:04:02.960 16:12:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:02.960 16:12:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:02.960 16:12:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:02.960 16:12:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.960 16:12:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:02.960 16:12:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:02.960 
{ 00:04:02.960 "name": "Malloc2", 00:04:02.960 "aliases": [ 00:04:02.960 "af2ca100-65b6-4a3d-b74a-17d4cb56190a" 00:04:02.960 ], 00:04:02.960 "product_name": "Malloc disk", 00:04:02.960 "block_size": 512, 00:04:02.960 "num_blocks": 16384, 00:04:02.960 "uuid": "af2ca100-65b6-4a3d-b74a-17d4cb56190a", 00:04:02.960 "assigned_rate_limits": { 00:04:02.960 "rw_ios_per_sec": 0, 00:04:02.960 "rw_mbytes_per_sec": 0, 00:04:02.960 "r_mbytes_per_sec": 0, 00:04:02.960 "w_mbytes_per_sec": 0 00:04:02.960 }, 00:04:02.960 "claimed": true, 00:04:02.960 "claim_type": "exclusive_write", 00:04:02.960 "zoned": false, 00:04:02.960 "supported_io_types": { 00:04:02.960 "read": true, 00:04:02.960 "write": true, 00:04:02.960 "unmap": true, 00:04:02.960 "flush": true, 00:04:02.960 "reset": true, 00:04:02.960 "nvme_admin": false, 00:04:02.960 "nvme_io": false, 00:04:02.960 "nvme_io_md": false, 00:04:02.960 "write_zeroes": true, 00:04:02.960 "zcopy": true, 00:04:02.960 "get_zone_info": false, 00:04:02.960 "zone_management": false, 00:04:02.960 "zone_append": false, 00:04:02.960 "compare": false, 00:04:02.960 "compare_and_write": false, 00:04:02.960 "abort": true, 00:04:02.960 "seek_hole": false, 00:04:02.960 "seek_data": false, 00:04:02.960 "copy": true, 00:04:02.960 "nvme_iov_md": false 00:04:02.960 }, 00:04:02.960 "memory_domains": [ 00:04:02.960 { 00:04:02.960 "dma_device_id": "system", 00:04:02.960 "dma_device_type": 1 00:04:02.960 }, 00:04:02.960 { 00:04:02.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:02.960 "dma_device_type": 2 00:04:02.960 } 00:04:02.960 ], 00:04:02.960 "driver_specific": {} 00:04:02.960 }, 00:04:02.960 { 00:04:02.960 "name": "Passthru0", 00:04:02.960 "aliases": [ 00:04:02.960 "dcabc8e9-c0a2-5662-a463-b220e2bf5298" 00:04:02.960 ], 00:04:02.960 "product_name": "passthru", 00:04:02.960 "block_size": 512, 00:04:02.960 "num_blocks": 16384, 00:04:02.960 "uuid": "dcabc8e9-c0a2-5662-a463-b220e2bf5298", 00:04:02.960 "assigned_rate_limits": { 00:04:02.960 "rw_ios_per_sec": 
0, 00:04:02.960 "rw_mbytes_per_sec": 0, 00:04:02.960 "r_mbytes_per_sec": 0, 00:04:02.960 "w_mbytes_per_sec": 0 00:04:02.960 }, 00:04:02.960 "claimed": false, 00:04:02.960 "zoned": false, 00:04:02.960 "supported_io_types": { 00:04:02.960 "read": true, 00:04:02.960 "write": true, 00:04:02.960 "unmap": true, 00:04:02.961 "flush": true, 00:04:02.961 "reset": true, 00:04:02.961 "nvme_admin": false, 00:04:02.961 "nvme_io": false, 00:04:02.961 "nvme_io_md": false, 00:04:02.961 "write_zeroes": true, 00:04:02.961 "zcopy": true, 00:04:02.961 "get_zone_info": false, 00:04:02.961 "zone_management": false, 00:04:02.961 "zone_append": false, 00:04:02.961 "compare": false, 00:04:02.961 "compare_and_write": false, 00:04:02.961 "abort": true, 00:04:02.961 "seek_hole": false, 00:04:02.961 "seek_data": false, 00:04:02.961 "copy": true, 00:04:02.961 "nvme_iov_md": false 00:04:02.961 }, 00:04:02.961 "memory_domains": [ 00:04:02.961 { 00:04:02.961 "dma_device_id": "system", 00:04:02.961 "dma_device_type": 1 00:04:02.961 }, 00:04:02.961 { 00:04:02.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:02.961 "dma_device_type": 2 00:04:02.961 } 00:04:02.961 ], 00:04:02.961 "driver_specific": { 00:04:02.961 "passthru": { 00:04:02.961 "name": "Passthru0", 00:04:02.961 "base_bdev_name": "Malloc2" 00:04:02.961 } 00:04:02.961 } 00:04:02.961 } 00:04:02.961 ]' 00:04:02.961 16:12:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:02.961 16:12:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:02.961 16:12:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:02.961 16:12:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:02.961 16:12:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.961 16:12:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:02.961 16:12:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd 
bdev_malloc_delete Malloc2 00:04:02.961 16:12:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:02.961 16:12:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.961 16:12:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:02.961 16:12:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:02.961 16:12:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:02.961 16:12:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.961 16:12:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:02.961 16:12:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:02.961 16:12:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:02.961 16:12:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:02.961 00:04:02.961 real 0m0.264s 00:04:02.961 user 0m0.154s 00:04:02.961 sys 0m0.022s 00:04:02.961 16:12:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:02.961 16:12:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.961 ************************************ 00:04:02.961 END TEST rpc_daemon_integrity 00:04:02.961 ************************************ 00:04:02.961 16:12:03 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:02.961 16:12:03 rpc -- rpc/rpc.sh@84 -- # killprocess 3002835 00:04:02.961 16:12:03 rpc -- common/autotest_common.sh@950 -- # '[' -z 3002835 ']' 00:04:02.961 16:12:03 rpc -- common/autotest_common.sh@954 -- # kill -0 3002835 00:04:02.961 16:12:03 rpc -- common/autotest_common.sh@955 -- # uname 00:04:02.961 16:12:03 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:02.961 16:12:03 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3002835 00:04:02.961 16:12:03 rpc -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:04:02.961 16:12:03 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:02.961 16:12:03 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3002835' 00:04:02.961 killing process with pid 3002835 00:04:02.961 16:12:03 rpc -- common/autotest_common.sh@969 -- # kill 3002835 00:04:02.961 16:12:03 rpc -- common/autotest_common.sh@974 -- # wait 3002835 00:04:06.244 00:04:06.244 real 0m5.248s 00:04:06.244 user 0m5.750s 00:04:06.244 sys 0m0.903s 00:04:06.244 16:12:06 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:06.244 16:12:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.244 ************************************ 00:04:06.244 END TEST rpc 00:04:06.244 ************************************ 00:04:06.244 16:12:06 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:06.244 16:12:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:06.244 16:12:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:06.244 16:12:06 -- common/autotest_common.sh@10 -- # set +x 00:04:06.244 ************************************ 00:04:06.244 START TEST skip_rpc 00:04:06.244 ************************************ 00:04:06.244 16:12:06 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:06.244 * Looking for test storage... 
00:04:06.245 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:06.245 16:12:06 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:06.245 16:12:06 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:06.245 16:12:06 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:06.245 16:12:06 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:06.245 16:12:06 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:06.245 16:12:06 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:06.245 16:12:06 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:06.245 16:12:06 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:06.245 16:12:06 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:06.245 16:12:06 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:06.245 16:12:06 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:06.245 16:12:06 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:06.245 16:12:06 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:06.245 16:12:06 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:06.245 16:12:06 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:06.245 16:12:06 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:06.245 16:12:06 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:06.245 16:12:06 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:06.245 16:12:06 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:06.245 16:12:06 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:06.245 16:12:06 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:06.245 16:12:06 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:06.245 16:12:06 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:06.245 16:12:06 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:06.245 16:12:06 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:06.245 16:12:06 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:06.245 16:12:06 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:06.245 16:12:06 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:06.245 16:12:06 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:06.245 16:12:06 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:06.245 16:12:06 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:06.245 16:12:06 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:06.245 16:12:06 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:06.245 16:12:06 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:06.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.245 --rc genhtml_branch_coverage=1 00:04:06.245 --rc genhtml_function_coverage=1 00:04:06.245 --rc genhtml_legend=1 00:04:06.245 --rc geninfo_all_blocks=1 00:04:06.245 --rc geninfo_unexecuted_blocks=1 00:04:06.245 00:04:06.245 ' 00:04:06.245 16:12:06 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:06.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.245 --rc genhtml_branch_coverage=1 00:04:06.245 --rc genhtml_function_coverage=1 00:04:06.245 --rc genhtml_legend=1 00:04:06.245 --rc geninfo_all_blocks=1 00:04:06.245 --rc geninfo_unexecuted_blocks=1 00:04:06.245 00:04:06.245 ' 00:04:06.245 16:12:06 skip_rpc -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:04:06.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.245 --rc genhtml_branch_coverage=1 00:04:06.245 --rc genhtml_function_coverage=1 00:04:06.245 --rc genhtml_legend=1 00:04:06.245 --rc geninfo_all_blocks=1 00:04:06.245 --rc geninfo_unexecuted_blocks=1 00:04:06.245 00:04:06.245 ' 00:04:06.245 16:12:06 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:06.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.245 --rc genhtml_branch_coverage=1 00:04:06.245 --rc genhtml_function_coverage=1 00:04:06.245 --rc genhtml_legend=1 00:04:06.245 --rc geninfo_all_blocks=1 00:04:06.245 --rc geninfo_unexecuted_blocks=1 00:04:06.245 00:04:06.245 ' 00:04:06.245 16:12:06 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:06.245 16:12:06 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:06.245 16:12:06 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:06.245 16:12:06 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:06.245 16:12:06 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:06.245 16:12:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.245 ************************************ 00:04:06.245 START TEST skip_rpc 00:04:06.245 ************************************ 00:04:06.245 16:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:06.245 16:12:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3003722 00:04:06.245 16:12:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:06.245 16:12:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:06.245 16:12:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:04:06.245 [2024-09-29 16:12:06.433872] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:04:06.245 [2024-09-29 16:12:06.434014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3003722 ] 00:04:06.245 [2024-09-29 16:12:06.568026] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.503 [2024-09-29 16:12:06.825561] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.765 16:12:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:11.765 16:12:11 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:11.765 16:12:11 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:11.765 16:12:11 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:11.765 16:12:11 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:11.765 16:12:11 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:11.765 16:12:11 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:11.765 16:12:11 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:11.765 16:12:11 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:11.765 16:12:11 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.765 16:12:11 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:11.765 16:12:11 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:11.765 16:12:11 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:11.765 16:12:11 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:11.765 16:12:11 
skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:11.765 16:12:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:11.765 16:12:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3003722 00:04:11.765 16:12:11 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 3003722 ']' 00:04:11.765 16:12:11 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 3003722 00:04:11.765 16:12:11 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:11.765 16:12:11 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:11.765 16:12:11 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3003722 00:04:11.765 16:12:11 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:11.765 16:12:11 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:11.765 16:12:11 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3003722' 00:04:11.765 killing process with pid 3003722 00:04:11.765 16:12:11 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 3003722 00:04:11.765 16:12:11 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 3003722 00:04:13.661 00:04:13.661 real 0m7.662s 00:04:13.661 user 0m7.120s 00:04:13.661 sys 0m0.538s 00:04:13.661 16:12:13 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:13.661 16:12:13 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.661 ************************************ 00:04:13.661 END TEST skip_rpc 00:04:13.661 ************************************ 00:04:13.661 16:12:14 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:13.661 16:12:14 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:13.661 16:12:14 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:13.661 16:12:14 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.661 ************************************ 00:04:13.661 START TEST skip_rpc_with_json 00:04:13.661 ************************************ 00:04:13.661 16:12:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:13.661 16:12:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:13.661 16:12:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3005189 00:04:13.661 16:12:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:13.661 16:12:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:13.661 16:12:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3005189 00:04:13.661 16:12:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 3005189 ']' 00:04:13.661 16:12:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:13.661 16:12:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:13.661 16:12:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:13.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:13.661 16:12:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:13.661 16:12:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:13.661 [2024-09-29 16:12:14.148678] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:04:13.661 [2024-09-29 16:12:14.148857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3005189 ] 00:04:13.919 [2024-09-29 16:12:14.285836] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.176 [2024-09-29 16:12:14.539685] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.127 16:12:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:15.127 16:12:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:15.127 16:12:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:15.127 16:12:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.127 16:12:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:15.127 [2024-09-29 16:12:15.489660] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:15.127 request: 00:04:15.127 { 00:04:15.127 "trtype": "tcp", 00:04:15.127 "method": "nvmf_get_transports", 00:04:15.127 "req_id": 1 00:04:15.127 } 00:04:15.127 Got JSON-RPC error response 00:04:15.127 response: 00:04:15.127 { 00:04:15.127 "code": -19, 00:04:15.127 "message": "No such device" 00:04:15.127 } 00:04:15.127 16:12:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:15.127 16:12:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:15.127 16:12:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.127 16:12:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:15.127 [2024-09-29 16:12:15.497824] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:15.127 16:12:15 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.127 16:12:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:15.127 16:12:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.127 16:12:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:15.127 16:12:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.127 16:12:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:15.127 { 00:04:15.127 "subsystems": [ 00:04:15.127 { 00:04:15.127 "subsystem": "fsdev", 00:04:15.127 "config": [ 00:04:15.127 { 00:04:15.127 "method": "fsdev_set_opts", 00:04:15.127 "params": { 00:04:15.127 "fsdev_io_pool_size": 65535, 00:04:15.127 "fsdev_io_cache_size": 256 00:04:15.127 } 00:04:15.127 } 00:04:15.127 ] 00:04:15.127 }, 00:04:15.127 { 00:04:15.127 "subsystem": "keyring", 00:04:15.127 "config": [] 00:04:15.127 }, 00:04:15.127 { 00:04:15.127 "subsystem": "iobuf", 00:04:15.127 "config": [ 00:04:15.127 { 00:04:15.127 "method": "iobuf_set_options", 00:04:15.127 "params": { 00:04:15.127 "small_pool_count": 8192, 00:04:15.127 "large_pool_count": 1024, 00:04:15.127 "small_bufsize": 8192, 00:04:15.127 "large_bufsize": 135168 00:04:15.127 } 00:04:15.127 } 00:04:15.127 ] 00:04:15.127 }, 00:04:15.127 { 00:04:15.127 "subsystem": "sock", 00:04:15.127 "config": [ 00:04:15.127 { 00:04:15.127 "method": "sock_set_default_impl", 00:04:15.127 "params": { 00:04:15.127 "impl_name": "posix" 00:04:15.127 } 00:04:15.127 }, 00:04:15.127 { 00:04:15.127 "method": "sock_impl_set_options", 00:04:15.127 "params": { 00:04:15.127 "impl_name": "ssl", 00:04:15.127 "recv_buf_size": 4096, 00:04:15.127 "send_buf_size": 4096, 00:04:15.127 "enable_recv_pipe": true, 00:04:15.127 "enable_quickack": false, 00:04:15.127 "enable_placement_id": 0, 00:04:15.127 
"enable_zerocopy_send_server": true, 00:04:15.127 "enable_zerocopy_send_client": false, 00:04:15.127 "zerocopy_threshold": 0, 00:04:15.127 "tls_version": 0, 00:04:15.127 "enable_ktls": false 00:04:15.127 } 00:04:15.127 }, 00:04:15.127 { 00:04:15.127 "method": "sock_impl_set_options", 00:04:15.127 "params": { 00:04:15.127 "impl_name": "posix", 00:04:15.127 "recv_buf_size": 2097152, 00:04:15.127 "send_buf_size": 2097152, 00:04:15.127 "enable_recv_pipe": true, 00:04:15.127 "enable_quickack": false, 00:04:15.127 "enable_placement_id": 0, 00:04:15.127 "enable_zerocopy_send_server": true, 00:04:15.127 "enable_zerocopy_send_client": false, 00:04:15.127 "zerocopy_threshold": 0, 00:04:15.127 "tls_version": 0, 00:04:15.127 "enable_ktls": false 00:04:15.127 } 00:04:15.127 } 00:04:15.127 ] 00:04:15.127 }, 00:04:15.127 { 00:04:15.127 "subsystem": "vmd", 00:04:15.127 "config": [] 00:04:15.127 }, 00:04:15.127 { 00:04:15.127 "subsystem": "accel", 00:04:15.127 "config": [ 00:04:15.127 { 00:04:15.127 "method": "accel_set_options", 00:04:15.127 "params": { 00:04:15.127 "small_cache_size": 128, 00:04:15.127 "large_cache_size": 16, 00:04:15.127 "task_count": 2048, 00:04:15.127 "sequence_count": 2048, 00:04:15.127 "buf_count": 2048 00:04:15.127 } 00:04:15.127 } 00:04:15.127 ] 00:04:15.127 }, 00:04:15.127 { 00:04:15.127 "subsystem": "bdev", 00:04:15.127 "config": [ 00:04:15.127 { 00:04:15.127 "method": "bdev_set_options", 00:04:15.127 "params": { 00:04:15.127 "bdev_io_pool_size": 65535, 00:04:15.127 "bdev_io_cache_size": 256, 00:04:15.127 "bdev_auto_examine": true, 00:04:15.127 "iobuf_small_cache_size": 128, 00:04:15.127 "iobuf_large_cache_size": 16 00:04:15.127 } 00:04:15.127 }, 00:04:15.127 { 00:04:15.127 "method": "bdev_raid_set_options", 00:04:15.127 "params": { 00:04:15.127 "process_window_size_kb": 1024, 00:04:15.127 "process_max_bandwidth_mb_sec": 0 00:04:15.127 } 00:04:15.127 }, 00:04:15.127 { 00:04:15.127 "method": "bdev_iscsi_set_options", 00:04:15.127 "params": { 00:04:15.127 
"timeout_sec": 30 00:04:15.127 } 00:04:15.127 }, 00:04:15.127 { 00:04:15.127 "method": "bdev_nvme_set_options", 00:04:15.127 "params": { 00:04:15.127 "action_on_timeout": "none", 00:04:15.127 "timeout_us": 0, 00:04:15.127 "timeout_admin_us": 0, 00:04:15.127 "keep_alive_timeout_ms": 10000, 00:04:15.127 "arbitration_burst": 0, 00:04:15.127 "low_priority_weight": 0, 00:04:15.127 "medium_priority_weight": 0, 00:04:15.127 "high_priority_weight": 0, 00:04:15.127 "nvme_adminq_poll_period_us": 10000, 00:04:15.127 "nvme_ioq_poll_period_us": 0, 00:04:15.127 "io_queue_requests": 0, 00:04:15.127 "delay_cmd_submit": true, 00:04:15.127 "transport_retry_count": 4, 00:04:15.127 "bdev_retry_count": 3, 00:04:15.127 "transport_ack_timeout": 0, 00:04:15.127 "ctrlr_loss_timeout_sec": 0, 00:04:15.127 "reconnect_delay_sec": 0, 00:04:15.127 "fast_io_fail_timeout_sec": 0, 00:04:15.127 "disable_auto_failback": false, 00:04:15.127 "generate_uuids": false, 00:04:15.127 "transport_tos": 0, 00:04:15.127 "nvme_error_stat": false, 00:04:15.127 "rdma_srq_size": 0, 00:04:15.127 "io_path_stat": false, 00:04:15.127 "allow_accel_sequence": false, 00:04:15.127 "rdma_max_cq_size": 0, 00:04:15.127 "rdma_cm_event_timeout_ms": 0, 00:04:15.127 "dhchap_digests": [ 00:04:15.127 "sha256", 00:04:15.127 "sha384", 00:04:15.127 "sha512" 00:04:15.127 ], 00:04:15.127 "dhchap_dhgroups": [ 00:04:15.127 "null", 00:04:15.127 "ffdhe2048", 00:04:15.127 "ffdhe3072", 00:04:15.127 "ffdhe4096", 00:04:15.127 "ffdhe6144", 00:04:15.128 "ffdhe8192" 00:04:15.128 ] 00:04:15.128 } 00:04:15.128 }, 00:04:15.128 { 00:04:15.128 "method": "bdev_nvme_set_hotplug", 00:04:15.128 "params": { 00:04:15.128 "period_us": 100000, 00:04:15.128 "enable": false 00:04:15.128 } 00:04:15.128 }, 00:04:15.128 { 00:04:15.128 "method": "bdev_wait_for_examine" 00:04:15.128 } 00:04:15.128 ] 00:04:15.128 }, 00:04:15.128 { 00:04:15.128 "subsystem": "scsi", 00:04:15.128 "config": null 00:04:15.128 }, 00:04:15.128 { 00:04:15.128 "subsystem": "scheduler", 
00:04:15.128 "config": [ 00:04:15.128 { 00:04:15.128 "method": "framework_set_scheduler", 00:04:15.128 "params": { 00:04:15.128 "name": "static" 00:04:15.128 } 00:04:15.128 } 00:04:15.128 ] 00:04:15.128 }, 00:04:15.128 { 00:04:15.128 "subsystem": "vhost_scsi", 00:04:15.128 "config": [] 00:04:15.128 }, 00:04:15.128 { 00:04:15.128 "subsystem": "vhost_blk", 00:04:15.128 "config": [] 00:04:15.128 }, 00:04:15.128 { 00:04:15.128 "subsystem": "ublk", 00:04:15.128 "config": [] 00:04:15.128 }, 00:04:15.128 { 00:04:15.128 "subsystem": "nbd", 00:04:15.128 "config": [] 00:04:15.128 }, 00:04:15.128 { 00:04:15.128 "subsystem": "nvmf", 00:04:15.128 "config": [ 00:04:15.128 { 00:04:15.128 "method": "nvmf_set_config", 00:04:15.128 "params": { 00:04:15.128 "discovery_filter": "match_any", 00:04:15.128 "admin_cmd_passthru": { 00:04:15.128 "identify_ctrlr": false 00:04:15.128 }, 00:04:15.128 "dhchap_digests": [ 00:04:15.128 "sha256", 00:04:15.128 "sha384", 00:04:15.128 "sha512" 00:04:15.128 ], 00:04:15.128 "dhchap_dhgroups": [ 00:04:15.128 "null", 00:04:15.128 "ffdhe2048", 00:04:15.128 "ffdhe3072", 00:04:15.128 "ffdhe4096", 00:04:15.128 "ffdhe6144", 00:04:15.128 "ffdhe8192" 00:04:15.128 ] 00:04:15.128 } 00:04:15.128 }, 00:04:15.128 { 00:04:15.128 "method": "nvmf_set_max_subsystems", 00:04:15.128 "params": { 00:04:15.128 "max_subsystems": 1024 00:04:15.128 } 00:04:15.128 }, 00:04:15.128 { 00:04:15.128 "method": "nvmf_set_crdt", 00:04:15.128 "params": { 00:04:15.128 "crdt1": 0, 00:04:15.128 "crdt2": 0, 00:04:15.128 "crdt3": 0 00:04:15.128 } 00:04:15.128 }, 00:04:15.128 { 00:04:15.128 "method": "nvmf_create_transport", 00:04:15.128 "params": { 00:04:15.128 "trtype": "TCP", 00:04:15.128 "max_queue_depth": 128, 00:04:15.128 "max_io_qpairs_per_ctrlr": 127, 00:04:15.128 "in_capsule_data_size": 4096, 00:04:15.128 "max_io_size": 131072, 00:04:15.128 "io_unit_size": 131072, 00:04:15.128 "max_aq_depth": 128, 00:04:15.128 "num_shared_buffers": 511, 00:04:15.128 "buf_cache_size": 4294967295, 
00:04:15.128 "dif_insert_or_strip": false, 00:04:15.128 "zcopy": false, 00:04:15.128 "c2h_success": true, 00:04:15.128 "sock_priority": 0, 00:04:15.128 "abort_timeout_sec": 1, 00:04:15.128 "ack_timeout": 0, 00:04:15.128 "data_wr_pool_size": 0 00:04:15.128 } 00:04:15.128 } 00:04:15.128 ] 00:04:15.128 }, 00:04:15.128 { 00:04:15.128 "subsystem": "iscsi", 00:04:15.128 "config": [ 00:04:15.128 { 00:04:15.128 "method": "iscsi_set_options", 00:04:15.128 "params": { 00:04:15.128 "node_base": "iqn.2016-06.io.spdk", 00:04:15.128 "max_sessions": 128, 00:04:15.128 "max_connections_per_session": 2, 00:04:15.128 "max_queue_depth": 64, 00:04:15.128 "default_time2wait": 2, 00:04:15.128 "default_time2retain": 20, 00:04:15.128 "first_burst_length": 8192, 00:04:15.128 "immediate_data": true, 00:04:15.128 "allow_duplicated_isid": false, 00:04:15.128 "error_recovery_level": 0, 00:04:15.128 "nop_timeout": 60, 00:04:15.128 "nop_in_interval": 30, 00:04:15.128 "disable_chap": false, 00:04:15.128 "require_chap": false, 00:04:15.128 "mutual_chap": false, 00:04:15.128 "chap_group": 0, 00:04:15.128 "max_large_datain_per_connection": 64, 00:04:15.128 "max_r2t_per_connection": 4, 00:04:15.128 "pdu_pool_size": 36864, 00:04:15.128 "immediate_data_pool_size": 16384, 00:04:15.128 "data_out_pool_size": 2048 00:04:15.128 } 00:04:15.128 } 00:04:15.128 ] 00:04:15.128 } 00:04:15.128 ] 00:04:15.128 } 00:04:15.128 16:12:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:15.128 16:12:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3005189 00:04:15.128 16:12:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 3005189 ']' 00:04:15.128 16:12:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 3005189 00:04:15.128 16:12:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:15.128 16:12:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:04:15.128 16:12:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3005189 00:04:15.386 16:12:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:15.386 16:12:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:15.386 16:12:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3005189' 00:04:15.386 killing process with pid 3005189 00:04:15.386 16:12:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 3005189 00:04:15.386 16:12:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 3005189 00:04:17.913 16:12:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3005608 00:04:17.913 16:12:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:17.913 16:12:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:23.240 16:12:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3005608 00:04:23.240 16:12:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 3005608 ']' 00:04:23.240 16:12:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 3005608 00:04:23.240 16:12:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:23.240 16:12:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:23.240 16:12:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3005608 00:04:23.240 16:12:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:23.240 16:12:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 
= sudo ']' 00:04:23.240 16:12:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3005608' 00:04:23.240 killing process with pid 3005608 00:04:23.240 16:12:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 3005608 00:04:23.240 16:12:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 3005608 00:04:25.770 16:12:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:25.770 16:12:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:25.770 00:04:25.770 real 0m11.920s 00:04:25.770 user 0m11.326s 00:04:25.770 sys 0m1.161s 00:04:25.770 16:12:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:25.770 16:12:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:25.770 ************************************ 00:04:25.770 END TEST skip_rpc_with_json 00:04:25.770 ************************************ 00:04:25.770 16:12:25 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:25.770 16:12:25 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:25.770 16:12:25 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:25.770 16:12:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.770 ************************************ 00:04:25.770 START TEST skip_rpc_with_delay 00:04:25.770 ************************************ 00:04:25.770 16:12:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:25.770 16:12:26 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:25.770 16:12:26 skip_rpc.skip_rpc_with_delay 
-- common/autotest_common.sh@650 -- # local es=0 00:04:25.770 16:12:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:25.770 16:12:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:25.770 16:12:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:25.770 16:12:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:25.770 16:12:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:25.770 16:12:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:25.770 16:12:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:25.770 16:12:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:25.770 16:12:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:25.770 16:12:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:25.770 [2024-09-29 16:12:26.112722] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:25.770 [2024-09-29 16:12:26.112902] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:25.770 16:12:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:25.770 16:12:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:25.770 16:12:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:25.770 16:12:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:25.770 00:04:25.770 real 0m0.163s 00:04:25.770 user 0m0.089s 00:04:25.770 sys 0m0.073s 00:04:25.770 16:12:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:25.770 16:12:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:25.770 ************************************ 00:04:25.770 END TEST skip_rpc_with_delay 00:04:25.770 ************************************ 00:04:25.770 16:12:26 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:25.770 16:12:26 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:25.770 16:12:26 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:25.770 16:12:26 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:25.770 16:12:26 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:25.770 16:12:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.770 ************************************ 00:04:25.770 START TEST exit_on_failed_rpc_init 00:04:25.770 ************************************ 00:04:25.770 16:12:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:25.770 16:12:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3006601 00:04:25.770 16:12:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:25.770 16:12:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3006601 00:04:25.770 16:12:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 3006601 ']' 00:04:25.771 16:12:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.771 16:12:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:25.771 16:12:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:25.771 16:12:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:25.771 16:12:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:25.771 [2024-09-29 16:12:26.313743] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:04:25.771 [2024-09-29 16:12:26.313929] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3006601 ] 00:04:26.029 [2024-09-29 16:12:26.444034] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.287 [2024-09-29 16:12:26.693550] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.224 16:12:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:27.224 16:12:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:27.224 16:12:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:27.224 16:12:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:27.224 16:12:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:27.224 16:12:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:27.224 16:12:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.224 16:12:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:27.224 16:12:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.224 16:12:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:27.224 16:12:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.224 16:12:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:27.224 16:12:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.224 16:12:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:27.224 16:12:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:27.224 [2024-09-29 16:12:27.717192] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:04:27.224 [2024-09-29 16:12:27.717355] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3006855 ] 00:04:27.482 [2024-09-29 16:12:27.853778] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.740 [2024-09-29 16:12:28.108110] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:27.740 [2024-09-29 16:12:28.108256] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:27.740 [2024-09-29 16:12:28.108291] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:27.740 [2024-09-29 16:12:28.108314] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:28.306 16:12:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:28.306 16:12:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:28.306 16:12:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:28.306 16:12:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:28.306 16:12:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:28.306 16:12:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:28.306 16:12:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:28.306 16:12:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3006601 00:04:28.306 16:12:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 3006601 ']' 00:04:28.306 16:12:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 3006601 00:04:28.306 16:12:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:28.306 16:12:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:28.306 16:12:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3006601 00:04:28.306 16:12:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:28.306 16:12:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:28.306 16:12:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3006601' 
00:04:28.306 killing process with pid 3006601 00:04:28.306 16:12:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 3006601 00:04:28.306 16:12:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 3006601 00:04:30.833 00:04:30.833 real 0m5.005s 00:04:30.833 user 0m5.725s 00:04:30.833 sys 0m0.783s 00:04:30.833 16:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:30.833 16:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:30.833 ************************************ 00:04:30.833 END TEST exit_on_failed_rpc_init 00:04:30.833 ************************************ 00:04:30.833 16:12:31 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:30.833 00:04:30.833 real 0m25.072s 00:04:30.833 user 0m24.435s 00:04:30.833 sys 0m2.724s 00:04:30.833 16:12:31 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:30.833 16:12:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.833 ************************************ 00:04:30.833 END TEST skip_rpc 00:04:30.833 ************************************ 00:04:30.833 16:12:31 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:30.833 16:12:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:30.833 16:12:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:30.833 16:12:31 -- common/autotest_common.sh@10 -- # set +x 00:04:30.833 ************************************ 00:04:30.833 START TEST rpc_client 00:04:30.833 ************************************ 00:04:30.833 16:12:31 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:30.833 * Looking for test storage... 
00:04:30.833 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:30.833 16:12:31 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:30.833 16:12:31 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:04:30.833 16:12:31 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:31.092 16:12:31 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:31.092 16:12:31 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:31.092 16:12:31 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:31.092 16:12:31 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:31.092 16:12:31 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:31.092 16:12:31 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:31.092 16:12:31 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:31.092 16:12:31 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:31.092 16:12:31 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:31.092 16:12:31 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:31.092 16:12:31 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:31.092 16:12:31 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:31.092 16:12:31 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:31.092 16:12:31 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:31.092 16:12:31 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:31.092 16:12:31 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:31.092 16:12:31 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:31.092 16:12:31 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:31.092 16:12:31 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:31.092 16:12:31 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:31.092 16:12:31 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:31.092 16:12:31 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:31.092 16:12:31 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:31.092 16:12:31 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:31.092 16:12:31 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:31.092 16:12:31 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:31.092 16:12:31 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:31.092 16:12:31 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:31.092 16:12:31 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:31.092 16:12:31 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:31.092 16:12:31 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:31.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.092 --rc genhtml_branch_coverage=1 00:04:31.092 --rc genhtml_function_coverage=1 00:04:31.092 --rc genhtml_legend=1 00:04:31.092 --rc geninfo_all_blocks=1 00:04:31.092 --rc geninfo_unexecuted_blocks=1 00:04:31.092 00:04:31.092 ' 00:04:31.092 16:12:31 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:31.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.092 --rc genhtml_branch_coverage=1 00:04:31.092 --rc genhtml_function_coverage=1 00:04:31.092 --rc genhtml_legend=1 00:04:31.092 --rc geninfo_all_blocks=1 00:04:31.092 --rc geninfo_unexecuted_blocks=1 00:04:31.092 00:04:31.092 ' 00:04:31.092 16:12:31 rpc_client -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:31.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.092 --rc genhtml_branch_coverage=1 00:04:31.092 --rc genhtml_function_coverage=1 00:04:31.092 --rc genhtml_legend=1 00:04:31.092 --rc geninfo_all_blocks=1 00:04:31.092 --rc geninfo_unexecuted_blocks=1 00:04:31.092 00:04:31.092 ' 00:04:31.092 16:12:31 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:31.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.092 --rc genhtml_branch_coverage=1 00:04:31.092 --rc genhtml_function_coverage=1 00:04:31.092 --rc genhtml_legend=1 00:04:31.092 --rc geninfo_all_blocks=1 00:04:31.092 --rc geninfo_unexecuted_blocks=1 00:04:31.092 00:04:31.092 ' 00:04:31.092 16:12:31 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:31.092 OK 00:04:31.092 16:12:31 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:31.092 00:04:31.092 real 0m0.193s 00:04:31.092 user 0m0.119s 00:04:31.092 sys 0m0.083s 00:04:31.092 16:12:31 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:31.092 16:12:31 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:31.092 ************************************ 00:04:31.092 END TEST rpc_client 00:04:31.092 ************************************ 00:04:31.092 16:12:31 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:31.092 16:12:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:31.092 16:12:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:31.092 16:12:31 -- common/autotest_common.sh@10 -- # set +x 00:04:31.092 ************************************ 00:04:31.092 START TEST json_config 00:04:31.092 ************************************ 00:04:31.092 16:12:31 json_config -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:31.092 16:12:31 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:31.092 16:12:31 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:04:31.092 16:12:31 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:31.092 16:12:31 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:31.092 16:12:31 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:31.092 16:12:31 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:31.092 16:12:31 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:31.092 16:12:31 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:31.092 16:12:31 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:31.092 16:12:31 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:31.092 16:12:31 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:31.092 16:12:31 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:31.092 16:12:31 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:31.092 16:12:31 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:31.092 16:12:31 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:31.092 16:12:31 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:31.092 16:12:31 json_config -- scripts/common.sh@345 -- # : 1 00:04:31.092 16:12:31 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:31.092 16:12:31 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:31.092 16:12:31 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:31.350 16:12:31 json_config -- scripts/common.sh@353 -- # local d=1 00:04:31.350 16:12:31 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:31.350 16:12:31 json_config -- scripts/common.sh@355 -- # echo 1 00:04:31.350 16:12:31 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:31.350 16:12:31 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:31.350 16:12:31 json_config -- scripts/common.sh@353 -- # local d=2 00:04:31.350 16:12:31 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:31.350 16:12:31 json_config -- scripts/common.sh@355 -- # echo 2 00:04:31.350 16:12:31 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:31.350 16:12:31 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:31.350 16:12:31 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:31.350 16:12:31 json_config -- scripts/common.sh@368 -- # return 0 00:04:31.350 16:12:31 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:31.350 16:12:31 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:31.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.350 --rc genhtml_branch_coverage=1 00:04:31.350 --rc genhtml_function_coverage=1 00:04:31.350 --rc genhtml_legend=1 00:04:31.350 --rc geninfo_all_blocks=1 00:04:31.350 --rc geninfo_unexecuted_blocks=1 00:04:31.350 00:04:31.350 ' 00:04:31.350 16:12:31 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:31.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.350 --rc genhtml_branch_coverage=1 00:04:31.350 --rc genhtml_function_coverage=1 00:04:31.350 --rc genhtml_legend=1 00:04:31.350 --rc geninfo_all_blocks=1 00:04:31.350 --rc geninfo_unexecuted_blocks=1 00:04:31.350 00:04:31.350 ' 00:04:31.350 16:12:31 json_config -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:31.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.350 --rc genhtml_branch_coverage=1 00:04:31.350 --rc genhtml_function_coverage=1 00:04:31.350 --rc genhtml_legend=1 00:04:31.350 --rc geninfo_all_blocks=1 00:04:31.350 --rc geninfo_unexecuted_blocks=1 00:04:31.350 00:04:31.350 ' 00:04:31.350 16:12:31 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:31.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.350 --rc genhtml_branch_coverage=1 00:04:31.350 --rc genhtml_function_coverage=1 00:04:31.350 --rc genhtml_legend=1 00:04:31.350 --rc geninfo_all_blocks=1 00:04:31.350 --rc geninfo_unexecuted_blocks=1 00:04:31.350 00:04:31.350 ' 00:04:31.350 16:12:31 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:31.350 16:12:31 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:31.350 16:12:31 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:31.350 16:12:31 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:31.350 16:12:31 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:31.350 16:12:31 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:31.350 16:12:31 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:31.350 16:12:31 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:31.350 16:12:31 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:31.350 16:12:31 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:31.350 16:12:31 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:31.350 16:12:31 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:31.350 16:12:31 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:31.350 16:12:31 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:31.350 16:12:31 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:31.350 16:12:31 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:31.350 16:12:31 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:31.350 16:12:31 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:31.350 16:12:31 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:31.350 16:12:31 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:31.350 16:12:31 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:31.350 16:12:31 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:31.350 16:12:31 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:31.350 16:12:31 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:31.350 16:12:31 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:31.350 16:12:31 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:31.350 16:12:31 json_config -- paths/export.sh@5 -- # export PATH 00:04:31.350 16:12:31 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:31.350 16:12:31 json_config -- nvmf/common.sh@51 -- # : 0 00:04:31.350 16:12:31 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:31.350 16:12:31 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:31.350 16:12:31 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:31.350 16:12:31 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:31.350 16:12:31 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:31.350 16:12:31 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:31.350 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:31.350 16:12:31 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:31.350 16:12:31 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:31.350 16:12:31 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:31.350 16:12:31 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:31.350 16:12:31 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:31.350 16:12:31 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:31.350 16:12:31 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:31.350 16:12:31 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:31.350 16:12:31 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:31.350 16:12:31 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:31.350 16:12:31 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:31.350 16:12:31 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:31.350 16:12:31 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:31.350 16:12:31 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:31.350 16:12:31 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:31.350 16:12:31 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:31.350 16:12:31 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:31.350 16:12:31 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:31.350 16:12:31 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:31.350 INFO: JSON configuration test init 00:04:31.350 16:12:31 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:31.350 16:12:31 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:31.350 16:12:31 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:31.350 16:12:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.350 16:12:31 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:31.350 16:12:31 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:31.350 16:12:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.350 16:12:31 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:31.350 16:12:31 json_config -- json_config/common.sh@9 -- # local app=target 00:04:31.350 16:12:31 json_config -- json_config/common.sh@10 -- # shift 00:04:31.350 16:12:31 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:31.350 16:12:31 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:31.350 16:12:31 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:31.350 16:12:31 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:31.350 16:12:31 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:31.350 16:12:31 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3007397 00:04:31.350 16:12:31 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:31.350 16:12:31 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:31.350 Waiting for target to run... 
00:04:31.350 16:12:31 json_config -- json_config/common.sh@25 -- # waitforlisten 3007397 /var/tmp/spdk_tgt.sock 00:04:31.350 16:12:31 json_config -- common/autotest_common.sh@831 -- # '[' -z 3007397 ']' 00:04:31.350 16:12:31 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:31.350 16:12:31 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:31.350 16:12:31 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:31.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:31.350 16:12:31 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:31.350 16:12:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.350 [2024-09-29 16:12:31.785561] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:04:31.351 [2024-09-29 16:12:31.785767] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3007397 ] 00:04:31.915 [2024-09-29 16:12:32.220306] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.916 [2024-09-29 16:12:32.444784] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.174 16:12:32 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:32.174 16:12:32 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:32.174 16:12:32 json_config -- json_config/common.sh@26 -- # echo '' 00:04:32.174 00:04:32.174 16:12:32 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:32.174 16:12:32 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:32.174 16:12:32 json_config -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:04:32.174 16:12:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.174 16:12:32 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:32.174 16:12:32 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:32.174 16:12:32 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:32.174 16:12:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.432 16:12:32 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:32.432 16:12:32 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:32.432 16:12:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:36.620 16:12:36 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:36.620 16:12:36 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:36.620 16:12:36 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:36.620 16:12:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.620 16:12:36 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:36.620 16:12:36 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:36.620 16:12:36 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:36.620 16:12:36 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:36.620 16:12:36 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:36.620 16:12:36 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:36.620 16:12:36 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:36.620 16:12:36 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:36.620 16:12:36 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:36.620 16:12:36 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:36.620 16:12:36 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:36.620 16:12:36 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:36.620 16:12:36 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:36.620 16:12:36 json_config -- json_config/json_config.sh@54 -- # sort 00:04:36.620 16:12:36 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:36.620 16:12:36 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:36.620 16:12:36 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:36.620 16:12:36 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:36.620 16:12:36 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:36.620 16:12:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.620 16:12:36 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:36.620 16:12:36 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:36.620 16:12:36 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:36.620 16:12:36 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:36.620 16:12:36 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:36.620 16:12:36 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:36.620 16:12:36 json_config -- 
json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:36.620 16:12:36 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:36.620 16:12:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.620 16:12:36 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:36.620 16:12:36 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:36.620 16:12:36 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:36.620 16:12:36 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:36.620 16:12:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:36.878 MallocForNvmf0 00:04:36.878 16:12:37 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:36.878 16:12:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:37.135 MallocForNvmf1 00:04:37.135 16:12:37 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:37.135 16:12:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:37.392 [2024-09-29 16:12:37.762021] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:37.392 16:12:37 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:37.392 16:12:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:37.650 16:12:38 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:37.650 16:12:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:37.907 16:12:38 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:37.907 16:12:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:38.164 16:12:38 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:38.164 16:12:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:38.421 [2024-09-29 16:12:38.841790] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:38.421 16:12:38 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:38.422 16:12:38 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:38.422 16:12:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.422 16:12:38 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:38.422 16:12:38 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:38.422 16:12:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.422 16:12:38 json_config -- json_config/json_config.sh@302 -- # 
[[ 0 -eq 1 ]] 00:04:38.422 16:12:38 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:38.422 16:12:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:38.678 MallocBdevForConfigChangeCheck 00:04:38.678 16:12:39 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:38.678 16:12:39 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:38.678 16:12:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.678 16:12:39 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:38.678 16:12:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:39.242 16:12:39 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:39.242 INFO: shutting down applications... 
00:04:39.242 16:12:39 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:39.242 16:12:39 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:39.242 16:12:39 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:39.242 16:12:39 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:41.138 Calling clear_iscsi_subsystem 00:04:41.138 Calling clear_nvmf_subsystem 00:04:41.138 Calling clear_nbd_subsystem 00:04:41.138 Calling clear_ublk_subsystem 00:04:41.138 Calling clear_vhost_blk_subsystem 00:04:41.138 Calling clear_vhost_scsi_subsystem 00:04:41.138 Calling clear_bdev_subsystem 00:04:41.138 16:12:41 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:41.138 16:12:41 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:41.138 16:12:41 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:41.138 16:12:41 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:41.138 16:12:41 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:41.138 16:12:41 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:41.138 16:12:41 json_config -- json_config/json_config.sh@352 -- # break 00:04:41.138 16:12:41 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:41.138 16:12:41 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:41.138 16:12:41 json_config -- 
json_config/common.sh@31 -- # local app=target 00:04:41.138 16:12:41 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:41.138 16:12:41 json_config -- json_config/common.sh@35 -- # [[ -n 3007397 ]] 00:04:41.138 16:12:41 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3007397 00:04:41.138 16:12:41 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:41.138 16:12:41 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:41.138 16:12:41 json_config -- json_config/common.sh@41 -- # kill -0 3007397 00:04:41.138 16:12:41 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:41.703 16:12:42 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:41.703 16:12:42 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:41.703 16:12:42 json_config -- json_config/common.sh@41 -- # kill -0 3007397 00:04:41.703 16:12:42 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:42.270 16:12:42 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:42.270 16:12:42 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:42.270 16:12:42 json_config -- json_config/common.sh@41 -- # kill -0 3007397 00:04:42.270 16:12:42 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:42.836 16:12:43 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:42.836 16:12:43 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:42.836 16:12:43 json_config -- json_config/common.sh@41 -- # kill -0 3007397 00:04:42.836 16:12:43 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:42.836 16:12:43 json_config -- json_config/common.sh@43 -- # break 00:04:42.836 16:12:43 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:42.836 16:12:43 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:42.836 SPDK target shutdown done 00:04:42.836 16:12:43 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 
00:04:42.836 INFO: relaunching applications... 00:04:42.836 16:12:43 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:42.836 16:12:43 json_config -- json_config/common.sh@9 -- # local app=target 00:04:42.836 16:12:43 json_config -- json_config/common.sh@10 -- # shift 00:04:42.836 16:12:43 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:42.836 16:12:43 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:42.836 16:12:43 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:42.836 16:12:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:42.836 16:12:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:42.836 16:12:43 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3008857 00:04:42.836 16:12:43 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:42.836 16:12:43 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:42.836 Waiting for target to run... 00:04:42.836 16:12:43 json_config -- json_config/common.sh@25 -- # waitforlisten 3008857 /var/tmp/spdk_tgt.sock 00:04:42.836 16:12:43 json_config -- common/autotest_common.sh@831 -- # '[' -z 3008857 ']' 00:04:42.836 16:12:43 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:42.836 16:12:43 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:42.836 16:12:43 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:42.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:42.836 16:12:43 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:42.836 16:12:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.836 [2024-09-29 16:12:43.267841] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:04:42.836 [2024-09-29 16:12:43.267988] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3008857 ] 00:04:43.402 [2024-09-29 16:12:43.871628] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.661 [2024-09-29 16:12:44.106462] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.847 [2024-09-29 16:12:47.880703] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:47.847 [2024-09-29 16:12:47.913301] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:48.105 16:12:48 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:48.105 16:12:48 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:48.105 16:12:48 json_config -- json_config/common.sh@26 -- # echo '' 00:04:48.105 00:04:48.105 16:12:48 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:48.105 16:12:48 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:48.105 INFO: Checking if target configuration is the same... 
00:04:48.105 16:12:48 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:48.105 16:12:48 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:48.105 16:12:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:48.105 + '[' 2 -ne 2 ']' 00:04:48.105 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:48.105 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:48.105 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:48.105 +++ basename /dev/fd/62 00:04:48.105 ++ mktemp /tmp/62.XXX 00:04:48.105 + tmp_file_1=/tmp/62.q07 00:04:48.105 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:48.105 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:48.105 + tmp_file_2=/tmp/spdk_tgt_config.json.5wC 00:04:48.105 + ret=0 00:04:48.105 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:48.364 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:48.622 + diff -u /tmp/62.q07 /tmp/spdk_tgt_config.json.5wC 00:04:48.622 + echo 'INFO: JSON config files are the same' 00:04:48.622 INFO: JSON config files are the same 00:04:48.622 + rm /tmp/62.q07 /tmp/spdk_tgt_config.json.5wC 00:04:48.622 + exit 0 00:04:48.622 16:12:48 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:48.622 16:12:48 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:48.622 INFO: changing configuration and checking if this can be detected... 
00:04:48.622 16:12:48 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:48.622 16:12:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:48.880 16:12:49 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:48.880 16:12:49 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:48.880 16:12:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:48.880 + '[' 2 -ne 2 ']' 00:04:48.880 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:48.880 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:48.880 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:48.880 +++ basename /dev/fd/62 00:04:48.880 ++ mktemp /tmp/62.XXX 00:04:48.880 + tmp_file_1=/tmp/62.3VQ 00:04:48.880 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:48.880 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:48.880 + tmp_file_2=/tmp/spdk_tgt_config.json.EjG 00:04:48.880 + ret=0 00:04:48.880 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:49.138 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:49.138 + diff -u /tmp/62.3VQ /tmp/spdk_tgt_config.json.EjG 00:04:49.138 + ret=1 00:04:49.138 + echo '=== Start of file: /tmp/62.3VQ ===' 00:04:49.138 + cat /tmp/62.3VQ 00:04:49.138 + echo '=== End of file: /tmp/62.3VQ ===' 00:04:49.138 + echo '' 00:04:49.138 + echo '=== Start of file: /tmp/spdk_tgt_config.json.EjG ===' 00:04:49.138 + cat /tmp/spdk_tgt_config.json.EjG 00:04:49.138 + echo '=== End of file: /tmp/spdk_tgt_config.json.EjG ===' 00:04:49.138 + echo '' 00:04:49.138 + rm /tmp/62.3VQ /tmp/spdk_tgt_config.json.EjG 00:04:49.138 + exit 1 00:04:49.139 16:12:49 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:49.139 INFO: configuration change detected. 
00:04:49.139 16:12:49 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:49.139 16:12:49 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:49.139 16:12:49 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:49.139 16:12:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.139 16:12:49 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:49.139 16:12:49 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:49.139 16:12:49 json_config -- json_config/json_config.sh@324 -- # [[ -n 3008857 ]] 00:04:49.139 16:12:49 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:49.139 16:12:49 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:49.139 16:12:49 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:49.139 16:12:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.139 16:12:49 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:49.139 16:12:49 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:49.139 16:12:49 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:49.139 16:12:49 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:49.139 16:12:49 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:49.139 16:12:49 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:49.139 16:12:49 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:49.139 16:12:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.397 16:12:49 json_config -- json_config/json_config.sh@330 -- # killprocess 3008857 00:04:49.397 16:12:49 json_config -- common/autotest_common.sh@950 -- # '[' -z 3008857 ']' 00:04:49.397 16:12:49 json_config -- common/autotest_common.sh@954 -- # kill -0 
3008857 00:04:49.397 16:12:49 json_config -- common/autotest_common.sh@955 -- # uname 00:04:49.397 16:12:49 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:49.397 16:12:49 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3008857 00:04:49.397 16:12:49 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:49.397 16:12:49 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:49.397 16:12:49 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3008857' 00:04:49.397 killing process with pid 3008857 00:04:49.397 16:12:49 json_config -- common/autotest_common.sh@969 -- # kill 3008857 00:04:49.397 16:12:49 json_config -- common/autotest_common.sh@974 -- # wait 3008857 00:04:51.925 16:12:52 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:51.925 16:12:52 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:51.925 16:12:52 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:51.925 16:12:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.925 16:12:52 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:51.925 16:12:52 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:51.925 INFO: Success 00:04:51.925 00:04:51.925 real 0m20.708s 00:04:51.925 user 0m22.152s 00:04:51.925 sys 0m3.120s 00:04:51.925 16:12:52 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:51.925 16:12:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.925 ************************************ 00:04:51.925 END TEST json_config 00:04:51.925 ************************************ 00:04:51.925 16:12:52 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:51.925 16:12:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:51.925 16:12:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:51.925 16:12:52 -- common/autotest_common.sh@10 -- # set +x 00:04:51.925 ************************************ 00:04:51.925 START TEST json_config_extra_key 00:04:51.925 ************************************ 00:04:51.925 16:12:52 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:51.925 16:12:52 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:51.925 16:12:52 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:04:51.925 16:12:52 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:51.925 16:12:52 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:51.925 16:12:52 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.925 16:12:52 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.925 16:12:52 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.925 16:12:52 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.925 16:12:52 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.925 16:12:52 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.925 16:12:52 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:51.925 16:12:52 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.925 16:12:52 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.925 16:12:52 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.925 16:12:52 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:04:51.925 16:12:52 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:51.925 16:12:52 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:51.925 16:12:52 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.925 16:12:52 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:51.925 16:12:52 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:51.925 16:12:52 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:51.925 16:12:52 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.925 16:12:52 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:51.925 16:12:52 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.925 16:12:52 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:51.925 16:12:52 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:51.925 16:12:52 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.925 16:12:52 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:51.925 16:12:52 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.925 16:12:52 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.925 16:12:52 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.925 16:12:52 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:51.925 16:12:52 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.925 16:12:52 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:51.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.925 --rc genhtml_branch_coverage=1 00:04:51.925 --rc genhtml_function_coverage=1 00:04:51.925 --rc genhtml_legend=1 00:04:51.925 --rc geninfo_all_blocks=1 
00:04:51.925 --rc geninfo_unexecuted_blocks=1 00:04:51.925 00:04:51.925 ' 00:04:51.925 16:12:52 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:51.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.925 --rc genhtml_branch_coverage=1 00:04:51.925 --rc genhtml_function_coverage=1 00:04:51.925 --rc genhtml_legend=1 00:04:51.925 --rc geninfo_all_blocks=1 00:04:51.925 --rc geninfo_unexecuted_blocks=1 00:04:51.925 00:04:51.925 ' 00:04:51.925 16:12:52 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:51.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.925 --rc genhtml_branch_coverage=1 00:04:51.925 --rc genhtml_function_coverage=1 00:04:51.925 --rc genhtml_legend=1 00:04:51.925 --rc geninfo_all_blocks=1 00:04:51.925 --rc geninfo_unexecuted_blocks=1 00:04:51.925 00:04:51.925 ' 00:04:51.925 16:12:52 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:51.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.925 --rc genhtml_branch_coverage=1 00:04:51.925 --rc genhtml_function_coverage=1 00:04:51.925 --rc genhtml_legend=1 00:04:51.925 --rc geninfo_all_blocks=1 00:04:51.925 --rc geninfo_unexecuted_blocks=1 00:04:51.925 00:04:51.925 ' 00:04:51.925 16:12:52 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:51.925 16:12:52 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:51.925 16:12:52 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:51.925 16:12:52 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:51.925 16:12:52 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:51.925 16:12:52 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:51.925 16:12:52 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:04:51.925 16:12:52 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:51.925 16:12:52 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:51.925 16:12:52 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:51.925 16:12:52 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:51.925 16:12:52 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:51.925 16:12:52 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:51.925 16:12:52 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:51.925 16:12:52 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:51.925 16:12:52 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:51.925 16:12:52 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:51.925 16:12:52 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:51.925 16:12:52 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:51.925 16:12:52 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:51.925 16:12:52 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:51.925 16:12:52 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:51.925 16:12:52 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:51.925 16:12:52 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.925 16:12:52 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.925 16:12:52 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.925 16:12:52 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:51.925 16:12:52 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.925 16:12:52 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:51.925 16:12:52 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:51.925 16:12:52 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:51.925 16:12:52 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:51.925 16:12:52 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:51.925 16:12:52 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:51.925 16:12:52 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:51.925 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:51.926 16:12:52 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:51.926 16:12:52 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:51.926 16:12:52 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:51.926 16:12:52 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:51.926 16:12:52 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:51.926 16:12:52 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:51.926 16:12:52 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:51.926 16:12:52 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:51.926 16:12:52 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:51.926 16:12:52 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:51.926 16:12:52 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:51.926 16:12:52 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:51.926 16:12:52 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:51.926 16:12:52 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:51.926 INFO: launching applications... 00:04:51.926 16:12:52 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:51.926 16:12:52 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:51.926 16:12:52 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:51.926 16:12:52 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:51.926 16:12:52 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:51.926 16:12:52 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:51.926 16:12:52 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:51.926 16:12:52 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:51.926 16:12:52 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3010168 00:04:51.926 16:12:52 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:51.926 16:12:52 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:51.926 Waiting for target to run... 
00:04:51.926 16:12:52 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3010168 /var/tmp/spdk_tgt.sock 00:04:51.926 16:12:52 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 3010168 ']' 00:04:51.926 16:12:52 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:51.926 16:12:52 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:51.926 16:12:52 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:51.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:51.926 16:12:52 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:51.926 16:12:52 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:52.184 [2024-09-29 16:12:52.522872] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:04:52.184 [2024-09-29 16:12:52.523013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3010168 ] 00:04:52.751 [2024-09-29 16:12:53.104927] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.009 [2024-09-29 16:12:53.342796] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.574 16:12:54 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:53.575 16:12:54 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:53.575 16:12:54 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:53.575 00:04:53.575 16:12:54 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:04:53.575 INFO: shutting down applications... 00:04:53.575 16:12:54 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:53.575 16:12:54 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:53.575 16:12:54 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:53.575 16:12:54 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3010168 ]] 00:04:53.575 16:12:54 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3010168 00:04:53.575 16:12:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:53.575 16:12:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:53.575 16:12:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3010168 00:04:53.575 16:12:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:54.141 16:12:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:54.141 16:12:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:54.141 16:12:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3010168 00:04:54.141 16:12:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:54.706 16:12:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:54.706 16:12:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:54.706 16:12:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3010168 00:04:54.706 16:12:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:55.272 16:12:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:55.272 16:12:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:55.272 16:12:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3010168 00:04:55.272 16:12:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:55.837 
16:12:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:55.837 16:12:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:55.837 16:12:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3010168 00:04:55.837 16:12:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:56.095 16:12:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:56.095 16:12:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:56.095 16:12:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3010168 00:04:56.095 16:12:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:56.660 16:12:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:56.660 16:12:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:56.660 16:12:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3010168 00:04:56.660 16:12:57 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:56.660 16:12:57 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:56.660 16:12:57 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:56.660 16:12:57 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:56.660 SPDK target shutdown done 00:04:56.660 16:12:57 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:56.660 Success 00:04:56.660 00:04:56.660 real 0m4.844s 00:04:56.660 user 0m4.490s 00:04:56.660 sys 0m0.815s 00:04:56.660 16:12:57 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.660 16:12:57 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:56.660 ************************************ 00:04:56.660 END TEST json_config_extra_key 00:04:56.660 ************************************ 00:04:56.660 16:12:57 -- spdk/autotest.sh@161 -- # run_test 
alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:56.660 16:12:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:56.660 16:12:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:56.660 16:12:57 -- common/autotest_common.sh@10 -- # set +x 00:04:56.660 ************************************ 00:04:56.660 START TEST alias_rpc 00:04:56.660 ************************************ 00:04:56.660 16:12:57 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:56.919 * Looking for test storage... 00:04:56.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:56.919 16:12:57 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:56.919 16:12:57 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:56.919 16:12:57 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:56.919 16:12:57 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:56.919 16:12:57 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:56.919 16:12:57 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:56.919 16:12:57 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:56.919 16:12:57 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.919 16:12:57 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:56.919 16:12:57 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:56.919 16:12:57 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:56.919 16:12:57 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:56.919 16:12:57 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:56.919 16:12:57 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:56.919 16:12:57 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:56.919 16:12:57 alias_rpc -- 
scripts/common.sh@344 -- # case "$op" in 00:04:56.919 16:12:57 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:56.919 16:12:57 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:56.919 16:12:57 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:56.919 16:12:57 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:56.919 16:12:57 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:56.919 16:12:57 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.919 16:12:57 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:56.919 16:12:57 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:56.919 16:12:57 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:56.919 16:12:57 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:56.919 16:12:57 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.919 16:12:57 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:56.919 16:12:57 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:56.919 16:12:57 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:56.919 16:12:57 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:56.919 16:12:57 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:56.919 16:12:57 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.919 16:12:57 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:56.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.919 --rc genhtml_branch_coverage=1 00:04:56.919 --rc genhtml_function_coverage=1 00:04:56.919 --rc genhtml_legend=1 00:04:56.919 --rc geninfo_all_blocks=1 00:04:56.919 --rc geninfo_unexecuted_blocks=1 00:04:56.919 00:04:56.919 ' 00:04:56.919 16:12:57 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:56.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.919 --rc 
genhtml_branch_coverage=1 00:04:56.919 --rc genhtml_function_coverage=1 00:04:56.919 --rc genhtml_legend=1 00:04:56.919 --rc geninfo_all_blocks=1 00:04:56.919 --rc geninfo_unexecuted_blocks=1 00:04:56.919 00:04:56.919 ' 00:04:56.919 16:12:57 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:56.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.919 --rc genhtml_branch_coverage=1 00:04:56.919 --rc genhtml_function_coverage=1 00:04:56.919 --rc genhtml_legend=1 00:04:56.919 --rc geninfo_all_blocks=1 00:04:56.919 --rc geninfo_unexecuted_blocks=1 00:04:56.919 00:04:56.919 ' 00:04:56.920 16:12:57 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:56.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.920 --rc genhtml_branch_coverage=1 00:04:56.920 --rc genhtml_function_coverage=1 00:04:56.920 --rc genhtml_legend=1 00:04:56.920 --rc geninfo_all_blocks=1 00:04:56.920 --rc geninfo_unexecuted_blocks=1 00:04:56.920 00:04:56.920 ' 00:04:56.920 16:12:57 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:56.920 16:12:57 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3010764 00:04:56.920 16:12:57 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:56.920 16:12:57 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3010764 00:04:56.920 16:12:57 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 3010764 ']' 00:04:56.920 16:12:57 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.920 16:12:57 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:56.920 16:12:57 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:56.920 16:12:57 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:56.920 16:12:57 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.920 [2024-09-29 16:12:57.416776] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:04:56.920 [2024-09-29 16:12:57.416929] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3010764 ] 00:04:57.178 [2024-09-29 16:12:57.550950] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.436 [2024-09-29 16:12:57.807210] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.370 16:12:58 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:58.370 16:12:58 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:58.370 16:12:58 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:58.634 16:12:59 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3010764 00:04:58.634 16:12:59 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 3010764 ']' 00:04:58.634 16:12:59 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 3010764 00:04:58.634 16:12:59 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:04:58.634 16:12:59 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:58.634 16:12:59 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3010764 00:04:58.634 16:12:59 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:58.634 16:12:59 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:58.634 16:12:59 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3010764' 00:04:58.634 killing process with pid 3010764 00:04:58.634 16:12:59 
alias_rpc -- common/autotest_common.sh@969 -- # kill 3010764 00:04:58.634 16:12:59 alias_rpc -- common/autotest_common.sh@974 -- # wait 3010764 00:05:01.239 00:05:01.239 real 0m4.534s 00:05:01.239 user 0m4.653s 00:05:01.239 sys 0m0.669s 00:05:01.240 16:13:01 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.240 16:13:01 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.240 ************************************ 00:05:01.240 END TEST alias_rpc 00:05:01.240 ************************************ 00:05:01.240 16:13:01 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:01.240 16:13:01 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:01.240 16:13:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:01.240 16:13:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.240 16:13:01 -- common/autotest_common.sh@10 -- # set +x 00:05:01.240 ************************************ 00:05:01.240 START TEST spdkcli_tcp 00:05:01.240 ************************************ 00:05:01.240 16:13:01 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:01.498 * Looking for test storage... 
00:05:01.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:01.498 16:13:01 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:01.498 16:13:01 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:01.498 16:13:01 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:01.498 16:13:01 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:01.498 16:13:01 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.498 16:13:01 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.498 16:13:01 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.498 16:13:01 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.498 16:13:01 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.498 16:13:01 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.498 16:13:01 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.498 16:13:01 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.498 16:13:01 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.498 16:13:01 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.498 16:13:01 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.498 16:13:01 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:01.498 16:13:01 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:01.498 16:13:01 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.498 16:13:01 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:01.498 16:13:01 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:01.498 16:13:01 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:01.498 16:13:01 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.498 16:13:01 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:01.498 16:13:01 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.498 16:13:01 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:01.498 16:13:01 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:01.498 16:13:01 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.498 16:13:01 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:01.498 16:13:01 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.498 16:13:01 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.498 16:13:01 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.498 16:13:01 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:01.498 16:13:01 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.498 16:13:01 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:01.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.498 --rc genhtml_branch_coverage=1 00:05:01.498 --rc genhtml_function_coverage=1 00:05:01.498 --rc genhtml_legend=1 00:05:01.498 --rc geninfo_all_blocks=1 00:05:01.498 --rc geninfo_unexecuted_blocks=1 00:05:01.498 00:05:01.498 ' 00:05:01.498 16:13:01 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:01.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.498 --rc genhtml_branch_coverage=1 00:05:01.498 --rc genhtml_function_coverage=1 00:05:01.498 --rc genhtml_legend=1 00:05:01.498 --rc geninfo_all_blocks=1 00:05:01.498 --rc geninfo_unexecuted_blocks=1 00:05:01.498 00:05:01.498 ' 00:05:01.498 16:13:01 spdkcli_tcp -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:01.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.498 --rc genhtml_branch_coverage=1 00:05:01.498 --rc genhtml_function_coverage=1 00:05:01.498 --rc genhtml_legend=1 00:05:01.498 --rc geninfo_all_blocks=1 00:05:01.498 --rc geninfo_unexecuted_blocks=1 00:05:01.498 00:05:01.498 ' 00:05:01.498 16:13:01 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:01.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.498 --rc genhtml_branch_coverage=1 00:05:01.498 --rc genhtml_function_coverage=1 00:05:01.498 --rc genhtml_legend=1 00:05:01.498 --rc geninfo_all_blocks=1 00:05:01.498 --rc geninfo_unexecuted_blocks=1 00:05:01.498 00:05:01.498 ' 00:05:01.498 16:13:01 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:01.498 16:13:01 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:01.498 16:13:01 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:01.498 16:13:01 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:01.498 16:13:01 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:01.498 16:13:01 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:01.498 16:13:01 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:01.498 16:13:01 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:01.498 16:13:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:01.498 16:13:01 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3011367 00:05:01.498 16:13:01 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:01.498 16:13:01 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 3011367 00:05:01.498 16:13:01 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 3011367 ']' 00:05:01.498 16:13:01 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.498 16:13:01 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:01.498 16:13:01 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.498 16:13:01 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:01.498 16:13:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:01.498 [2024-09-29 16:13:02.007796] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:01.498 [2024-09-29 16:13:02.007964] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3011367 ] 00:05:01.756 [2024-09-29 16:13:02.135506] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:02.013 [2024-09-29 16:13:02.387013] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.013 [2024-09-29 16:13:02.387017] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.946 16:13:03 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:02.946 16:13:03 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:02.946 16:13:03 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3011504 00:05:02.946 16:13:03 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:02.946 16:13:03 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat 
TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:03.205 [ 00:05:03.205 "bdev_malloc_delete", 00:05:03.205 "bdev_malloc_create", 00:05:03.205 "bdev_null_resize", 00:05:03.205 "bdev_null_delete", 00:05:03.205 "bdev_null_create", 00:05:03.205 "bdev_nvme_cuse_unregister", 00:05:03.205 "bdev_nvme_cuse_register", 00:05:03.205 "bdev_opal_new_user", 00:05:03.205 "bdev_opal_set_lock_state", 00:05:03.205 "bdev_opal_delete", 00:05:03.205 "bdev_opal_get_info", 00:05:03.205 "bdev_opal_create", 00:05:03.205 "bdev_nvme_opal_revert", 00:05:03.205 "bdev_nvme_opal_init", 00:05:03.205 "bdev_nvme_send_cmd", 00:05:03.205 "bdev_nvme_set_keys", 00:05:03.205 "bdev_nvme_get_path_iostat", 00:05:03.205 "bdev_nvme_get_mdns_discovery_info", 00:05:03.205 "bdev_nvme_stop_mdns_discovery", 00:05:03.205 "bdev_nvme_start_mdns_discovery", 00:05:03.205 "bdev_nvme_set_multipath_policy", 00:05:03.205 "bdev_nvme_set_preferred_path", 00:05:03.205 "bdev_nvme_get_io_paths", 00:05:03.205 "bdev_nvme_remove_error_injection", 00:05:03.205 "bdev_nvme_add_error_injection", 00:05:03.205 "bdev_nvme_get_discovery_info", 00:05:03.205 "bdev_nvme_stop_discovery", 00:05:03.205 "bdev_nvme_start_discovery", 00:05:03.205 "bdev_nvme_get_controller_health_info", 00:05:03.205 "bdev_nvme_disable_controller", 00:05:03.205 "bdev_nvme_enable_controller", 00:05:03.205 "bdev_nvme_reset_controller", 00:05:03.205 "bdev_nvme_get_transport_statistics", 00:05:03.205 "bdev_nvme_apply_firmware", 00:05:03.205 "bdev_nvme_detach_controller", 00:05:03.205 "bdev_nvme_get_controllers", 00:05:03.205 "bdev_nvme_attach_controller", 00:05:03.205 "bdev_nvme_set_hotplug", 00:05:03.205 "bdev_nvme_set_options", 00:05:03.205 "bdev_passthru_delete", 00:05:03.205 "bdev_passthru_create", 00:05:03.205 "bdev_lvol_set_parent_bdev", 00:05:03.205 "bdev_lvol_set_parent", 00:05:03.205 "bdev_lvol_check_shallow_copy", 00:05:03.205 "bdev_lvol_start_shallow_copy", 00:05:03.205 "bdev_lvol_grow_lvstore", 00:05:03.205 "bdev_lvol_get_lvols", 00:05:03.205 
"bdev_lvol_get_lvstores", 00:05:03.205 "bdev_lvol_delete", 00:05:03.205 "bdev_lvol_set_read_only", 00:05:03.205 "bdev_lvol_resize", 00:05:03.205 "bdev_lvol_decouple_parent", 00:05:03.205 "bdev_lvol_inflate", 00:05:03.205 "bdev_lvol_rename", 00:05:03.205 "bdev_lvol_clone_bdev", 00:05:03.205 "bdev_lvol_clone", 00:05:03.205 "bdev_lvol_snapshot", 00:05:03.205 "bdev_lvol_create", 00:05:03.205 "bdev_lvol_delete_lvstore", 00:05:03.205 "bdev_lvol_rename_lvstore", 00:05:03.205 "bdev_lvol_create_lvstore", 00:05:03.205 "bdev_raid_set_options", 00:05:03.205 "bdev_raid_remove_base_bdev", 00:05:03.205 "bdev_raid_add_base_bdev", 00:05:03.205 "bdev_raid_delete", 00:05:03.205 "bdev_raid_create", 00:05:03.205 "bdev_raid_get_bdevs", 00:05:03.205 "bdev_error_inject_error", 00:05:03.205 "bdev_error_delete", 00:05:03.205 "bdev_error_create", 00:05:03.205 "bdev_split_delete", 00:05:03.205 "bdev_split_create", 00:05:03.205 "bdev_delay_delete", 00:05:03.205 "bdev_delay_create", 00:05:03.205 "bdev_delay_update_latency", 00:05:03.205 "bdev_zone_block_delete", 00:05:03.205 "bdev_zone_block_create", 00:05:03.205 "blobfs_create", 00:05:03.205 "blobfs_detect", 00:05:03.205 "blobfs_set_cache_size", 00:05:03.205 "bdev_aio_delete", 00:05:03.205 "bdev_aio_rescan", 00:05:03.205 "bdev_aio_create", 00:05:03.205 "bdev_ftl_set_property", 00:05:03.205 "bdev_ftl_get_properties", 00:05:03.205 "bdev_ftl_get_stats", 00:05:03.205 "bdev_ftl_unmap", 00:05:03.205 "bdev_ftl_unload", 00:05:03.205 "bdev_ftl_delete", 00:05:03.205 "bdev_ftl_load", 00:05:03.205 "bdev_ftl_create", 00:05:03.205 "bdev_virtio_attach_controller", 00:05:03.205 "bdev_virtio_scsi_get_devices", 00:05:03.205 "bdev_virtio_detach_controller", 00:05:03.205 "bdev_virtio_blk_set_hotplug", 00:05:03.205 "bdev_iscsi_delete", 00:05:03.205 "bdev_iscsi_create", 00:05:03.205 "bdev_iscsi_set_options", 00:05:03.205 "accel_error_inject_error", 00:05:03.205 "ioat_scan_accel_module", 00:05:03.205 "dsa_scan_accel_module", 00:05:03.205 "iaa_scan_accel_module", 
00:05:03.205 "keyring_file_remove_key", 00:05:03.205 "keyring_file_add_key", 00:05:03.205 "keyring_linux_set_options", 00:05:03.205 "fsdev_aio_delete", 00:05:03.205 "fsdev_aio_create", 00:05:03.205 "iscsi_get_histogram", 00:05:03.205 "iscsi_enable_histogram", 00:05:03.205 "iscsi_set_options", 00:05:03.205 "iscsi_get_auth_groups", 00:05:03.205 "iscsi_auth_group_remove_secret", 00:05:03.205 "iscsi_auth_group_add_secret", 00:05:03.205 "iscsi_delete_auth_group", 00:05:03.205 "iscsi_create_auth_group", 00:05:03.205 "iscsi_set_discovery_auth", 00:05:03.205 "iscsi_get_options", 00:05:03.205 "iscsi_target_node_request_logout", 00:05:03.205 "iscsi_target_node_set_redirect", 00:05:03.205 "iscsi_target_node_set_auth", 00:05:03.205 "iscsi_target_node_add_lun", 00:05:03.205 "iscsi_get_stats", 00:05:03.205 "iscsi_get_connections", 00:05:03.205 "iscsi_portal_group_set_auth", 00:05:03.205 "iscsi_start_portal_group", 00:05:03.205 "iscsi_delete_portal_group", 00:05:03.205 "iscsi_create_portal_group", 00:05:03.205 "iscsi_get_portal_groups", 00:05:03.205 "iscsi_delete_target_node", 00:05:03.205 "iscsi_target_node_remove_pg_ig_maps", 00:05:03.205 "iscsi_target_node_add_pg_ig_maps", 00:05:03.205 "iscsi_create_target_node", 00:05:03.205 "iscsi_get_target_nodes", 00:05:03.205 "iscsi_delete_initiator_group", 00:05:03.205 "iscsi_initiator_group_remove_initiators", 00:05:03.205 "iscsi_initiator_group_add_initiators", 00:05:03.205 "iscsi_create_initiator_group", 00:05:03.205 "iscsi_get_initiator_groups", 00:05:03.205 "nvmf_set_crdt", 00:05:03.205 "nvmf_set_config", 00:05:03.205 "nvmf_set_max_subsystems", 00:05:03.206 "nvmf_stop_mdns_prr", 00:05:03.206 "nvmf_publish_mdns_prr", 00:05:03.206 "nvmf_subsystem_get_listeners", 00:05:03.206 "nvmf_subsystem_get_qpairs", 00:05:03.206 "nvmf_subsystem_get_controllers", 00:05:03.206 "nvmf_get_stats", 00:05:03.206 "nvmf_get_transports", 00:05:03.206 "nvmf_create_transport", 00:05:03.206 "nvmf_get_targets", 00:05:03.206 "nvmf_delete_target", 00:05:03.206 
"nvmf_create_target", 00:05:03.206 "nvmf_subsystem_allow_any_host", 00:05:03.206 "nvmf_subsystem_set_keys", 00:05:03.206 "nvmf_subsystem_remove_host", 00:05:03.206 "nvmf_subsystem_add_host", 00:05:03.206 "nvmf_ns_remove_host", 00:05:03.206 "nvmf_ns_add_host", 00:05:03.206 "nvmf_subsystem_remove_ns", 00:05:03.206 "nvmf_subsystem_set_ns_ana_group", 00:05:03.206 "nvmf_subsystem_add_ns", 00:05:03.206 "nvmf_subsystem_listener_set_ana_state", 00:05:03.206 "nvmf_discovery_get_referrals", 00:05:03.206 "nvmf_discovery_remove_referral", 00:05:03.206 "nvmf_discovery_add_referral", 00:05:03.206 "nvmf_subsystem_remove_listener", 00:05:03.206 "nvmf_subsystem_add_listener", 00:05:03.206 "nvmf_delete_subsystem", 00:05:03.206 "nvmf_create_subsystem", 00:05:03.206 "nvmf_get_subsystems", 00:05:03.206 "env_dpdk_get_mem_stats", 00:05:03.206 "nbd_get_disks", 00:05:03.206 "nbd_stop_disk", 00:05:03.206 "nbd_start_disk", 00:05:03.206 "ublk_recover_disk", 00:05:03.206 "ublk_get_disks", 00:05:03.206 "ublk_stop_disk", 00:05:03.206 "ublk_start_disk", 00:05:03.206 "ublk_destroy_target", 00:05:03.206 "ublk_create_target", 00:05:03.206 "virtio_blk_create_transport", 00:05:03.206 "virtio_blk_get_transports", 00:05:03.206 "vhost_controller_set_coalescing", 00:05:03.206 "vhost_get_controllers", 00:05:03.206 "vhost_delete_controller", 00:05:03.206 "vhost_create_blk_controller", 00:05:03.206 "vhost_scsi_controller_remove_target", 00:05:03.206 "vhost_scsi_controller_add_target", 00:05:03.206 "vhost_start_scsi_controller", 00:05:03.206 "vhost_create_scsi_controller", 00:05:03.206 "thread_set_cpumask", 00:05:03.206 "scheduler_set_options", 00:05:03.206 "framework_get_governor", 00:05:03.206 "framework_get_scheduler", 00:05:03.206 "framework_set_scheduler", 00:05:03.206 "framework_get_reactors", 00:05:03.206 "thread_get_io_channels", 00:05:03.206 "thread_get_pollers", 00:05:03.206 "thread_get_stats", 00:05:03.206 "framework_monitor_context_switch", 00:05:03.206 "spdk_kill_instance", 00:05:03.206 
"log_enable_timestamps", 00:05:03.206 "log_get_flags", 00:05:03.206 "log_clear_flag", 00:05:03.206 "log_set_flag", 00:05:03.206 "log_get_level", 00:05:03.206 "log_set_level", 00:05:03.206 "log_get_print_level", 00:05:03.206 "log_set_print_level", 00:05:03.206 "framework_enable_cpumask_locks", 00:05:03.206 "framework_disable_cpumask_locks", 00:05:03.206 "framework_wait_init", 00:05:03.206 "framework_start_init", 00:05:03.206 "scsi_get_devices", 00:05:03.206 "bdev_get_histogram", 00:05:03.206 "bdev_enable_histogram", 00:05:03.206 "bdev_set_qos_limit", 00:05:03.206 "bdev_set_qd_sampling_period", 00:05:03.206 "bdev_get_bdevs", 00:05:03.206 "bdev_reset_iostat", 00:05:03.206 "bdev_get_iostat", 00:05:03.206 "bdev_examine", 00:05:03.206 "bdev_wait_for_examine", 00:05:03.206 "bdev_set_options", 00:05:03.206 "accel_get_stats", 00:05:03.206 "accel_set_options", 00:05:03.206 "accel_set_driver", 00:05:03.206 "accel_crypto_key_destroy", 00:05:03.206 "accel_crypto_keys_get", 00:05:03.206 "accel_crypto_key_create", 00:05:03.206 "accel_assign_opc", 00:05:03.206 "accel_get_module_info", 00:05:03.206 "accel_get_opc_assignments", 00:05:03.206 "vmd_rescan", 00:05:03.206 "vmd_remove_device", 00:05:03.206 "vmd_enable", 00:05:03.206 "sock_get_default_impl", 00:05:03.206 "sock_set_default_impl", 00:05:03.206 "sock_impl_set_options", 00:05:03.206 "sock_impl_get_options", 00:05:03.206 "iobuf_get_stats", 00:05:03.206 "iobuf_set_options", 00:05:03.206 "keyring_get_keys", 00:05:03.206 "framework_get_pci_devices", 00:05:03.206 "framework_get_config", 00:05:03.206 "framework_get_subsystems", 00:05:03.206 "fsdev_set_opts", 00:05:03.206 "fsdev_get_opts", 00:05:03.206 "trace_get_info", 00:05:03.206 "trace_get_tpoint_group_mask", 00:05:03.206 "trace_disable_tpoint_group", 00:05:03.206 "trace_enable_tpoint_group", 00:05:03.206 "trace_clear_tpoint_mask", 00:05:03.206 "trace_set_tpoint_mask", 00:05:03.206 "notify_get_notifications", 00:05:03.206 "notify_get_types", 00:05:03.206 "spdk_get_version", 
00:05:03.206 "rpc_get_methods" 00:05:03.206 ] 00:05:03.206 16:13:03 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:03.206 16:13:03 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:03.206 16:13:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:03.206 16:13:03 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:03.206 16:13:03 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3011367 00:05:03.206 16:13:03 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 3011367 ']' 00:05:03.206 16:13:03 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 3011367 00:05:03.206 16:13:03 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:03.206 16:13:03 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:03.206 16:13:03 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3011367 00:05:03.206 16:13:03 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:03.206 16:13:03 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:03.206 16:13:03 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3011367' 00:05:03.206 killing process with pid 3011367 00:05:03.206 16:13:03 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 3011367 00:05:03.206 16:13:03 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 3011367 00:05:05.732 00:05:05.732 real 0m4.455s 00:05:05.732 user 0m7.930s 00:05:05.732 sys 0m0.677s 00:05:05.732 16:13:06 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:05.732 16:13:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:05.732 ************************************ 00:05:05.732 END TEST spdkcli_tcp 00:05:05.732 ************************************ 00:05:05.732 16:13:06 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:05.732 16:13:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:05.732 16:13:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.732 16:13:06 -- common/autotest_common.sh@10 -- # set +x 00:05:05.732 ************************************ 00:05:05.732 START TEST dpdk_mem_utility 00:05:05.732 ************************************ 00:05:05.732 16:13:06 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:05.990 * Looking for test storage... 00:05:05.990 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:05.990 16:13:06 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:05.990 16:13:06 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:05:05.990 16:13:06 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:05.990 16:13:06 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:05.990 16:13:06 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.990 16:13:06 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.990 16:13:06 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.990 16:13:06 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.990 16:13:06 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.990 16:13:06 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.990 16:13:06 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.990 16:13:06 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.990 16:13:06 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.990 16:13:06 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.990 
16:13:06 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.990 16:13:06 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:05.990 16:13:06 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:05.990 16:13:06 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.990 16:13:06 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:05.990 16:13:06 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:05.990 16:13:06 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:05.990 16:13:06 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.990 16:13:06 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:05.990 16:13:06 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.990 16:13:06 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:05.990 16:13:06 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:05.990 16:13:06 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.990 16:13:06 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:05.990 16:13:06 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.990 16:13:06 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.990 16:13:06 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.990 16:13:06 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:05.990 16:13:06 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.990 16:13:06 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:05.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.990 --rc genhtml_branch_coverage=1 00:05:05.990 --rc genhtml_function_coverage=1 00:05:05.990 --rc genhtml_legend=1 00:05:05.990 --rc geninfo_all_blocks=1 00:05:05.990 --rc 
geninfo_unexecuted_blocks=1 00:05:05.990 00:05:05.990 ' 00:05:05.990 16:13:06 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:05.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.990 --rc genhtml_branch_coverage=1 00:05:05.990 --rc genhtml_function_coverage=1 00:05:05.990 --rc genhtml_legend=1 00:05:05.990 --rc geninfo_all_blocks=1 00:05:05.990 --rc geninfo_unexecuted_blocks=1 00:05:05.990 00:05:05.990 ' 00:05:05.990 16:13:06 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:05.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.990 --rc genhtml_branch_coverage=1 00:05:05.990 --rc genhtml_function_coverage=1 00:05:05.990 --rc genhtml_legend=1 00:05:05.990 --rc geninfo_all_blocks=1 00:05:05.990 --rc geninfo_unexecuted_blocks=1 00:05:05.990 00:05:05.990 ' 00:05:05.990 16:13:06 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:05.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.990 --rc genhtml_branch_coverage=1 00:05:05.990 --rc genhtml_function_coverage=1 00:05:05.990 --rc genhtml_legend=1 00:05:05.990 --rc geninfo_all_blocks=1 00:05:05.990 --rc geninfo_unexecuted_blocks=1 00:05:05.990 00:05:05.990 ' 00:05:05.990 16:13:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:05.990 16:13:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3011973 00:05:05.990 16:13:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:05.990 16:13:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3011973 00:05:05.990 16:13:06 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 3011973 ']' 00:05:05.990 16:13:06 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:05.990 16:13:06 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:05.990 16:13:06 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.990 16:13:06 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:05.990 16:13:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:05.991 [2024-09-29 16:13:06.508645] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:05.991 [2024-09-29 16:13:06.508815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3011973 ] 00:05:06.248 [2024-09-29 16:13:06.641456] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.505 [2024-09-29 16:13:06.896904] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.441 16:13:07 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:07.441 16:13:07 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:07.441 16:13:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:07.441 16:13:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:07.441 16:13:07 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.441 16:13:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:07.441 { 00:05:07.441 "filename": "/tmp/spdk_mem_dump.txt" 00:05:07.441 } 00:05:07.441 16:13:07 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.441 
16:13:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:07.441 DPDK memory size 866.000000 MiB in 1 heap(s) 00:05:07.441 1 heaps totaling size 866.000000 MiB 00:05:07.441 size: 866.000000 MiB heap id: 0 00:05:07.441 end heaps---------- 00:05:07.441 9 mempools totaling size 642.649841 MiB 00:05:07.441 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:07.441 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:07.441 size: 92.545471 MiB name: bdev_io_3011973 00:05:07.441 size: 51.011292 MiB name: evtpool_3011973 00:05:07.441 size: 50.003479 MiB name: msgpool_3011973 00:05:07.441 size: 36.509338 MiB name: fsdev_io_3011973 00:05:07.441 size: 21.763794 MiB name: PDU_Pool 00:05:07.441 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:07.441 size: 0.026123 MiB name: Session_Pool 00:05:07.441 end mempools------- 00:05:07.441 6 memzones totaling size 4.142822 MiB 00:05:07.441 size: 1.000366 MiB name: RG_ring_0_3011973 00:05:07.441 size: 1.000366 MiB name: RG_ring_1_3011973 00:05:07.441 size: 1.000366 MiB name: RG_ring_4_3011973 00:05:07.441 size: 1.000366 MiB name: RG_ring_5_3011973 00:05:07.441 size: 0.125366 MiB name: RG_ring_2_3011973 00:05:07.441 size: 0.015991 MiB name: RG_ring_3_3011973 00:05:07.441 end memzones------- 00:05:07.441 16:13:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:07.441 heap id: 0 total size: 866.000000 MiB number of busy elements: 44 number of free elements: 20 00:05:07.441 list of free elements. 
size: 19.979797 MiB 00:05:07.442 element at address: 0x200000400000 with size: 1.999451 MiB 00:05:07.442 element at address: 0x200000800000 with size: 1.996887 MiB 00:05:07.442 element at address: 0x200009600000 with size: 1.995972 MiB 00:05:07.442 element at address: 0x20000d800000 with size: 1.995972 MiB 00:05:07.442 element at address: 0x200007000000 with size: 1.991028 MiB 00:05:07.442 element at address: 0x20001bf00040 with size: 0.999939 MiB 00:05:07.442 element at address: 0x20001c300040 with size: 0.999939 MiB 00:05:07.442 element at address: 0x20001c400000 with size: 0.999329 MiB 00:05:07.442 element at address: 0x200035000000 with size: 0.994324 MiB 00:05:07.442 element at address: 0x20001bc00000 with size: 0.959900 MiB 00:05:07.442 element at address: 0x20001c700040 with size: 0.937256 MiB 00:05:07.442 element at address: 0x200000200000 with size: 0.840942 MiB 00:05:07.442 element at address: 0x20001de00000 with size: 0.583191 MiB 00:05:07.442 element at address: 0x200003e00000 with size: 0.495544 MiB 00:05:07.442 element at address: 0x20001c000000 with size: 0.491150 MiB 00:05:07.442 element at address: 0x20001c800000 with size: 0.485657 MiB 00:05:07.442 element at address: 0x200015e00000 with size: 0.446167 MiB 00:05:07.442 element at address: 0x20002b200000 with size: 0.411072 MiB 00:05:07.442 element at address: 0x200003a00000 with size: 0.355042 MiB 00:05:07.442 element at address: 0x20000d7ff040 with size: 0.001038 MiB 00:05:07.442 list of standard malloc elements. 
size: 199.221497 MiB
00:05:07.442 element at address: 0x20000d9fef80 with size: 132.000183 MiB
00:05:07.442 element at address: 0x2000097fef80 with size: 64.000183 MiB
00:05:07.442 element at address: 0x20001bdfff80 with size: 1.000183 MiB
00:05:07.442 element at address: 0x20001c1fff80 with size: 1.000183 MiB
00:05:07.442 element at address: 0x20001c5fff80 with size: 1.000183 MiB
00:05:07.442 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:05:07.442 element at address: 0x20001c7eff40 with size: 0.062683 MiB
00:05:07.442 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:05:07.442 element at address: 0x200015dff040 with size: 0.000427 MiB
00:05:07.442 element at address: 0x200015dffa00 with size: 0.000366 MiB
00:05:07.442 element at address: 0x2000002d7480 with size: 0.000244 MiB
00:05:07.442 element at address: 0x2000002d7580 with size: 0.000244 MiB
00:05:07.442 element at address: 0x2000002d7680 with size: 0.000244 MiB
00:05:07.442 element at address: 0x2000002d7900 with size: 0.000244 MiB
00:05:07.442 element at address: 0x2000002d7a00 with size: 0.000244 MiB
00:05:07.442 element at address: 0x2000002d7b00 with size: 0.000244 MiB
00:05:07.442 element at address: 0x2000003d9d80 with size: 0.000244 MiB
00:05:07.442 element at address: 0x200003a7f2c0 with size: 0.000244 MiB
00:05:07.442 element at address: 0x200003a7f3c0 with size: 0.000244 MiB
00:05:07.442 element at address: 0x200003aff700 with size: 0.000244 MiB
00:05:07.442 element at address: 0x200003aff980 with size: 0.000244 MiB
00:05:07.442 element at address: 0x200003affa80 with size: 0.000244 MiB
00:05:07.442 element at address: 0x200003eff000 with size: 0.000244 MiB
00:05:07.442 element at address: 0x20000d7ff480 with size: 0.000244 MiB
00:05:07.442 element at address: 0x20000d7ff580 with size: 0.000244 MiB
00:05:07.442 element at address: 0x20000d7ff680 with size: 0.000244 MiB
00:05:07.442 element at address: 0x20000d7ff780 with size: 0.000244 MiB
00:05:07.442 element at address: 0x20000d7ff880 with size: 0.000244 MiB
00:05:07.442 element at address: 0x20000d7ff980 with size: 0.000244 MiB
00:05:07.442 element at address: 0x20000d7ffc00 with size: 0.000244 MiB
00:05:07.442 element at address: 0x20000d7ffd00 with size: 0.000244 MiB
00:05:07.442 element at address: 0x20000d7ffe00 with size: 0.000244 MiB
00:05:07.442 element at address: 0x20000d7fff00 with size: 0.000244 MiB
00:05:07.442 element at address: 0x200015dff200 with size: 0.000244 MiB
00:05:07.442 element at address: 0x200015dff300 with size: 0.000244 MiB
00:05:07.442 element at address: 0x200015dff400 with size: 0.000244 MiB
00:05:07.442 element at address: 0x200015dff500 with size: 0.000244 MiB
00:05:07.442 element at address: 0x200015dff600 with size: 0.000244 MiB
00:05:07.442 element at address: 0x200015dff700 with size: 0.000244 MiB
00:05:07.442 element at address: 0x200015dff800 with size: 0.000244 MiB
00:05:07.442 element at address: 0x200015dff900 with size: 0.000244 MiB
00:05:07.442 element at address: 0x200015dffb80 with size: 0.000244 MiB
00:05:07.442 element at address: 0x200015dffc80 with size: 0.000244 MiB
00:05:07.442 element at address: 0x200015dfff00 with size: 0.000244 MiB
00:05:07.442 list of memzone associated elements.
size: 646.798706 MiB
00:05:07.442 element at address: 0x20001de954c0 with size: 211.416809 MiB
00:05:07.442 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:05:07.442 element at address: 0x20002b26ff80 with size: 157.562622 MiB
00:05:07.442 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:05:07.442 element at address: 0x200015ff4740 with size: 92.045105 MiB
00:05:07.442 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_3011973_0
00:05:07.442 element at address: 0x2000009ff340 with size: 48.003113 MiB
00:05:07.442 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3011973_0
00:05:07.442 element at address: 0x200003fff340 with size: 48.003113 MiB
00:05:07.442 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3011973_0
00:05:07.442 element at address: 0x2000071fdb40 with size: 36.008972 MiB
00:05:07.442 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3011973_0
00:05:07.442 element at address: 0x20001c9be900 with size: 20.255615 MiB
00:05:07.442 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:05:07.442 element at address: 0x2000351feb00 with size: 18.005127 MiB
00:05:07.442 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:05:07.442 element at address: 0x2000005ffdc0 with size: 2.000549 MiB
00:05:07.442 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3011973
00:05:07.442 element at address: 0x200003bffdc0 with size: 2.000549 MiB
00:05:07.442 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3011973
00:05:07.442 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:05:07.442 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3011973
00:05:07.442 element at address: 0x20001c0fde00 with size: 1.008179 MiB
00:05:07.442 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:05:07.442 element at address: 0x20001c8bc780 with size: 1.008179 MiB
00:05:07.442 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:05:07.442 element at address: 0x20001bcfde00 with size: 1.008179 MiB
00:05:07.442 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:05:07.442 element at address: 0x200015ef25c0 with size: 1.008179 MiB
00:05:07.442 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:05:07.442 element at address: 0x200003eff100 with size: 1.000549 MiB
00:05:07.442 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3011973
00:05:07.442 element at address: 0x200003affb80 with size: 1.000549 MiB
00:05:07.442 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3011973
00:05:07.442 element at address: 0x20001c4ffd40 with size: 1.000549 MiB
00:05:07.442 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3011973
00:05:07.442 element at address: 0x2000350fe8c0 with size: 1.000549 MiB
00:05:07.442 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3011973
00:05:07.442 element at address: 0x200003a7f4c0 with size: 0.500549 MiB
00:05:07.442 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3011973
00:05:07.442 element at address: 0x200003e7edc0 with size: 0.500549 MiB
00:05:07.442 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3011973
00:05:07.442 element at address: 0x20001c07dbc0 with size: 0.500549 MiB
00:05:07.442 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:05:07.442 element at address: 0x200015e72380 with size: 0.500549 MiB
00:05:07.442 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:05:07.442 element at address: 0x20001c87c540 with size: 0.250549 MiB
00:05:07.442 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:05:07.442 element at address: 0x200003a5f080 with size: 0.125549 MiB
00:05:07.442 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3011973
00:05:07.442 element at address: 0x20001bcf5bc0 with size: 0.031799 MiB
00:05:07.442 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:07.442 element at address: 0x20002b2693c0 with size: 0.023804 MiB
00:05:07.442 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:05:07.442 element at address: 0x200003a5ae40 with size: 0.016174 MiB
00:05:07.442 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3011973
00:05:07.442 element at address: 0x20002b26f540 with size: 0.002502 MiB
00:05:07.442 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:05:07.442 element at address: 0x2000002d7780 with size: 0.000366 MiB
00:05:07.442 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3011973
00:05:07.442 element at address: 0x200003aff800 with size: 0.000366 MiB
00:05:07.442 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3011973
00:05:07.442 element at address: 0x200015dffd80 with size: 0.000366 MiB
00:05:07.442 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3011973
00:05:07.442 element at address: 0x20000d7ffa80 with size: 0.000366 MiB
00:05:07.442 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:05:07.442 16:13:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:07.442 16:13:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3011973
00:05:07.442 16:13:07 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 3011973 ']'
00:05:07.442 16:13:07 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 3011973
00:05:07.442 16:13:07 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname
00:05:07.442 16:13:07 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:07.442 16:13:07 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3011973
00:05:07.701 16:13:08 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:07.701 16:13:08 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:07.701 16:13:08 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3011973' killing process with pid 3011973
00:05:07.701 16:13:08 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 3011973
00:05:07.701 16:13:08 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 3011973
00:05:10.229
00:05:10.229 real 0m4.314s
00:05:10.229 user 0m4.274s
00:05:10.229 sys 0m0.675s
00:05:10.229 16:13:10 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:10.229 16:13:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:10.229 ************************************
00:05:10.229 END TEST dpdk_mem_utility
00:05:10.230 ************************************
00:05:10.230 16:13:10 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:05:10.230 16:13:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:10.230 16:13:10 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:10.230 16:13:10 -- common/autotest_common.sh@10 -- # set +x
00:05:10.230 ************************************
00:05:10.230 START TEST event
00:05:10.230 ************************************
00:05:10.230 16:13:10 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:05:10.230 * Looking for test storage...
00:05:10.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:05:10.230 16:13:10 event -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:05:10.230 16:13:10 event -- common/autotest_common.sh@1681 -- # lcov --version
00:05:10.230 16:13:10 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:05:10.230 16:13:10 event -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:05:10.230 16:13:10 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:10.230 16:13:10 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:10.230 16:13:10 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:10.230 16:13:10 event -- scripts/common.sh@336 -- # IFS=.-:
00:05:10.230 16:13:10 event -- scripts/common.sh@336 -- # read -ra ver1
00:05:10.230 16:13:10 event -- scripts/common.sh@337 -- # IFS=.-:
00:05:10.230 16:13:10 event -- scripts/common.sh@337 -- # read -ra ver2
00:05:10.230 16:13:10 event -- scripts/common.sh@338 -- # local 'op=<'
00:05:10.230 16:13:10 event -- scripts/common.sh@340 -- # ver1_l=2
00:05:10.230 16:13:10 event -- scripts/common.sh@341 -- # ver2_l=1
00:05:10.230 16:13:10 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:10.230 16:13:10 event -- scripts/common.sh@344 -- # case "$op" in
00:05:10.230 16:13:10 event -- scripts/common.sh@345 -- # : 1
00:05:10.230 16:13:10 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:10.230 16:13:10 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:10.230 16:13:10 event -- scripts/common.sh@365 -- # decimal 1
00:05:10.230 16:13:10 event -- scripts/common.sh@353 -- # local d=1
00:05:10.230 16:13:10 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:10.230 16:13:10 event -- scripts/common.sh@355 -- # echo 1
00:05:10.230 16:13:10 event -- scripts/common.sh@365 -- # ver1[v]=1
00:05:10.230 16:13:10 event -- scripts/common.sh@366 -- # decimal 2
00:05:10.230 16:13:10 event -- scripts/common.sh@353 -- # local d=2
00:05:10.230 16:13:10 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:10.230 16:13:10 event -- scripts/common.sh@355 -- # echo 2
00:05:10.230 16:13:10 event -- scripts/common.sh@366 -- # ver2[v]=2
00:05:10.230 16:13:10 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:10.230 16:13:10 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:10.230 16:13:10 event -- scripts/common.sh@368 -- # return 0
00:05:10.230 16:13:10 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:10.230 16:13:10 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:05:10.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:10.230 --rc genhtml_branch_coverage=1
00:05:10.230 --rc genhtml_function_coverage=1
00:05:10.230 --rc genhtml_legend=1
00:05:10.230 --rc geninfo_all_blocks=1
00:05:10.230 --rc geninfo_unexecuted_blocks=1
00:05:10.230
00:05:10.230 '
00:05:10.230 16:13:10 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:05:10.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:10.230 --rc genhtml_branch_coverage=1
00:05:10.230 --rc genhtml_function_coverage=1
00:05:10.230 --rc genhtml_legend=1
00:05:10.230 --rc geninfo_all_blocks=1
00:05:10.230 --rc geninfo_unexecuted_blocks=1
00:05:10.230
00:05:10.230 '
00:05:10.230 16:13:10 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:05:10.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:10.230 --rc genhtml_branch_coverage=1
00:05:10.230 --rc genhtml_function_coverage=1
00:05:10.230 --rc genhtml_legend=1
00:05:10.230 --rc geninfo_all_blocks=1
00:05:10.230 --rc geninfo_unexecuted_blocks=1
00:05:10.230
00:05:10.230 '
00:05:10.230 16:13:10 event -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:05:10.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:10.230 --rc genhtml_branch_coverage=1
00:05:10.230 --rc genhtml_function_coverage=1
00:05:10.230 --rc genhtml_legend=1
00:05:10.230 --rc geninfo_all_blocks=1
00:05:10.230 --rc geninfo_unexecuted_blocks=1
00:05:10.230
00:05:10.230 '
00:05:10.230 16:13:10 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh
00:05:10.230 16:13:10 event -- bdev/nbd_common.sh@6 -- # set -e
00:05:10.230 16:13:10 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:10.230 16:13:10 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:05:10.230 16:13:10 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:10.230 16:13:10 event -- common/autotest_common.sh@10 -- # set +x
00:05:10.488 ************************************
00:05:10.488 START TEST event_perf
00:05:10.488 ************************************
00:05:10.488 16:13:10 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:10.488 Running I/O for 1 seconds...[2024-09-29 16:13:10.836637] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:05:10.488 [2024-09-29 16:13:10.836761] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3012575 ]
00:05:10.488 [2024-09-29 16:13:10.963898] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:10.747 [2024-09-29 16:13:11.226922] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:05:10.747 [2024-09-29 16:13:11.226994] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:05:10.747 [2024-09-29 16:13:11.227085] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:05:10.747 [2024-09-29 16:13:11.227095] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:05:12.121 Running I/O for 1 seconds...
00:05:12.121 lcore 0: 215478
00:05:12.121 lcore 1: 215477
00:05:12.121 lcore 2: 215478
00:05:12.121 lcore 3: 215478
00:05:12.121 done.
00:05:12.121
00:05:12.121 real 0m1.883s
00:05:12.121 user 0m4.709s
00:05:12.121 sys 0m0.159s
00:05:12.121 16:13:12 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:12.121 16:13:12 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:05:12.121 ************************************
00:05:12.121 END TEST event_perf
00:05:12.121 ************************************
00:05:12.379 16:13:12 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:05:12.379 16:13:12 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:05:12.379 16:13:12 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:12.379 16:13:12 event -- common/autotest_common.sh@10 -- # set +x
00:05:12.379 ************************************
00:05:12.379 START TEST event_reactor
00:05:12.379 ************************************
00:05:12.379 16:13:12 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:05:12.379 [2024-09-29 16:13:12.766463] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:05:12.379 [2024-09-29 16:13:12.766575] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3012856 ]
00:05:12.379 [2024-09-29 16:13:12.902641] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:12.637 [2024-09-29 16:13:13.158124] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:05:14.536 test_start
00:05:14.536 oneshot
00:05:14.536 tick 100
00:05:14.536 tick 100
00:05:14.536 tick 250
00:05:14.536 tick 100
00:05:14.536 tick 100
00:05:14.536 tick 100
00:05:14.536 tick 250
00:05:14.536 tick 500
00:05:14.536 tick 100
00:05:14.536 tick 100
00:05:14.536 tick 250
00:05:14.536 tick 100
00:05:14.536 tick 100
00:05:14.536 test_end
00:05:14.536
00:05:14.536 real 0m1.877s
00:05:14.536 user 0m1.704s
00:05:14.536 sys 0m0.163s
00:05:14.536 16:13:14 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:14.536 16:13:14 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:05:14.536 ************************************
00:05:14.536 END TEST event_reactor
00:05:14.536 ************************************
00:05:14.536 16:13:14 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:14.536 16:13:14 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:05:14.536 16:13:14 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:14.536 16:13:14 event -- common/autotest_common.sh@10 -- # set +x
00:05:14.536 ************************************
00:05:14.536 START TEST event_reactor_perf
00:05:14.536 ************************************
00:05:14.536 16:13:14 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:14.536 [2024-09-29 16:13:14.694201] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:05:14.536 [2024-09-29 16:13:14.694309] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3013017 ]
00:05:14.536 [2024-09-29 16:13:14.826879] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:14.536 [2024-09-29 16:13:15.083011] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:05:16.443 test_start
00:05:16.443 test_end
00:05:16.443 Performance: 261054 events per second
00:05:16.443
00:05:16.443 real 0m1.869s
00:05:16.443 user 0m1.701s
00:05:16.443 sys 0m0.158s
00:05:16.443 16:13:16 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:16.443 16:13:16 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:05:16.443 ************************************
00:05:16.443 END TEST event_reactor_perf
00:05:16.443 ************************************
00:05:16.443 16:13:16 event -- event/event.sh@49 -- # uname -s
00:05:16.443 16:13:16 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:05:16.443 16:13:16 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:05:16.443 16:13:16 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:16.443 16:13:16 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:16.443 16:13:16 event -- common/autotest_common.sh@10 -- # set +x
00:05:16.443 ************************************
00:05:16.443 START TEST event_scheduler
00:05:16.443 ************************************
00:05:16.443 16:13:16 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:05:16.444 * Looking for test storage...
00:05:16.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler
00:05:16.444 16:13:16 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:05:16.444 16:13:16 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version
00:05:16.444 16:13:16 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:05:16.444 16:13:16 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:05:16.444 16:13:16 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:16.444 16:13:16 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:16.444 16:13:16 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:16.444 16:13:16 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-:
00:05:16.444 16:13:16 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1
00:05:16.444 16:13:16 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-:
00:05:16.444 16:13:16 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2
00:05:16.444 16:13:16 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<'
00:05:16.444 16:13:16 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2
00:05:16.444 16:13:16 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1
00:05:16.444 16:13:16 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:16.444 16:13:16 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in
00:05:16.444 16:13:16 event.event_scheduler -- scripts/common.sh@345 -- # : 1
00:05:16.444 16:13:16 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:16.444 16:13:16 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:16.444 16:13:16 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1
00:05:16.444 16:13:16 event.event_scheduler -- scripts/common.sh@353 -- # local d=1
00:05:16.444 16:13:16 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:16.444 16:13:16 event.event_scheduler -- scripts/common.sh@355 -- # echo 1
00:05:16.444 16:13:16 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1
00:05:16.444 16:13:16 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2
00:05:16.444 16:13:16 event.event_scheduler -- scripts/common.sh@353 -- # local d=2
00:05:16.444 16:13:16 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:16.444 16:13:16 event.event_scheduler -- scripts/common.sh@355 -- # echo 2
00:05:16.444 16:13:16 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2
00:05:16.444 16:13:16 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:16.444 16:13:16 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:16.444 16:13:16 event.event_scheduler -- scripts/common.sh@368 -- # return 0
00:05:16.444 16:13:16 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:16.444 16:13:16 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:05:16.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:16.444 --rc genhtml_branch_coverage=1
00:05:16.444 --rc genhtml_function_coverage=1
00:05:16.444 --rc genhtml_legend=1
00:05:16.444 --rc geninfo_all_blocks=1
00:05:16.444 --rc geninfo_unexecuted_blocks=1
00:05:16.444
00:05:16.444 '
00:05:16.444 16:13:16 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:05:16.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:16.444 --rc genhtml_branch_coverage=1
00:05:16.444 --rc genhtml_function_coverage=1
00:05:16.444 --rc genhtml_legend=1
00:05:16.444 --rc geninfo_all_blocks=1
00:05:16.444 --rc geninfo_unexecuted_blocks=1
00:05:16.444
00:05:16.444 '
00:05:16.444 16:13:16 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:05:16.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:16.444 --rc genhtml_branch_coverage=1
00:05:16.444 --rc genhtml_function_coverage=1
00:05:16.444 --rc genhtml_legend=1
00:05:16.444 --rc geninfo_all_blocks=1
00:05:16.444 --rc geninfo_unexecuted_blocks=1
00:05:16.444
00:05:16.444 '
00:05:16.444 16:13:16 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:05:16.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:16.444 --rc genhtml_branch_coverage=1
00:05:16.444 --rc genhtml_function_coverage=1
00:05:16.444 --rc genhtml_legend=1
00:05:16.444 --rc geninfo_all_blocks=1
00:05:16.444 --rc geninfo_unexecuted_blocks=1
00:05:16.444
00:05:16.444 '
00:05:16.444 16:13:16 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:05:16.444 16:13:16 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3013338
00:05:16.444 16:13:16 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:05:16.444 16:13:16 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:05:16.444 16:13:16 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3013338
00:05:16.444 16:13:16 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 3013338 ']'
00:05:16.444 16:13:16 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:16.444 16:13:16 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:16.444 16:13:16 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:16.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:16.444 16:13:16 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:16.444 16:13:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:16.444 [2024-09-29 16:13:16.794180] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:05:16.444 [2024-09-29 16:13:16.794321] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3013338 ]
00:05:16.444 [2024-09-29 16:13:16.918257] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:16.701 [2024-09-29 16:13:17.141284] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:05:16.701 [2024-09-29 16:13:17.141338] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:05:16.701 [2024-09-29 16:13:17.141395] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:05:16.701 [2024-09-29 16:13:17.141402] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:05:17.267 16:13:17 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:17.267 16:13:17 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0
00:05:17.267 16:13:17 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:05:17.267 16:13:17 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:17.267 16:13:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:17.267 [2024-09-29 16:13:17.740218] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:05:17.267 [2024-09-29 16:13:17.740267] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor [2024-09-29 16:13:17.740299] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 [2024-09-29 16:13:17.740318] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 [2024-09-29 16:13:17.740337] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:05:17.267 16:13:17 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:17.267 16:13:17 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:05:17.267 16:13:17 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:17.267 16:13:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:17.525 [2024-09-29 16:13:18.047796] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:05:17.525 16:13:18 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:17.525 16:13:18 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:05:17.525 16:13:18 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:17.525 16:13:18 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:17.525 16:13:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:17.525 ************************************
00:05:17.525 START TEST scheduler_create_thread
00:05:17.525 ************************************
00:05:17.525 16:13:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread
00:05:17.525 16:13:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:05:17.525 16:13:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:17.525 16:13:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:17.525 2
00:05:17.783 16:13:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:17.783 16:13:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:05:17.784 16:13:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:17.784 16:13:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:17.784 3
00:05:17.784 16:13:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:17.784 16:13:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:05:17.784 16:13:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:17.784 16:13:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:17.784 4
00:05:17.784 16:13:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:17.784 16:13:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:05:17.784 16:13:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:17.784 16:13:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:17.784 5
00:05:17.784 16:13:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:17.784 16:13:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:05:17.784 16:13:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:17.784 16:13:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:17.784 6
00:05:17.784 16:13:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:17.784 16:13:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:05:17.784 16:13:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:17.784 16:13:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:17.784 7
00:05:17.784 16:13:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:17.784 16:13:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:05:17.784 16:13:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:17.784 16:13:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:17.784 8
00:05:17.784 16:13:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:17.784 16:13:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:05:17.784 16:13:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:17.784 16:13:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:17.784 9
00:05:17.784 16:13:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:17.784 16:13:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:05:17.784 16:13:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:17.784 16:13:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:17.784 10
00:05:17.784 16:13:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:17.784 16:13:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:05:17.784 16:13:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:17.784 16:13:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:17.784 16:13:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:17.784 16:13:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:05:17.784 16:13:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:05:17.784 16:13:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:17.784 16:13:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:17.784 16:13:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:17.784 16:13:18
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:17.784 16:13:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.784 16:13:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.157 16:13:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.157 16:13:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:19.157 16:13:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:19.157 16:13:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.157 16:13:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.529 16:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.529 00:05:20.529 real 0m2.623s 00:05:20.529 user 0m0.012s 00:05:20.529 sys 0m0.003s 00:05:20.529 16:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.529 16:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.529 ************************************ 00:05:20.529 END TEST scheduler_create_thread 00:05:20.529 ************************************ 00:05:20.529 16:13:20 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:20.529 16:13:20 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3013338 00:05:20.529 16:13:20 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 3013338 ']' 00:05:20.529 16:13:20 event.event_scheduler -- common/autotest_common.sh@954 -- # 
kill -0 3013338 00:05:20.529 16:13:20 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:20.529 16:13:20 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:20.529 16:13:20 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3013338 00:05:20.529 16:13:20 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:20.529 16:13:20 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:20.529 16:13:20 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3013338' 00:05:20.529 killing process with pid 3013338 00:05:20.529 16:13:20 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 3013338 00:05:20.529 16:13:20 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 3013338 00:05:20.792 [2024-09-29 16:13:21.180929] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:22.169
00:05:22.169 real 0m5.757s
00:05:22.169 user 0m10.004s
00:05:22.169 sys 0m0.492s
00:05:22.169 16:13:22 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:22.169 16:13:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:22.169 ************************************
00:05:22.169 END TEST event_scheduler
00:05:22.169 ************************************
00:05:22.169 16:13:22 event -- event/event.sh@51 -- # modprobe -n nbd
00:05:22.169 16:13:22 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:05:22.169 16:13:22 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:22.169 16:13:22 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:22.169 16:13:22 event -- common/autotest_common.sh@10 -- # set +x
00:05:22.169 ************************************
00:05:22.169 START TEST app_repeat
00:05:22.169 ************************************
00:05:22.169 16:13:22 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test
00:05:22.169 16:13:22 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:22.169 16:13:22 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:22.169 16:13:22 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:05:22.169 16:13:22 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:22.169 16:13:22 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:05:22.169 16:13:22 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:05:22.169 16:13:22 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:05:22.169 16:13:22 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3014057
00:05:22.169 16:13:22 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:05:22.169 16:13:22 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:05:22.169 16:13:22 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3014057'
00:05:22.169 Process app_repeat pid: 3014057
00:05:22.169 16:13:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:22.169 16:13:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:05:22.169 spdk_app_start Round 0
00:05:22.169 16:13:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3014057 /var/tmp/spdk-nbd.sock
00:05:22.169 16:13:22 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3014057 ']'
00:05:22.169 16:13:22 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:22.169 16:13:22 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:22.169 16:13:22 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:22.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:22.169 16:13:22 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:22.169 16:13:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:22.169 [2024-09-29 16:13:22.440952] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:05:22.169 [2024-09-29 16:13:22.441129] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3014057 ]
00:05:22.169 [2024-09-29 16:13:22.573736] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:22.426 [2024-09-29 16:13:22.835097] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:05:22.426 [2024-09-29 16:13:22.835101] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:05:22.992 16:13:23 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:22.992 16:13:23 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:05:22.992 16:13:23 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:23.250 Malloc0
00:05:23.250 16:13:23 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:23.814 Malloc1
00:05:23.814 16:13:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:23.814 16:13:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:23.814 16:13:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:23.814 16:13:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:23.814 16:13:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:23.814 16:13:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:23.814 16:13:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:23.814 16:13:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:23.814 16:13:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:23.814 16:13:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:23.814 16:13:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:23.814 16:13:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:23.814 16:13:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:23.814 16:13:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:23.814 16:13:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:23.814 16:13:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:24.072 /dev/nbd0
00:05:24.072 16:13:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:24.072 16:13:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:24.072 16:13:24 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:05:24.072 16:13:24 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:05:24.072 16:13:24 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:05:24.072 16:13:24 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:05:24.072 16:13:24 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:05:24.072 16:13:24 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:05:24.072 16:13:24 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:05:24.072 16:13:24 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:05:24.072 16:13:24 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:24.072 1+0 records in
00:05:24.072 1+0 records out
00:05:24.072 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024826 s, 16.5 MB/s
00:05:24.072 16:13:24 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:24.072 16:13:24 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:05:24.072 16:13:24 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:24.072 16:13:24 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:05:24.072 16:13:24 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:05:24.072 16:13:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:24.072 16:13:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:24.072 16:13:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:24.330 /dev/nbd1
00:05:24.330 16:13:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:24.330 16:13:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:24.330 16:13:24 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:05:24.330 16:13:24 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:05:24.330 16:13:24 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:05:24.330 16:13:24 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:05:24.330 16:13:24 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:05:24.330 16:13:24 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:05:24.330 16:13:24 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:05:24.330 16:13:24 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:05:24.330 16:13:24 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:24.330 1+0 records in
00:05:24.330 1+0 records out
00:05:24.330 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184085 s, 22.3 MB/s
00:05:24.330 16:13:24 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:24.330 16:13:24 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:05:24.330 16:13:24 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:24.330 16:13:24 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:05:24.330 16:13:24 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:05:24.330 16:13:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:24.330 16:13:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:24.330 16:13:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:24.330 16:13:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:24.330 16:13:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:24.587 16:13:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:24.587 {
00:05:24.587 "nbd_device": "/dev/nbd0",
00:05:24.587 "bdev_name": "Malloc0"
00:05:24.587 },
00:05:24.587 {
00:05:24.587 "nbd_device": "/dev/nbd1",
00:05:24.587 "bdev_name": "Malloc1"
00:05:24.587 }
00:05:24.587 ]'
00:05:24.587 16:13:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:24.587 {
00:05:24.587 "nbd_device": "/dev/nbd0",
00:05:24.587 "bdev_name": "Malloc0"
},
00:05:24.587 {
00:05:24.587 "nbd_device": "/dev/nbd1",
00:05:24.587 "bdev_name": "Malloc1"
00:05:24.587 }
00:05:24.587 ]'
00:05:24.587 16:13:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:24.587 16:13:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:24.587 /dev/nbd1'
00:05:24.587 16:13:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:24.587 /dev/nbd1'
00:05:24.588 16:13:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:24.588 16:13:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:24.588 16:13:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:24.588 16:13:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:24.588 16:13:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:24.588 16:13:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:24.588 16:13:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:24.588 16:13:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:24.588 16:13:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:24.588 16:13:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:24.588 16:13:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:24.588 16:13:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:24.588 256+0 records in
00:05:24.588 256+0 records out
00:05:24.588 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00500352 s, 210 MB/s
00:05:24.588 16:13:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:24.588 16:13:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:24.588 256+0 records in
00:05:24.588 256+0 records out
00:05:24.588 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243435 s, 43.1 MB/s
00:05:24.588 16:13:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:24.588 16:13:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:24.845 256+0 records in
00:05:24.845 256+0 records out
00:05:24.845 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.029837 s, 35.1 MB/s
00:05:24.845 16:13:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:24.845 16:13:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:24.845 16:13:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:24.845 16:13:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:24.845 16:13:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:24.845 16:13:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:24.845 16:13:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:24.845 16:13:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:24.845 16:13:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:24.845 16:13:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:24.845 16:13:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:24.845 16:13:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:24.845 16:13:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:24.845 16:13:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:24.845 16:13:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:24.845 16:13:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:24.845 16:13:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:24.845 16:13:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:24.845 16:13:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:25.102 16:13:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:25.102 16:13:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:25.102 16:13:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:25.102 16:13:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:25.102 16:13:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:25.102 16:13:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:25.102 16:13:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:25.102 16:13:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:25.102 16:13:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:25.102 16:13:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:25.359 16:13:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:25.359 16:13:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:25.359 16:13:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:25.359 16:13:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:25.359 16:13:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:25.359 16:13:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:25.359 16:13:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:25.359 16:13:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:25.359 16:13:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:25.359 16:13:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:25.359 16:13:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:25.617 16:13:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:25.617 16:13:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:25.617 16:13:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:25.617 16:13:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:25.617 16:13:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:25.617 16:13:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:25.617 16:13:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:25.617 16:13:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:25.617 16:13:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:25.617 16:13:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:25.617 16:13:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:25.617 16:13:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:25.617 16:13:26 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:26.181 16:13:26 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:27.553 [2024-09-29 16:13:27.924633] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:27.810 [2024-09-29 16:13:28.173730] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:05:27.810 [2024-09-29 16:13:28.173733] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:05:28.068 [2024-09-29 16:13:28.390317] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:28.068 [2024-09-29 16:13:28.390394] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:28.995 16:13:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:29.000 16:13:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:05:29.000 spdk_app_start Round 1
00:05:29.000 16:13:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3014057 /var/tmp/spdk-nbd.sock
00:05:29.000 16:13:29 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3014057 ']'
00:05:29.000 16:13:29 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:29.000 16:13:29 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:29.000 16:13:29 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:29.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:29.000 16:13:29 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:29.000 16:13:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:29.257 16:13:29 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:29.257 16:13:29 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:05:29.257 16:13:29 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:29.821 Malloc0
00:05:29.821 16:13:30 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:30.078 Malloc1
00:05:30.078 16:13:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:30.078 16:13:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:30.078 16:13:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:30.078 16:13:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:30.078 16:13:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:30.078 16:13:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:30.078 16:13:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:30.078 16:13:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:30.078 16:13:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:30.078 16:13:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:30.078 16:13:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:30.078 16:13:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:30.078 16:13:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:30.078 16:13:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:30.078 16:13:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:30.078 16:13:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:30.336 /dev/nbd0
00:05:30.336 16:13:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:30.336 16:13:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:30.336 16:13:30 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:05:30.336 16:13:30 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:05:30.336 16:13:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:05:30.336 16:13:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:05:30.336 16:13:30 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:05:30.336 16:13:30 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:05:30.336 16:13:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:05:30.336 16:13:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:05:30.336 16:13:30 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:30.336 1+0 records in
00:05:30.336 1+0 records out
00:05:30.336 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000202979 s, 20.2 MB/s
00:05:30.336 16:13:30 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:30.336 16:13:30 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:05:30.336 16:13:30 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:30.336 16:13:30 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:05:30.336 16:13:30 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:05:30.336 16:13:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:30.336 16:13:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:30.336 16:13:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:30.900 /dev/nbd1
00:05:30.900 16:13:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:30.900 16:13:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:30.900 16:13:31 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:05:30.900 16:13:31 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:05:30.900 16:13:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:05:30.900 16:13:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:05:30.900 16:13:31 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:05:30.900 16:13:31 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:05:30.900 16:13:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:05:30.900 16:13:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:05:30.900 16:13:31 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:30.900 1+0 records in
00:05:30.900 1+0 records out
00:05:30.900 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000202656 s, 20.2 MB/s
00:05:30.900 16:13:31 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:30.900 16:13:31 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:05:30.900 16:13:31 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:30.900 16:13:31 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:05:30.900 16:13:31 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:05:30.900 16:13:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:30.900 16:13:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:30.900 16:13:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:30.900 16:13:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:30.900 16:13:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:30.900 16:13:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:30.900 {
00:05:30.900 "nbd_device": "/dev/nbd0",
00:05:30.900 "bdev_name": "Malloc0"
00:05:30.900 },
00:05:30.900 {
00:05:30.900 "nbd_device": "/dev/nbd1",
00:05:30.900 "bdev_name": "Malloc1"
00:05:30.900 }
00:05:30.900 ]'
00:05:30.900 16:13:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:30.900 {
00:05:30.900 "nbd_device": "/dev/nbd0",
00:05:30.900 "bdev_name": "Malloc0"
00:05:30.900 },
00:05:30.900 {
00:05:30.900 "nbd_device": "/dev/nbd1",
00:05:30.900 "bdev_name": "Malloc1"
00:05:30.900 }
00:05:30.900 ]'
00:05:30.900 16:13:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:31.157 16:13:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:31.157 /dev/nbd1'
00:05:31.157 16:13:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:31.157 /dev/nbd1'
16:13:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:31.157 16:13:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:31.157 16:13:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:31.157 16:13:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:31.157 16:13:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:31.157 16:13:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:31.157 16:13:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:31.157 16:13:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:31.157 16:13:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:31.157 16:13:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:31.157 16:13:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:31.157 16:13:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:31.157 256+0 records in
00:05:31.157 256+0 records out
00:05:31.157 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00513358 s, 204 MB/s
00:05:31.157 16:13:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:31.157 16:13:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:31.157 256+0 records in
00:05:31.157 256+0 records out
00:05:31.157 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243672 s, 43.0 MB/s
00:05:31.157 16:13:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:31.157 16:13:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:31.157 256+0 records in 00:05:31.157 256+0 records out 00:05:31.157 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0299161 s, 35.1 MB/s 00:05:31.157 16:13:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:31.157 16:13:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.157 16:13:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.157 16:13:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:31.157 16:13:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:31.157 16:13:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:31.157 16:13:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:31.157 16:13:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:31.157 16:13:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:31.157 16:13:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:31.157 16:13:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:31.157 16:13:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:31.157 16:13:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:31.157 16:13:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.157 16:13:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:31.157 16:13:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:31.157 16:13:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:31.157 16:13:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.157 16:13:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:31.414 16:13:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:31.414 16:13:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:31.414 16:13:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:31.414 16:13:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.414 16:13:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.414 16:13:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:31.414 16:13:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.414 16:13:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.414 16:13:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.414 16:13:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:31.671 16:13:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:31.671 16:13:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:31.671 16:13:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:31.671 16:13:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.671 16:13:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.671 16:13:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:31.671 16:13:32 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:31.671 16:13:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.671 16:13:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.671 16:13:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.671 16:13:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.928 16:13:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:31.928 16:13:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:31.928 16:13:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:31.928 16:13:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:32.186 16:13:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:32.186 16:13:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:32.186 16:13:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:32.186 16:13:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:32.186 16:13:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:32.186 16:13:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:32.186 16:13:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:32.186 16:13:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:32.186 16:13:32 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:32.443 16:13:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:33.818 [2024-09-29 16:13:34.316860] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:34.109 [2024-09-29 16:13:34.571899] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.109 [2024-09-29 16:13:34.571900] 
reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.390 [2024-09-29 16:13:34.789333] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:34.390 [2024-09-29 16:13:34.789420] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:35.762 16:13:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:35.762 16:13:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:35.762 spdk_app_start Round 2 00:05:35.762 16:13:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3014057 /var/tmp/spdk-nbd.sock 00:05:35.762 16:13:35 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3014057 ']' 00:05:35.762 16:13:35 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:35.762 16:13:35 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:35.762 16:13:35 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:35.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:35.762 16:13:35 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:35.762 16:13:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:35.762 16:13:36 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:35.762 16:13:36 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:35.762 16:13:36 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:36.019 Malloc0 00:05:36.019 16:13:36 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:36.584 Malloc1 00:05:36.584 16:13:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.584 16:13:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.584 16:13:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.584 16:13:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:36.584 16:13:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.584 16:13:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:36.584 16:13:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.584 16:13:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.584 16:13:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.584 16:13:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:36.584 16:13:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.584 16:13:36 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:36.584 16:13:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:36.584 16:13:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:36.584 16:13:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.584 16:13:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:36.842 /dev/nbd0 00:05:36.842 16:13:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:36.842 16:13:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:36.842 16:13:37 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:36.842 16:13:37 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:36.842 16:13:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:36.842 16:13:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:36.842 16:13:37 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:36.842 16:13:37 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:36.842 16:13:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:36.842 16:13:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:36.842 16:13:37 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.842 1+0 records in 00:05:36.843 1+0 records out 00:05:36.843 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283529 s, 14.4 MB/s 00:05:36.843 16:13:37 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.843 16:13:37 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:36.843 16:13:37 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.843 16:13:37 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:36.843 16:13:37 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:36.843 16:13:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.843 16:13:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.843 16:13:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:37.101 /dev/nbd1 00:05:37.101 16:13:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:37.101 16:13:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:37.101 16:13:37 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:37.101 16:13:37 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:37.101 16:13:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:37.101 16:13:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:37.101 16:13:37 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:37.101 16:13:37 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:37.101 16:13:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:37.101 16:13:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:37.101 16:13:37 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:37.101 1+0 records in 00:05:37.101 1+0 records out 00:05:37.101 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210254 s, 19.5 MB/s 00:05:37.101 16:13:37 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:37.101 16:13:37 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:37.101 16:13:37 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:37.101 16:13:37 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:37.101 16:13:37 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:37.101 16:13:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:37.101 16:13:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.101 16:13:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:37.101 16:13:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.101 16:13:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:37.359 16:13:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:37.359 { 00:05:37.359 "nbd_device": "/dev/nbd0", 00:05:37.359 "bdev_name": "Malloc0" 00:05:37.359 }, 00:05:37.359 { 00:05:37.359 "nbd_device": "/dev/nbd1", 00:05:37.359 "bdev_name": "Malloc1" 00:05:37.359 } 00:05:37.359 ]' 00:05:37.359 16:13:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:37.359 { 00:05:37.359 "nbd_device": "/dev/nbd0", 00:05:37.359 "bdev_name": "Malloc0" 00:05:37.359 }, 00:05:37.359 { 00:05:37.359 "nbd_device": "/dev/nbd1", 00:05:37.359 "bdev_name": "Malloc1" 00:05:37.359 } 00:05:37.359 ]' 00:05:37.359 16:13:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:37.359 16:13:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:37.359 /dev/nbd1' 00:05:37.359 16:13:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:37.359 /dev/nbd1' 00:05:37.359 
16:13:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:37.359 16:13:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:37.359 16:13:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:37.359 16:13:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:37.359 16:13:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:37.359 16:13:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:37.359 16:13:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.359 16:13:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:37.359 16:13:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:37.359 16:13:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:37.359 16:13:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:37.359 16:13:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:37.359 256+0 records in 00:05:37.359 256+0 records out 00:05:37.359 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00511188 s, 205 MB/s 00:05:37.359 16:13:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:37.359 16:13:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:37.359 256+0 records in 00:05:37.359 256+0 records out 00:05:37.359 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0245933 s, 42.6 MB/s 00:05:37.359 16:13:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:37.359 16:13:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:37.617 256+0 records in 00:05:37.617 256+0 records out 00:05:37.617 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0306474 s, 34.2 MB/s 00:05:37.617 16:13:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:37.617 16:13:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.617 16:13:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:37.617 16:13:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:37.617 16:13:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:37.617 16:13:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:37.617 16:13:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:37.617 16:13:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:37.617 16:13:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:37.617 16:13:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:37.617 16:13:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:37.617 16:13:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:37.617 16:13:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:37.617 16:13:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.617 16:13:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:37.617 16:13:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:37.617 16:13:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:37.617 16:13:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:37.617 16:13:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:37.876 16:13:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:37.876 16:13:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:37.876 16:13:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:37.876 16:13:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.876 16:13:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.876 16:13:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:37.876 16:13:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:37.876 16:13:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.876 16:13:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:37.876 16:13:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:38.134 16:13:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:38.134 16:13:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:38.134 16:13:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:38.134 16:13:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:38.134 16:13:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:38.134 16:13:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:38.134 16:13:38 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:38.134 16:13:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:38.134 16:13:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.134 16:13:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.134 16:13:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:38.392 16:13:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:38.392 16:13:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:38.392 16:13:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:38.392 16:13:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:38.392 16:13:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:38.392 16:13:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:38.392 16:13:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:38.392 16:13:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:38.392 16:13:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:38.392 16:13:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:38.392 16:13:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:38.392 16:13:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:38.392 16:13:38 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:38.956 16:13:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:40.330 [2024-09-29 16:13:40.686509] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:40.587 [2024-09-29 16:13:40.935842] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.587 [2024-09-29 16:13:40.935845] 
reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.587 [2024-09-29 16:13:41.149732] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:40.587 [2024-09-29 16:13:41.149806] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:41.959 16:13:42 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3014057 /var/tmp/spdk-nbd.sock 00:05:41.959 16:13:42 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3014057 ']' 00:05:41.959 16:13:42 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:41.959 16:13:42 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:41.959 16:13:42 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:41.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:41.959 16:13:42 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:41.959 16:13:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:42.217 16:13:42 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:42.217 16:13:42 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:42.217 16:13:42 event.app_repeat -- event/event.sh@39 -- # killprocess 3014057 00:05:42.217 16:13:42 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 3014057 ']' 00:05:42.217 16:13:42 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 3014057 00:05:42.217 16:13:42 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:42.217 16:13:42 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:42.217 16:13:42 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3014057 00:05:42.217 16:13:42 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:42.217 16:13:42 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:42.217 16:13:42 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3014057' 00:05:42.217 killing process with pid 3014057 00:05:42.217 16:13:42 event.app_repeat -- common/autotest_common.sh@969 -- # kill 3014057 00:05:42.217 16:13:42 event.app_repeat -- common/autotest_common.sh@974 -- # wait 3014057 00:05:43.590 spdk_app_start is called in Round 0. 00:05:43.590 Shutdown signal received, stop current app iteration 00:05:43.590 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 reinitialization... 00:05:43.590 spdk_app_start is called in Round 1. 00:05:43.590 Shutdown signal received, stop current app iteration 00:05:43.590 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 reinitialization... 00:05:43.590 spdk_app_start is called in Round 2. 
00:05:43.590 Shutdown signal received, stop current app iteration
00:05:43.590 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 reinitialization...
00:05:43.590 spdk_app_start is called in Round 3.
00:05:43.590 Shutdown signal received, stop current app iteration
00:05:43.590 16:13:43 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:05:43.590 16:13:43 event.app_repeat -- event/event.sh@42 -- # return 0
00:05:43.590
00:05:43.590 real 0m21.437s
00:05:43.590 user 0m44.422s
00:05:43.590 sys 0m3.451s
00:05:43.590 16:13:43 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:43.590 16:13:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:43.590 ************************************
00:05:43.590 END TEST app_repeat
00:05:43.590 ************************************
00:05:43.590 16:13:43 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:05:43.590 16:13:43 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:05:43.590 16:13:43 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:43.590 16:13:43 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:43.590 16:13:43 event -- common/autotest_common.sh@10 -- # set +x
00:05:43.590 ************************************
00:05:43.590 START TEST cpu_locks
00:05:43.590 ************************************
00:05:43.590 16:13:43 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:05:43.590 * Looking for test storage...
00:05:43.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:05:43.590 16:13:43 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:05:43.590 16:13:43 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version
00:05:43.590 16:13:43 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:05:43.590 16:13:44 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:05:43.590 16:13:44 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:43.590 16:13:44 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:43.590 16:13:44 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:43.590 16:13:44 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:05:43.590 16:13:44 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:05:43.590 16:13:44 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:05:43.590 16:13:44 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:05:43.590 16:13:44 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:05:43.590 16:13:44 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:05:43.590 16:13:44 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:05:43.590 16:13:44 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:43.590 16:13:44 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:05:43.590 16:13:44 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:05:43.590 16:13:44 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:43.590 16:13:44 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:43.590 16:13:44 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:05:43.590 16:13:44 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:05:43.590 16:13:44 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:43.590 16:13:44 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:05:43.590 16:13:44 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:05:43.590 16:13:44 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:05:43.590 16:13:44 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:05:43.590 16:13:44 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:43.590 16:13:44 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:05:43.590 16:13:44 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:05:43.590 16:13:44 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:43.590 16:13:44 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:43.590 16:13:44 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:05:43.590 16:13:44 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:43.590 16:13:44 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:05:43.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:43.590 --rc genhtml_branch_coverage=1
00:05:43.590 --rc genhtml_function_coverage=1
00:05:43.590 --rc genhtml_legend=1
00:05:43.590 --rc geninfo_all_blocks=1
00:05:43.590 --rc geninfo_unexecuted_blocks=1
00:05:43.590
00:05:43.590 '
00:05:43.590 16:13:44 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:05:43.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:43.590 --rc genhtml_branch_coverage=1
00:05:43.590 --rc genhtml_function_coverage=1
00:05:43.590 --rc genhtml_legend=1
00:05:43.590 --rc geninfo_all_blocks=1
00:05:43.590 --rc geninfo_unexecuted_blocks=1
00:05:43.590
00:05:43.590 '
00:05:43.590 16:13:44 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:05:43.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:43.590 --rc genhtml_branch_coverage=1
00:05:43.590 --rc genhtml_function_coverage=1
00:05:43.590 --rc genhtml_legend=1
00:05:43.590 --rc geninfo_all_blocks=1
00:05:43.590 --rc geninfo_unexecuted_blocks=1
00:05:43.590
00:05:43.590 '
00:05:43.590 16:13:44 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:05:43.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:43.590 --rc genhtml_branch_coverage=1
00:05:43.590 --rc genhtml_function_coverage=1
00:05:43.590 --rc genhtml_legend=1
00:05:43.590 --rc geninfo_all_blocks=1
00:05:43.590 --rc geninfo_unexecuted_blocks=1
00:05:43.590
00:05:43.590 '
00:05:43.590 16:13:44 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:05:43.590 16:13:44 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:05:43.590 16:13:44 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:05:43.590 16:13:44 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:05:43.590 16:13:44 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:43.590 16:13:44 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:43.590 16:13:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:43.590 ************************************
00:05:43.590 START TEST default_locks
00:05:43.590 ************************************
00:05:43.590 16:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks
00:05:43.590 16:13:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3016820
00:05:43.590 16:13:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:43.590 16:13:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3016820
00:05:43.590 16:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 3016820 ']'
00:05:43.590 16:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:43.590 16:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:43.590 16:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:43.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:43.591 16:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:43.591 16:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:43.591 [2024-09-29 16:13:44.150264] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:05:43.591 [2024-09-29 16:13:44.150424] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3016820 ]
00:05:43.848 [2024-09-29 16:13:44.279423] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:44.105 [2024-09-29 16:13:44.521303] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:05:45.040 16:13:45 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:45.040 16:13:45 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0
00:05:45.040 16:13:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3016820
00:05:45.040 16:13:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3016820
00:05:45.040 16:13:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:45.298 lslocks: write error
00:05:45.298 16:13:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3016820
00:05:45.298 16:13:45 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 3016820 ']'
00:05:45.298 16:13:45 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 3016820
00:05:45.298 16:13:45 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname
00:05:45.298 16:13:45 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:45.298 16:13:45 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3016820
00:05:45.298 16:13:45 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:45.298 16:13:45 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:45.298 16:13:45 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3016820'
00:05:45.298 killing process with pid 3016820
00:05:45.298 16:13:45 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 3016820
00:05:45.298 16:13:45 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 3016820
00:05:47.828 16:13:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3016820
00:05:47.828 16:13:48 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0
00:05:47.828 16:13:48 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3016820
00:05:47.828 16:13:48 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:05:47.828 16:13:48 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:47.828 16:13:48 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:05:47.828 16:13:48 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:47.828 16:13:48 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 3016820
00:05:47.828 16:13:48 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 3016820 ']'
00:05:47.828 16:13:48 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:47.828 16:13:48 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:47.828 16:13:48 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:47.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:47.828 16:13:48 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:47.828 16:13:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:47.828 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3016820) - No such process
00:05:47.828 ERROR: process (pid: 3016820) is no longer running
00:05:47.828 16:13:48 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:47.828 16:13:48 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1
00:05:47.828 16:13:48 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1
00:05:47.828 16:13:48 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:47.828 16:13:48 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:05:47.828 16:13:48 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:47.828 16:13:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:05:47.828 16:13:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:47.828 16:13:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:05:47.828 16:13:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:47.828
00:05:47.828 real 0m4.243s
00:05:47.828 user 0m4.234s
00:05:47.828 sys 0m0.754s
00:05:47.828 16:13:48 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:47.828 16:13:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:47.828 ************************************
00:05:47.828 END TEST default_locks
00:05:47.828 ************************************
00:05:47.828 16:13:48 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:05:47.828 16:13:48 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:47.828 16:13:48 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:47.828 16:13:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:47.828 ************************************
00:05:47.828 START TEST default_locks_via_rpc
00:05:47.828 ************************************
00:05:47.828 16:13:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc
00:05:47.828 16:13:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3017385
00:05:47.828 16:13:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:47.828 16:13:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3017385
00:05:47.828 16:13:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3017385 ']'
00:05:47.828 16:13:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:47.828 16:13:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:47.828 16:13:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:47.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:47.828 16:13:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:47.828 16:13:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:48.087 [2024-09-29 16:13:48.438752] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:05:48.087 [2024-09-29 16:13:48.438918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3017385 ]
00:05:48.087 [2024-09-29 16:13:48.566237] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:48.346 [2024-09-29 16:13:48.817054] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:05:49.280 16:13:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:49.280 16:13:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:05:49.280 16:13:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:05:49.280 16:13:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:49.280 16:13:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:49.280 16:13:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:49.280 16:13:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:05:49.280 16:13:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:49.280 16:13:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:05:49.280 16:13:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:49.280 16:13:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:05:49.280 16:13:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:49.280 16:13:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:49.280 16:13:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:49.280 16:13:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3017385
00:05:49.280 16:13:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3017385
00:05:49.280 16:13:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:49.539 16:13:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3017385
00:05:49.539 16:13:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 3017385 ']'
00:05:49.539 16:13:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 3017385
00:05:49.539 16:13:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname
00:05:49.539 16:13:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:49.539 16:13:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3017385
00:05:49.539 16:13:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:49.539 16:13:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:49.539 16:13:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3017385'
00:05:49.539 killing process with pid 3017385
00:05:49.539 16:13:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 3017385
00:05:49.539 16:13:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 3017385
00:05:52.821
00:05:52.821 real 0m4.322s
00:05:52.821 user 0m4.327s
00:05:52.821 sys 0m0.792s
00:05:52.821 16:13:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:52.821 16:13:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:52.821 ************************************
00:05:52.821 END TEST default_locks_via_rpc
00:05:52.821 ************************************
00:05:52.821 16:13:52 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:05:52.821 16:13:52 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:52.821 16:13:52 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:52.821 16:13:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:52.821 ************************************
00:05:52.821 START TEST non_locking_app_on_locked_coremask
00:05:52.821 ************************************
00:05:52.821 16:13:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask
00:05:52.821 16:13:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3017941
00:05:52.821 16:13:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:52.821 16:13:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3017941 /var/tmp/spdk.sock
00:05:52.821 16:13:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3017941 ']'
00:05:52.821 16:13:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:52.821 16:13:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:52.821 16:13:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:52.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:52.821 16:13:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:52.821 16:13:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:52.821 [2024-09-29 16:13:52.815061] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:05:52.821 [2024-09-29 16:13:52.815194] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3017941 ]
00:05:52.821 [2024-09-29 16:13:52.947865] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:52.821 [2024-09-29 16:13:53.202358] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:05:53.754 16:13:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:53.754 16:13:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:05:53.754 16:13:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3018082
00:05:53.754 16:13:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:05:53.754 16:13:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3018082 /var/tmp/spdk2.sock
00:05:53.754 16:13:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3018082 ']'
00:05:53.754 16:13:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:53.754 16:13:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:53.754 16:13:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:53.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:53.754 16:13:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:53.754 16:13:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:53.754 [2024-09-29 16:13:54.230177] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:05:53.754 [2024-09-29 16:13:54.230319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3018082 ]
00:05:54.011 [2024-09-29 16:13:54.428692] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:54.011 [2024-09-29 16:13:54.428769] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:54.576 [2024-09-29 16:13:54.941805] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:05:56.477 16:13:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:56.477 16:13:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:05:56.477 16:13:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3017941
00:05:56.477 16:13:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3017941
00:05:56.477 16:13:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:57.044 lslocks: write error
00:05:57.044 16:13:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3017941
00:05:57.044 16:13:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3017941 ']'
00:05:57.044 16:13:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3017941
00:05:57.044 16:13:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:05:57.044 16:13:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:57.044 16:13:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3017941
00:05:57.044 16:13:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:57.044 16:13:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:57.044 16:13:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3017941'
00:05:57.044 killing process with pid 3017941
00:05:57.044 16:13:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3017941
00:05:57.044 16:13:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3017941
00:06:02.310 16:14:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3018082
00:06:02.310 16:14:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3018082 ']'
00:06:02.310 16:14:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3018082
00:06:02.310 16:14:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:06:02.310 16:14:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:02.310 16:14:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3018082
00:06:02.568 16:14:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:02.568 16:14:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:02.568 16:14:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3018082'
00:06:02.568 killing process with pid 3018082
00:06:02.568 16:14:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3018082
00:06:02.568 16:14:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3018082
00:06:05.095
00:06:05.095 real 0m12.861s
00:06:05.095 user 0m13.265s
00:06:05.095 sys 0m1.576s
00:06:05.095 16:14:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:05.095 16:14:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:05.095 ************************************
00:06:05.095 END TEST non_locking_app_on_locked_coremask
00:06:05.095 ************************************
00:06:05.095 16:14:05 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:06:05.095 16:14:05 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:05.095 16:14:05 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:05.095 16:14:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:05.096 ************************************
00:06:05.096 START TEST locking_app_on_unlocked_coremask
00:06:05.096 ************************************
00:06:05.096 16:14:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask
00:06:05.096 16:14:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3019452
00:06:05.096 16:14:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:06:05.096 16:14:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3019452 /var/tmp/spdk.sock
00:06:05.096 16:14:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3019452 ']'
00:06:05.096 16:14:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:05.096 16:14:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:05.096 16:14:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:05.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:05.096 16:14:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:05.096 16:14:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:05.354 [2024-09-29 16:14:05.729626] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:06:05.354 [2024-09-29 16:14:05.729794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3019452 ]
00:06:05.354 [2024-09-29 16:14:05.863743] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:05.354 [2024-09-29 16:14:05.863806] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:05.611 [2024-09-29 16:14:06.124925] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:06.544 16:14:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:06.544 16:14:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0
00:06:06.544 16:14:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3019707
00:06:06.544 16:14:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3019707 /var/tmp/spdk2.sock
00:06:06.544 16:14:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:06.544 16:14:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3019707 ']'
00:06:06.544 16:14:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:06.544 16:14:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:06.544 16:14:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:06.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:06.544 16:14:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:06.544 16:14:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:06.802 [2024-09-29 16:14:07.186565] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:06:06.802 [2024-09-29 16:14:07.186715] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3019707 ]
00:06:07.060 [2024-09-29 16:14:07.378955] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:07.626 [2024-09-29 16:14:07.907320] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:09.528 16:14:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:09.528 16:14:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0
00:06:09.528 16:14:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3019707
00:06:09.528 16:14:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3019707
00:06:09.528 16:14:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:09.786 lslocks: write error
00:06:09.786 16:14:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3019452
00:06:09.786 16:14:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3019452 ']'
00:06:09.786 16:14:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 3019452
00:06:09.786 16:14:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname
00:06:09.786 16:14:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:09.786 16:14:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3019452
00:06:10.043 16:14:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:10.043 16:14:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:10.043 16:14:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3019452'
00:06:10.043 killing process with pid 3019452
00:06:10.043 16:14:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 3019452
00:06:10.043 16:14:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 3019452
00:06:15.444 16:14:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3019707
00:06:15.444 16:14:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3019707 ']'
00:06:15.444 16:14:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 3019707
00:06:15.444 16:14:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname
00:06:15.444 16:14:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:15.444 16:14:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3019707
00:06:15.444 16:14:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:15.444 16:14:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:15.444 16:14:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3019707'
00:06:15.444 killing process with pid 3019707
00:06:15.444 16:14:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 3019707
00:06:15.444 16:14:15
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 3019707 00:06:17.975 00:06:17.975 real 0m12.740s 00:06:17.975 user 0m13.036s 00:06:17.975 sys 0m1.533s 00:06:17.975 16:14:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:17.975 16:14:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.975 ************************************ 00:06:17.975 END TEST locking_app_on_unlocked_coremask 00:06:17.975 ************************************ 00:06:17.975 16:14:18 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:17.975 16:14:18 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:17.975 16:14:18 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:17.975 16:14:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.975 ************************************ 00:06:17.975 START TEST locking_app_on_locked_coremask 00:06:17.975 ************************************ 00:06:17.975 16:14:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:17.975 16:14:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3021073 00:06:17.975 16:14:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:17.975 16:14:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3021073 /var/tmp/spdk.sock 00:06:17.975 16:14:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3021073 ']' 00:06:17.975 16:14:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 
00:06:17.975 16:14:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.975 16:14:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.975 16:14:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.975 16:14:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.975 [2024-09-29 16:14:18.518346] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:17.975 [2024-09-29 16:14:18.518479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3021073 ] 00:06:18.234 [2024-09-29 16:14:18.651730] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.492 [2024-09-29 16:14:18.914490] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.427 16:14:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:19.427 16:14:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:19.427 16:14:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3021219 00:06:19.427 16:14:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:19.427 16:14:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3021219 /var/tmp/spdk2.sock 
00:06:19.427 16:14:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:19.427 16:14:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3021219 /var/tmp/spdk2.sock 00:06:19.427 16:14:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:19.427 16:14:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.427 16:14:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:19.427 16:14:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.427 16:14:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3021219 /var/tmp/spdk2.sock 00:06:19.427 16:14:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3021219 ']' 00:06:19.427 16:14:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.427 16:14:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:19.427 16:14:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:19.427 16:14:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:19.427 16:14:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.427 [2024-09-29 16:14:19.978881] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:19.427 [2024-09-29 16:14:19.979018] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3021219 ] 00:06:19.685 [2024-09-29 16:14:20.173458] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3021073 has claimed it. 00:06:19.685 [2024-09-29 16:14:20.173559] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:20.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3021219) - No such process 00:06:20.251 ERROR: process (pid: 3021219) is no longer running 00:06:20.251 16:14:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:20.251 16:14:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:20.251 16:14:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:20.251 16:14:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:20.251 16:14:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:20.251 16:14:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:20.251 16:14:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3021073 00:06:20.251 16:14:20 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3021073 00:06:20.251 16:14:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:20.509 lslocks: write error 00:06:20.509 16:14:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3021073 00:06:20.509 16:14:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3021073 ']' 00:06:20.509 16:14:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3021073 00:06:20.509 16:14:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:20.509 16:14:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:20.509 16:14:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3021073 00:06:20.509 16:14:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:20.509 16:14:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:20.509 16:14:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3021073' 00:06:20.509 killing process with pid 3021073 00:06:20.509 16:14:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3021073 00:06:20.509 16:14:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3021073 00:06:23.795 00:06:23.795 real 0m5.245s 00:06:23.795 user 0m5.521s 00:06:23.795 sys 0m0.983s 00:06:23.795 16:14:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.795 16:14:23 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:06:23.795 ************************************ 00:06:23.795 END TEST locking_app_on_locked_coremask 00:06:23.795 ************************************ 00:06:23.795 16:14:23 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:23.795 16:14:23 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:23.795 16:14:23 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.795 16:14:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.795 ************************************ 00:06:23.795 START TEST locking_overlapped_coremask 00:06:23.795 ************************************ 00:06:23.795 16:14:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:23.795 16:14:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3021660 00:06:23.795 16:14:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:23.795 16:14:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3021660 /var/tmp/spdk.sock 00:06:23.795 16:14:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 3021660 ']' 00:06:23.795 16:14:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.795 16:14:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:23.795 16:14:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:23.795 16:14:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:23.795 16:14:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.795 [2024-09-29 16:14:23.815187] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:23.795 [2024-09-29 16:14:23.815343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3021660 ] 00:06:23.795 [2024-09-29 16:14:23.943356] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:23.795 [2024-09-29 16:14:24.196640] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.795 [2024-09-29 16:14:24.196718] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.795 [2024-09-29 16:14:24.196720] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.729 16:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:24.729 16:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:24.729 16:14:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3021918 00:06:24.729 16:14:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3021918 /var/tmp/spdk2.sock 00:06:24.729 16:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:24.729 16:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3021918 /var/tmp/spdk2.sock 00:06:24.729 16:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:24.730 16:14:25 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:24.730 16:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.730 16:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:24.730 16:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.730 16:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3021918 /var/tmp/spdk2.sock 00:06:24.730 16:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 3021918 ']' 00:06:24.730 16:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.730 16:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:24.730 16:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:24.730 16:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:24.730 16:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.730 [2024-09-29 16:14:25.260402] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:24.730 [2024-09-29 16:14:25.260549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3021918 ] 00:06:24.988 [2024-09-29 16:14:25.433923] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3021660 has claimed it. 00:06:24.988 [2024-09-29 16:14:25.434036] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:25.553 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3021918) - No such process 00:06:25.553 ERROR: process (pid: 3021918) is no longer running 00:06:25.553 16:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:25.553 16:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:25.553 16:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:25.553 16:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:25.553 16:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:25.553 16:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:25.553 16:14:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:25.553 16:14:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:25.553 16:14:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:25.553 16:14:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:25.553 16:14:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3021660 00:06:25.553 16:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 3021660 ']' 00:06:25.553 16:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 3021660 00:06:25.553 16:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:25.553 16:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:25.553 16:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3021660 00:06:25.553 16:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:25.553 16:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:25.553 16:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3021660' 00:06:25.553 killing process with pid 3021660 00:06:25.553 16:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 3021660 00:06:25.553 16:14:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 3021660 00:06:28.080 00:06:28.080 real 0m4.597s 00:06:28.080 user 0m12.067s 00:06:28.080 sys 0m0.769s 00:06:28.080 16:14:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.080 16:14:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.080 
************************************ 00:06:28.080 END TEST locking_overlapped_coremask 00:06:28.080 ************************************ 00:06:28.080 16:14:28 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:28.080 16:14:28 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:28.080 16:14:28 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.080 16:14:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.080 ************************************ 00:06:28.080 START TEST locking_overlapped_coremask_via_rpc 00:06:28.080 ************************************ 00:06:28.080 16:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:28.080 16:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3022232 00:06:28.080 16:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:28.080 16:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3022232 /var/tmp/spdk.sock 00:06:28.080 16:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3022232 ']' 00:06:28.080 16:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.080 16:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:28.080 16:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:28.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.080 16:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:28.080 16:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.080 [2024-09-29 16:14:28.467564] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:28.080 [2024-09-29 16:14:28.467720] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3022232 ] 00:06:28.080 [2024-09-29 16:14:28.598104] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:28.080 [2024-09-29 16:14:28.598187] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:28.338 [2024-09-29 16:14:28.866165] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.338 [2024-09-29 16:14:28.866214] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.338 [2024-09-29 16:14:28.866226] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.711 16:14:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:29.711 16:14:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:29.711 16:14:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3022490 00:06:29.711 16:14:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3022490 /var/tmp/spdk2.sock 00:06:29.711 16:14:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:06:29.711 16:14:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3022490 ']' 00:06:29.711 16:14:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.711 16:14:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:29.711 16:14:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:29.711 16:14:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:29.711 16:14:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.711 [2024-09-29 16:14:29.939225] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:29.711 [2024-09-29 16:14:29.939364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3022490 ] 00:06:29.711 [2024-09-29 16:14:30.123826] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:29.711 [2024-09-29 16:14:30.123901] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:30.277 [2024-09-29 16:14:30.619592] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:30.277 [2024-09-29 16:14:30.622742] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.277 [2024-09-29 16:14:30.622751] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:06:32.206 16:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.206 16:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:32.206 16:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:32.206 16:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.206 16:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.206 16:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.206 16:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:32.206 16:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:32.206 16:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:32.207 16:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:32.207 16:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.207 16:14:32 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:32.207 16:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.207 16:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:32.207 16:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.207 16:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.207 [2024-09-29 16:14:32.719861] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3022232 has claimed it. 00:06:32.207 request: 00:06:32.207 { 00:06:32.207 "method": "framework_enable_cpumask_locks", 00:06:32.207 "req_id": 1 00:06:32.207 } 00:06:32.207 Got JSON-RPC error response 00:06:32.207 response: 00:06:32.207 { 00:06:32.207 "code": -32603, 00:06:32.207 "message": "Failed to claim CPU core: 2" 00:06:32.207 } 00:06:32.207 16:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:32.207 16:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:32.207 16:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:32.207 16:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:32.207 16:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:32.207 16:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3022232 /var/tmp/spdk.sock 00:06:32.207 16:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 
-- # '[' -z 3022232 ']' 00:06:32.207 16:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.207 16:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:32.207 16:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.207 16:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:32.207 16:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.773 16:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.773 16:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:32.773 16:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3022490 /var/tmp/spdk2.sock 00:06:32.773 16:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3022490 ']' 00:06:32.773 16:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:32.773 16:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:32.773 16:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:32.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:32.773 16:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:32.773 16:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.773 16:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.773 16:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:32.773 16:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:32.773 16:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:32.773 16:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:32.773 16:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:32.773 00:06:32.773 real 0m4.930s 00:06:32.773 user 0m1.704s 00:06:32.773 sys 0m0.289s 00:06:32.773 16:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.773 16:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.773 ************************************ 00:06:32.773 END TEST locking_overlapped_coremask_via_rpc 00:06:32.773 ************************************ 00:06:32.773 16:14:33 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:32.773 16:14:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3022232 ]] 00:06:32.773 16:14:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 3022232 00:06:32.773 16:14:33 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3022232 ']' 00:06:32.773 16:14:33 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3022232 00:06:32.773 16:14:33 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:32.773 16:14:33 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:32.773 16:14:33 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3022232 00:06:33.031 16:14:33 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:33.031 16:14:33 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:33.031 16:14:33 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3022232' 00:06:33.031 killing process with pid 3022232 00:06:33.031 16:14:33 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 3022232 00:06:33.031 16:14:33 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 3022232 00:06:35.561 16:14:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3022490 ]] 00:06:35.561 16:14:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3022490 00:06:35.561 16:14:35 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3022490 ']' 00:06:35.561 16:14:35 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3022490 00:06:35.561 16:14:35 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:35.561 16:14:35 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:35.561 16:14:35 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3022490 00:06:35.561 16:14:35 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:35.561 16:14:35 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:35.561 16:14:35 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
3022490' 00:06:35.561 killing process with pid 3022490 00:06:35.561 16:14:35 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 3022490 00:06:35.561 16:14:35 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 3022490 00:06:38.091 16:14:38 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:38.091 16:14:38 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:38.091 16:14:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3022232 ]] 00:06:38.091 16:14:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3022232 00:06:38.091 16:14:38 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3022232 ']' 00:06:38.091 16:14:38 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3022232 00:06:38.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3022232) - No such process 00:06:38.091 16:14:38 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 3022232 is not found' 00:06:38.091 Process with pid 3022232 is not found 00:06:38.091 16:14:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3022490 ]] 00:06:38.091 16:14:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3022490 00:06:38.091 16:14:38 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3022490 ']' 00:06:38.091 16:14:38 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3022490 00:06:38.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3022490) - No such process 00:06:38.091 16:14:38 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 3022490 is not found' 00:06:38.091 Process with pid 3022490 is not found 00:06:38.091 16:14:38 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:38.091 00:06:38.091 real 0m54.278s 00:06:38.091 user 1m30.716s 00:06:38.091 sys 0m8.102s 00:06:38.091 16:14:38 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.091 
16:14:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.091 ************************************ 00:06:38.091 END TEST cpu_locks 00:06:38.091 ************************************ 00:06:38.091 00:06:38.091 real 1m27.556s 00:06:38.091 user 2m33.459s 00:06:38.091 sys 0m12.804s 00:06:38.091 16:14:38 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.091 16:14:38 event -- common/autotest_common.sh@10 -- # set +x 00:06:38.091 ************************************ 00:06:38.091 END TEST event 00:06:38.091 ************************************ 00:06:38.091 16:14:38 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:38.091 16:14:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:38.091 16:14:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.091 16:14:38 -- common/autotest_common.sh@10 -- # set +x 00:06:38.091 ************************************ 00:06:38.091 START TEST thread 00:06:38.091 ************************************ 00:06:38.091 16:14:38 thread -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:38.091 * Looking for test storage... 
00:06:38.091 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:38.091 16:14:38 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:38.091 16:14:38 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:06:38.091 16:14:38 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:38.091 16:14:38 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:38.091 16:14:38 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.091 16:14:38 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.091 16:14:38 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.091 16:14:38 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.091 16:14:38 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.091 16:14:38 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.091 16:14:38 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.091 16:14:38 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.091 16:14:38 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.091 16:14:38 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.091 16:14:38 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.091 16:14:38 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:38.091 16:14:38 thread -- scripts/common.sh@345 -- # : 1 00:06:38.091 16:14:38 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.091 16:14:38 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:38.091 16:14:38 thread -- scripts/common.sh@365 -- # decimal 1 00:06:38.091 16:14:38 thread -- scripts/common.sh@353 -- # local d=1 00:06:38.091 16:14:38 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.091 16:14:38 thread -- scripts/common.sh@355 -- # echo 1 00:06:38.091 16:14:38 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.091 16:14:38 thread -- scripts/common.sh@366 -- # decimal 2 00:06:38.091 16:14:38 thread -- scripts/common.sh@353 -- # local d=2 00:06:38.091 16:14:38 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.091 16:14:38 thread -- scripts/common.sh@355 -- # echo 2 00:06:38.091 16:14:38 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.091 16:14:38 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.091 16:14:38 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.091 16:14:38 thread -- scripts/common.sh@368 -- # return 0 00:06:38.091 16:14:38 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.091 16:14:38 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:38.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.091 --rc genhtml_branch_coverage=1 00:06:38.091 --rc genhtml_function_coverage=1 00:06:38.091 --rc genhtml_legend=1 00:06:38.091 --rc geninfo_all_blocks=1 00:06:38.091 --rc geninfo_unexecuted_blocks=1 00:06:38.091 00:06:38.091 ' 00:06:38.091 16:14:38 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:38.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.091 --rc genhtml_branch_coverage=1 00:06:38.091 --rc genhtml_function_coverage=1 00:06:38.091 --rc genhtml_legend=1 00:06:38.091 --rc geninfo_all_blocks=1 00:06:38.091 --rc geninfo_unexecuted_blocks=1 00:06:38.091 00:06:38.091 ' 00:06:38.091 16:14:38 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:38.091 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.091 --rc genhtml_branch_coverage=1 00:06:38.091 --rc genhtml_function_coverage=1 00:06:38.091 --rc genhtml_legend=1 00:06:38.091 --rc geninfo_all_blocks=1 00:06:38.091 --rc geninfo_unexecuted_blocks=1 00:06:38.091 00:06:38.091 ' 00:06:38.091 16:14:38 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:38.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.091 --rc genhtml_branch_coverage=1 00:06:38.091 --rc genhtml_function_coverage=1 00:06:38.091 --rc genhtml_legend=1 00:06:38.091 --rc geninfo_all_blocks=1 00:06:38.091 --rc geninfo_unexecuted_blocks=1 00:06:38.091 00:06:38.091 ' 00:06:38.091 16:14:38 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:38.091 16:14:38 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:38.091 16:14:38 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.091 16:14:38 thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.091 ************************************ 00:06:38.091 START TEST thread_poller_perf 00:06:38.091 ************************************ 00:06:38.091 16:14:38 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:38.091 [2024-09-29 16:14:38.432088] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
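The `lt 1.15 2` check traced above (`cmp_versions` in `scripts/common.sh`) splits both version strings into fields and compares them numerically, field by field. A rough standalone sketch of that logic — simplified, since the real helper also splits on `-` and `:` and handles other operators:

```shell
# Rough sketch of the version comparison traced above (lt 1.15 2):
# split both versions on dots, then compare fields numerically,
# treating missing fields as 0.
version_lt() {
    local IFS=.
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} )) v
    for (( v = 0; v < n; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly lower
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly higher
    done
    return 1    # equal versions are not "lower"
}
```

For the logged case, `1.15` vs `2` is decided on the first field (1 < 2), which is why the lcov 1.15 branch of the coverage options is taken.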
00:06:38.092 [2024-09-29 16:14:38.432215] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3023530 ] 00:06:38.092 [2024-09-29 16:14:38.564286] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.350 [2024-09-29 16:14:38.807774] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.350 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:39.725 ====================================== 00:06:39.725 busy:2714457449 (cyc) 00:06:39.725 total_run_count: 284000 00:06:39.725 tsc_hz: 2700000000 (cyc) 00:06:39.725 ====================================== 00:06:39.725 poller_cost: 9557 (cyc), 3539 (nsec) 00:06:39.725 00:06:39.725 real 0m1.860s 00:06:39.725 user 0m1.688s 00:06:39.725 sys 0m0.162s 00:06:39.725 16:14:40 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.725 16:14:40 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:39.725 ************************************ 00:06:39.725 END TEST thread_poller_perf 00:06:39.725 ************************************ 00:06:39.725 16:14:40 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:39.725 16:14:40 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:39.725 16:14:40 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.725 16:14:40 thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.983 ************************************ 00:06:39.983 START TEST thread_poller_perf 00:06:39.983 ************************************ 00:06:39.983 16:14:40 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:39.983 [2024-09-29 16:14:40.342543] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:39.983 [2024-09-29 16:14:40.342692] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3023817 ] 00:06:39.983 [2024-09-29 16:14:40.477476] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.241 [2024-09-29 16:14:40.738475] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.241 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:41.614 ====================================== 00:06:41.614 busy:2705445621 (cyc) 00:06:41.614 total_run_count: 3750000 00:06:41.614 tsc_hz: 2700000000 (cyc) 00:06:41.614 ====================================== 00:06:41.614 poller_cost: 721 (cyc), 267 (nsec) 00:06:41.873 00:06:41.873 real 0m1.879s 00:06:41.873 user 0m1.699s 00:06:41.873 sys 0m0.170s 00:06:41.873 16:14:42 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.873 16:14:42 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:41.873 ************************************ 00:06:41.873 END TEST thread_poller_perf 00:06:41.873 ************************************ 00:06:41.873 16:14:42 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:41.873 00:06:41.873 real 0m3.970s 00:06:41.873 user 0m3.517s 00:06:41.873 sys 0m0.446s 00:06:41.873 16:14:42 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.873 16:14:42 thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.873 ************************************ 00:06:41.873 END TEST thread 00:06:41.873 ************************************ 00:06:41.873 16:14:42 -- spdk/autotest.sh@171 -- 
# [[ 0 -eq 1 ]] 00:06:41.873 16:14:42 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:41.873 16:14:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.873 16:14:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.873 16:14:42 -- common/autotest_common.sh@10 -- # set +x 00:06:41.873 ************************************ 00:06:41.873 START TEST app_cmdline 00:06:41.873 ************************************ 00:06:41.873 16:14:42 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:41.873 * Looking for test storage... 00:06:41.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:41.873 16:14:42 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:41.873 16:14:42 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:06:41.873 16:14:42 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:41.873 16:14:42 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:41.873 16:14:42 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.873 16:14:42 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.873 16:14:42 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.873 16:14:42 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.873 16:14:42 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.873 16:14:42 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.873 16:14:42 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.873 16:14:42 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.873 16:14:42 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.873 16:14:42 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.873 16:14:42 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:06:41.873 16:14:42 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:41.873 16:14:42 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:41.873 16:14:42 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.873 16:14:42 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:41.873 16:14:42 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:41.873 16:14:42 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:41.873 16:14:42 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.873 16:14:42 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:41.873 16:14:42 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.873 16:14:42 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:41.873 16:14:42 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:41.873 16:14:42 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.873 16:14:42 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:41.873 16:14:42 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.874 16:14:42 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.874 16:14:42 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.874 16:14:42 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:41.874 16:14:42 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.874 16:14:42 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:41.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.874 --rc genhtml_branch_coverage=1 00:06:41.874 --rc genhtml_function_coverage=1 00:06:41.874 --rc genhtml_legend=1 00:06:41.874 --rc geninfo_all_blocks=1 00:06:41.874 --rc geninfo_unexecuted_blocks=1 00:06:41.874 00:06:41.874 ' 00:06:41.874 16:14:42 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:41.874 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.874 --rc genhtml_branch_coverage=1 00:06:41.874 --rc genhtml_function_coverage=1 00:06:41.874 --rc genhtml_legend=1 00:06:41.874 --rc geninfo_all_blocks=1 00:06:41.874 --rc geninfo_unexecuted_blocks=1 00:06:41.874 00:06:41.874 ' 00:06:41.874 16:14:42 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:41.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.874 --rc genhtml_branch_coverage=1 00:06:41.874 --rc genhtml_function_coverage=1 00:06:41.874 --rc genhtml_legend=1 00:06:41.874 --rc geninfo_all_blocks=1 00:06:41.874 --rc geninfo_unexecuted_blocks=1 00:06:41.874 00:06:41.874 ' 00:06:41.874 16:14:42 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:41.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.874 --rc genhtml_branch_coverage=1 00:06:41.874 --rc genhtml_function_coverage=1 00:06:41.874 --rc genhtml_legend=1 00:06:41.874 --rc geninfo_all_blocks=1 00:06:41.874 --rc geninfo_unexecuted_blocks=1 00:06:41.874 00:06:41.874 ' 00:06:41.874 16:14:42 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:41.874 16:14:42 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3024147 00:06:41.874 16:14:42 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:41.874 16:14:42 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3024147 00:06:41.874 16:14:42 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 3024147 ']' 00:06:41.874 16:14:42 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.874 16:14:42 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:41.874 16:14:42 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:41.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.874 16:14:42 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:41.874 16:14:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:42.132 [2024-09-29 16:14:42.493182] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:42.132 [2024-09-29 16:14:42.493331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3024147 ] 00:06:42.132 [2024-09-29 16:14:42.624523] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.391 [2024-09-29 16:14:42.880608] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.325 16:14:43 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:43.325 16:14:43 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:43.325 16:14:43 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:43.583 { 00:06:43.583 "version": "SPDK v25.01-pre git sha1 09cc66129", 00:06:43.583 "fields": { 00:06:43.583 "major": 25, 00:06:43.583 "minor": 1, 00:06:43.583 "patch": 0, 00:06:43.583 "suffix": "-pre", 00:06:43.583 "commit": "09cc66129" 00:06:43.583 } 00:06:43.583 } 00:06:43.583 16:14:44 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:43.583 16:14:44 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:43.583 16:14:44 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:43.583 16:14:44 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:43.583 16:14:44 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:43.583 16:14:44 app_cmdline -- app/cmdline.sh@26 -- 
# rpc_cmd rpc_get_methods 00:06:43.583 16:14:44 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:43.583 16:14:44 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.583 16:14:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:43.583 16:14:44 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.583 16:14:44 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:43.583 16:14:44 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:43.583 16:14:44 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:43.583 16:14:44 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:43.583 16:14:44 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:43.583 16:14:44 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:43.583 16:14:44 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.583 16:14:44 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:43.583 16:14:44 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.583 16:14:44 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:43.583 16:14:44 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.583 16:14:44 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:43.583 16:14:44 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:43.583 16:14:44 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:44.149 request: 00:06:44.149 { 00:06:44.149 "method": "env_dpdk_get_mem_stats", 00:06:44.149 "req_id": 1 00:06:44.149 } 00:06:44.149 Got JSON-RPC error response 00:06:44.149 response: 00:06:44.149 { 00:06:44.149 "code": -32601, 00:06:44.149 "message": "Method not found" 00:06:44.149 } 00:06:44.149 16:14:44 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:44.149 16:14:44 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:44.149 16:14:44 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:44.149 16:14:44 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:44.149 16:14:44 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3024147 00:06:44.149 16:14:44 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 3024147 ']' 00:06:44.149 16:14:44 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 3024147 00:06:44.149 16:14:44 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:44.149 16:14:44 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:44.149 16:14:44 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3024147 00:06:44.149 16:14:44 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:44.149 16:14:44 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:44.149 16:14:44 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3024147' 00:06:44.149 killing process with pid 3024147 00:06:44.149 16:14:44 app_cmdline -- common/autotest_common.sh@969 -- # kill 3024147 00:06:44.149 16:14:44 app_cmdline -- common/autotest_common.sh@974 -- # wait 3024147 00:06:46.678 00:06:46.678 real 0m4.820s 00:06:46.678 user 0m5.257s 00:06:46.678 sys 
0m0.709s 00:06:46.678 16:14:47 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:46.678 16:14:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:46.678 ************************************ 00:06:46.678 END TEST app_cmdline 00:06:46.678 ************************************ 00:06:46.678 16:14:47 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:46.678 16:14:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:46.678 16:14:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:46.678 16:14:47 -- common/autotest_common.sh@10 -- # set +x 00:06:46.678 ************************************ 00:06:46.678 START TEST version 00:06:46.678 ************************************ 00:06:46.678 16:14:47 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:46.678 * Looking for test storage... 00:06:46.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:46.678 16:14:47 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:46.678 16:14:47 version -- common/autotest_common.sh@1681 -- # lcov --version 00:06:46.678 16:14:47 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:46.937 16:14:47 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:46.937 16:14:47 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.937 16:14:47 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.937 16:14:47 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.937 16:14:47 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.937 16:14:47 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.937 16:14:47 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.937 16:14:47 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.937 16:14:47 version -- scripts/common.sh@338 -- # local 
'op=<' 00:06:46.937 16:14:47 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.937 16:14:47 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.937 16:14:47 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.937 16:14:47 version -- scripts/common.sh@344 -- # case "$op" in 00:06:46.937 16:14:47 version -- scripts/common.sh@345 -- # : 1 00:06:46.937 16:14:47 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.937 16:14:47 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:46.937 16:14:47 version -- scripts/common.sh@365 -- # decimal 1 00:06:46.937 16:14:47 version -- scripts/common.sh@353 -- # local d=1 00:06:46.937 16:14:47 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.937 16:14:47 version -- scripts/common.sh@355 -- # echo 1 00:06:46.937 16:14:47 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.937 16:14:47 version -- scripts/common.sh@366 -- # decimal 2 00:06:46.937 16:14:47 version -- scripts/common.sh@353 -- # local d=2 00:06:46.937 16:14:47 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.937 16:14:47 version -- scripts/common.sh@355 -- # echo 2 00:06:46.937 16:14:47 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.937 16:14:47 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.937 16:14:47 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.937 16:14:47 version -- scripts/common.sh@368 -- # return 0 00:06:46.937 16:14:47 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.937 16:14:47 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:46.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.937 --rc genhtml_branch_coverage=1 00:06:46.937 --rc genhtml_function_coverage=1 00:06:46.937 --rc genhtml_legend=1 00:06:46.937 --rc geninfo_all_blocks=1 00:06:46.937 --rc 
geninfo_unexecuted_blocks=1 00:06:46.937 00:06:46.937 ' 00:06:46.937 16:14:47 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:46.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.937 --rc genhtml_branch_coverage=1 00:06:46.937 --rc genhtml_function_coverage=1 00:06:46.937 --rc genhtml_legend=1 00:06:46.937 --rc geninfo_all_blocks=1 00:06:46.938 --rc geninfo_unexecuted_blocks=1 00:06:46.938 00:06:46.938 ' 00:06:46.938 16:14:47 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:46.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.938 --rc genhtml_branch_coverage=1 00:06:46.938 --rc genhtml_function_coverage=1 00:06:46.938 --rc genhtml_legend=1 00:06:46.938 --rc geninfo_all_blocks=1 00:06:46.938 --rc geninfo_unexecuted_blocks=1 00:06:46.938 00:06:46.938 ' 00:06:46.938 16:14:47 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:46.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.938 --rc genhtml_branch_coverage=1 00:06:46.938 --rc genhtml_function_coverage=1 00:06:46.938 --rc genhtml_legend=1 00:06:46.938 --rc geninfo_all_blocks=1 00:06:46.938 --rc geninfo_unexecuted_blocks=1 00:06:46.938 00:06:46.938 ' 00:06:46.938 16:14:47 version -- app/version.sh@17 -- # get_header_version major 00:06:46.938 16:14:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:46.938 16:14:47 version -- app/version.sh@14 -- # cut -f2 00:06:46.938 16:14:47 version -- app/version.sh@14 -- # tr -d '"' 00:06:46.938 16:14:47 version -- app/version.sh@17 -- # major=25 00:06:46.938 16:14:47 version -- app/version.sh@18 -- # get_header_version minor 00:06:46.938 16:14:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:46.938 16:14:47 version -- app/version.sh@14 -- 
# cut -f2 00:06:46.938 16:14:47 version -- app/version.sh@14 -- # tr -d '"' 00:06:46.938 16:14:47 version -- app/version.sh@18 -- # minor=1 00:06:46.938 16:14:47 version -- app/version.sh@19 -- # get_header_version patch 00:06:46.938 16:14:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:46.938 16:14:47 version -- app/version.sh@14 -- # cut -f2 00:06:46.938 16:14:47 version -- app/version.sh@14 -- # tr -d '"' 00:06:46.938 16:14:47 version -- app/version.sh@19 -- # patch=0 00:06:46.938 16:14:47 version -- app/version.sh@20 -- # get_header_version suffix 00:06:46.938 16:14:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:46.938 16:14:47 version -- app/version.sh@14 -- # cut -f2 00:06:46.938 16:14:47 version -- app/version.sh@14 -- # tr -d '"' 00:06:46.938 16:14:47 version -- app/version.sh@20 -- # suffix=-pre 00:06:46.938 16:14:47 version -- app/version.sh@22 -- # version=25.1 00:06:46.938 16:14:47 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:46.938 16:14:47 version -- app/version.sh@28 -- # version=25.1rc0 00:06:46.938 16:14:47 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:46.938 16:14:47 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:46.938 16:14:47 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:46.938 16:14:47 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:46.938 00:06:46.938 real 0m0.208s 00:06:46.938 user 0m0.156s 00:06:46.938 sys 
0m0.077s 00:06:46.938 16:14:47 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:46.938 16:14:47 version -- common/autotest_common.sh@10 -- # set +x 00:06:46.938 ************************************ 00:06:46.938 END TEST version 00:06:46.938 ************************************ 00:06:46.938 16:14:47 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:46.938 16:14:47 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:46.938 16:14:47 -- spdk/autotest.sh@194 -- # uname -s 00:06:46.938 16:14:47 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:46.938 16:14:47 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:46.938 16:14:47 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:46.938 16:14:47 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:46.938 16:14:47 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:06:46.938 16:14:47 -- spdk/autotest.sh@256 -- # timing_exit lib 00:06:46.938 16:14:47 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:46.938 16:14:47 -- common/autotest_common.sh@10 -- # set +x 00:06:46.938 16:14:47 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:06:46.938 16:14:47 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:06:46.938 16:14:47 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:06:46.938 16:14:47 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:06:46.938 16:14:47 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:06:46.938 16:14:47 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:06:46.938 16:14:47 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:46.938 16:14:47 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:46.938 16:14:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:46.938 16:14:47 -- common/autotest_common.sh@10 -- # set +x 00:06:46.938 ************************************ 00:06:46.938 START TEST nvmf_tcp 00:06:46.938 ************************************ 00:06:46.938 16:14:47 nvmf_tcp -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:46.938 * Looking for test storage... 00:06:46.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:46.938 16:14:47 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:46.938 16:14:47 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:06:46.938 16:14:47 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:47.197 16:14:47 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:47.197 16:14:47 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.197 16:14:47 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.197 16:14:47 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.197 16:14:47 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.197 16:14:47 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.197 16:14:47 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.197 16:14:47 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.197 16:14:47 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.197 16:14:47 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.197 16:14:47 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.197 16:14:47 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.197 16:14:47 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:47.197 16:14:47 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:47.197 16:14:47 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.197 16:14:47 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:47.197 16:14:47 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:47.197 16:14:47 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:47.197 16:14:47 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.197 16:14:47 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:47.197 16:14:47 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.197 16:14:47 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:47.197 16:14:47 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:47.197 16:14:47 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.197 16:14:47 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:47.197 16:14:47 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.197 16:14:47 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.197 16:14:47 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.197 16:14:47 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:47.197 16:14:47 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.197 16:14:47 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:47.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.197 --rc genhtml_branch_coverage=1 00:06:47.197 --rc genhtml_function_coverage=1 00:06:47.197 --rc genhtml_legend=1 00:06:47.197 --rc geninfo_all_blocks=1 00:06:47.197 --rc geninfo_unexecuted_blocks=1 00:06:47.197 00:06:47.197 ' 00:06:47.197 16:14:47 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:47.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.197 --rc genhtml_branch_coverage=1 00:06:47.197 --rc genhtml_function_coverage=1 00:06:47.197 --rc genhtml_legend=1 00:06:47.197 --rc geninfo_all_blocks=1 00:06:47.198 --rc geninfo_unexecuted_blocks=1 00:06:47.198 00:06:47.198 ' 00:06:47.198 16:14:47 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:06:47.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.198 --rc genhtml_branch_coverage=1 00:06:47.198 --rc genhtml_function_coverage=1 00:06:47.198 --rc genhtml_legend=1 00:06:47.198 --rc geninfo_all_blocks=1 00:06:47.198 --rc geninfo_unexecuted_blocks=1 00:06:47.198 00:06:47.198 ' 00:06:47.198 16:14:47 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:47.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.198 --rc genhtml_branch_coverage=1 00:06:47.198 --rc genhtml_function_coverage=1 00:06:47.198 --rc genhtml_legend=1 00:06:47.198 --rc geninfo_all_blocks=1 00:06:47.198 --rc geninfo_unexecuted_blocks=1 00:06:47.198 00:06:47.198 ' 00:06:47.198 16:14:47 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:47.198 16:14:47 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:47.198 16:14:47 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:47.198 16:14:47 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:47.198 16:14:47 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.198 16:14:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:47.198 ************************************ 00:06:47.198 START TEST nvmf_target_core 00:06:47.198 ************************************ 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:47.198 * Looking for test storage... 
00:06:47.198 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:47.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.198 --rc genhtml_branch_coverage=1 00:06:47.198 --rc genhtml_function_coverage=1 00:06:47.198 --rc genhtml_legend=1 00:06:47.198 --rc geninfo_all_blocks=1 00:06:47.198 --rc geninfo_unexecuted_blocks=1 00:06:47.198 00:06:47.198 ' 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:47.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.198 --rc genhtml_branch_coverage=1 
00:06:47.198 --rc genhtml_function_coverage=1 00:06:47.198 --rc genhtml_legend=1 00:06:47.198 --rc geninfo_all_blocks=1 00:06:47.198 --rc geninfo_unexecuted_blocks=1 00:06:47.198 00:06:47.198 ' 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:47.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.198 --rc genhtml_branch_coverage=1 00:06:47.198 --rc genhtml_function_coverage=1 00:06:47.198 --rc genhtml_legend=1 00:06:47.198 --rc geninfo_all_blocks=1 00:06:47.198 --rc geninfo_unexecuted_blocks=1 00:06:47.198 00:06:47.198 ' 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:47.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.198 --rc genhtml_branch_coverage=1 00:06:47.198 --rc genhtml_function_coverage=1 00:06:47.198 --rc genhtml_legend=1 00:06:47.198 --rc geninfo_all_blocks=1 00:06:47.198 --rc geninfo_unexecuted_blocks=1 00:06:47.198 00:06:47.198 ' 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:47.198 16:14:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:47.199 16:14:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:47.199 16:14:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:47.199 16:14:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:47.199 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:47.199 16:14:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:06:47.199 16:14:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:47.199 16:14:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:47.199 16:14:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:47.199 16:14:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:47.199 16:14:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:47.199 16:14:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:47.199 16:14:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:47.199 16:14:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.199 16:14:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:47.199 ************************************ 00:06:47.199 START TEST nvmf_abort 00:06:47.199 ************************************ 00:06:47.199 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:47.458 * Looking for test storage... 
00:06:47.458 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:47.458 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:47.458 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:06:47.458 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:47.458 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:47.458 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.458 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.458 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.458 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.458 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.458 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.458 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.458 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.458 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.458 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.458 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.459 
16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:47.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.459 --rc genhtml_branch_coverage=1 00:06:47.459 --rc genhtml_function_coverage=1 00:06:47.459 --rc genhtml_legend=1 00:06:47.459 --rc geninfo_all_blocks=1 00:06:47.459 --rc 
geninfo_unexecuted_blocks=1 00:06:47.459 00:06:47.459 ' 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:47.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.459 --rc genhtml_branch_coverage=1 00:06:47.459 --rc genhtml_function_coverage=1 00:06:47.459 --rc genhtml_legend=1 00:06:47.459 --rc geninfo_all_blocks=1 00:06:47.459 --rc geninfo_unexecuted_blocks=1 00:06:47.459 00:06:47.459 ' 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:47.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.459 --rc genhtml_branch_coverage=1 00:06:47.459 --rc genhtml_function_coverage=1 00:06:47.459 --rc genhtml_legend=1 00:06:47.459 --rc geninfo_all_blocks=1 00:06:47.459 --rc geninfo_unexecuted_blocks=1 00:06:47.459 00:06:47.459 ' 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:47.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.459 --rc genhtml_branch_coverage=1 00:06:47.459 --rc genhtml_function_coverage=1 00:06:47.459 --rc genhtml_legend=1 00:06:47.459 --rc geninfo_all_blocks=1 00:06:47.459 --rc geninfo_unexecuted_blocks=1 00:06:47.459 00:06:47.459 ' 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:47.459 16:14:47 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:47.459 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # prepare_net_devs 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@434 -- # local -g is_hw=no 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # remove_spdk_ns 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # 
gather_supported_nvmf_pci_devs 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:47.459 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:06:49.993 16:14:49 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:49.993 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:49.993 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:49.993 16:14:49 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:49.993 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:49.993 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:49.993 
16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # is_hw=yes 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr 
flush cvl_0_0 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:49.993 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:49.993 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:49.993 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:49.993 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:49.993 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:49.993 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:49.993 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:49.994 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:49.994 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:49.994 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:49.994 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.384 ms 00:06:49.994 00:06:49.994 --- 10.0.0.2 ping statistics --- 00:06:49.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:49.994 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:06:49.994 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:49.994 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:49.994 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:06:49.994 00:06:49.994 --- 10.0.0.1 ping statistics --- 00:06:49.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:49.994 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:06:49.994 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:49.994 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # return 0 00:06:49.994 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:06:49.994 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:49.994 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:06:49.994 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:06:49.994 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:49.994 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:06:49.994 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:06:49.994 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:49.994 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:06:49.994 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:06:49.994 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:49.994 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # nvmfpid=3026632 00:06:49.994 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:49.994 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # waitforlisten 3026632 00:06:49.994 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 3026632 ']' 00:06:49.994 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.994 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:49.994 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.994 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:49.994 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:49.994 [2024-09-29 16:14:50.253564] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:49.994 [2024-09-29 16:14:50.253776] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:49.994 [2024-09-29 16:14:50.397749] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:50.251 [2024-09-29 16:14:50.658606] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:50.251 [2024-09-29 16:14:50.658696] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:50.251 [2024-09-29 16:14:50.658725] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:50.251 [2024-09-29 16:14:50.658749] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:50.251 [2024-09-29 16:14:50.658768] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:50.251 [2024-09-29 16:14:50.658914] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.251 [2024-09-29 16:14:50.658995] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.251 [2024-09-29 16:14:50.659000] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.817 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:50.817 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:06:50.817 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:06:50.817 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:50.817 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:50.817 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:50.817 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:50.817 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.817 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:50.817 [2024-09-29 16:14:51.289610] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:50.817 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.817 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:50.817 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.817 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:50.817 Malloc0 00:06:50.817 16:14:51 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.817 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:50.817 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.817 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:51.075 Delay0 00:06:51.075 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.076 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:51.076 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.076 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:51.076 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.076 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:51.076 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.076 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:51.076 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.076 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:51.076 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.076 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:51.076 [2024-09-29 16:14:51.400837] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:51.076 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.076 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:51.076 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.076 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:51.076 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.076 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:51.076 [2024-09-29 16:14:51.609854] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:53.672 Initializing NVMe Controllers 00:06:53.672 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:53.672 controller IO queue size 128 less than required 00:06:53.672 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:53.672 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:53.673 Initialization complete. Launching workers. 
00:06:53.673 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 20510 00:06:53.673 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 20571, failed to submit 66 00:06:53.673 success 20510, unsuccessful 61, failed 0 00:06:53.673 16:14:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:53.673 16:14:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.673 16:14:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:53.673 16:14:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.673 16:14:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:53.673 16:14:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:53.673 16:14:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # nvmfcleanup 00:06:53.673 16:14:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:53.673 16:14:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:53.673 16:14:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:53.673 16:14:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:53.673 16:14:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:53.673 rmmod nvme_tcp 00:06:53.673 rmmod nvme_fabrics 00:06:53.673 rmmod nvme_keyring 00:06:53.673 16:14:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:53.673 16:14:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:53.673 16:14:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:53.673 16:14:53 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@513 -- # '[' -n 3026632 ']' 00:06:53.673 16:14:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # killprocess 3026632 00:06:53.673 16:14:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 3026632 ']' 00:06:53.673 16:14:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 3026632 00:06:53.673 16:14:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:06:53.673 16:14:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:53.673 16:14:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3026632 00:06:53.673 16:14:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:53.673 16:14:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:53.673 16:14:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3026632' 00:06:53.673 killing process with pid 3026632 00:06:53.673 16:14:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 3026632 00:06:53.673 16:14:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 3026632 00:06:55.047 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:06:55.047 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:06:55.047 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:06:55.047 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:55.047 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # iptables-save 00:06:55.047 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- 
# grep -v SPDK_NVMF 00:06:55.047 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # iptables-restore 00:06:55.047 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:55.047 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:55.047 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:55.047 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:55.047 16:14:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:56.953 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:56.953 00:06:56.953 real 0m9.542s 00:06:56.953 user 0m15.422s 00:06:56.953 sys 0m2.919s 00:06:56.953 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:56.953 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:56.953 ************************************ 00:06:56.953 END TEST nvmf_abort 00:06:56.953 ************************************ 00:06:56.953 16:14:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:56.953 16:14:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:56.953 16:14:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.953 16:14:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:56.953 ************************************ 00:06:56.953 START TEST nvmf_ns_hotplug_stress 00:06:56.953 ************************************ 00:06:56.953 16:14:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:56.953 * Looking for test storage... 00:06:56.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:56.953 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:56.953 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:06:56.953 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:56.953 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:56.953 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:56.953 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:56.953 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:56.953 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:56.953 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:56.953 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:56.953 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:56.953 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:56.953 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:56.953 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:56.953 
16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:56.953 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:56.953 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:56.953 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:56.953 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:56.953 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:56.953 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:56.953 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:56.953 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:56.953 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:56.953 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:56.953 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:56.953 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:56.953 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:56.953 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:56.953 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:56.953 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:56.953 16:14:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:56.953 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:56.953 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:56.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.953 --rc genhtml_branch_coverage=1 00:06:56.953 --rc genhtml_function_coverage=1 00:06:56.953 --rc genhtml_legend=1 00:06:56.953 --rc geninfo_all_blocks=1 00:06:56.953 --rc geninfo_unexecuted_blocks=1 00:06:56.953 00:06:56.953 ' 00:06:56.953 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:56.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.953 --rc genhtml_branch_coverage=1 00:06:56.953 --rc genhtml_function_coverage=1 00:06:56.953 --rc genhtml_legend=1 00:06:56.953 --rc geninfo_all_blocks=1 00:06:56.953 --rc geninfo_unexecuted_blocks=1 00:06:56.953 00:06:56.953 ' 00:06:56.953 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:56.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.953 --rc genhtml_branch_coverage=1 00:06:56.953 --rc genhtml_function_coverage=1 00:06:56.953 --rc genhtml_legend=1 00:06:56.954 --rc geninfo_all_blocks=1 00:06:56.954 --rc geninfo_unexecuted_blocks=1 00:06:56.954 00:06:56.954 ' 00:06:56.954 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:56.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.954 --rc genhtml_branch_coverage=1 00:06:56.954 --rc genhtml_function_coverage=1 00:06:56.954 --rc genhtml_legend=1 00:06:56.954 --rc geninfo_all_blocks=1 00:06:56.954 --rc geninfo_unexecuted_blocks=1 00:06:56.954 
00:06:56.954 ' 00:06:56.954 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:56.954 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:56.954 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:56.954 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:56.954 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:56.954 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:56.954 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:56.954 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:56.954 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:56.954 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:56.954 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:56.954 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:56.954 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:56.954 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:56.954 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
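The `cmp_versions 1.15 '<' 2` trace above splits each version on `.`, `-` and `:` and compares field by field, which is why lcov 1.15 sorts below 2. A standalone sketch of that comparison (the helper name is illustrative, not the actual `scripts/common.sh` implementation):

```shell
#!/usr/bin/env bash
# Hypothetical re-implementation of the dotted-version "less than" check seen
# in the scripts/common.sh trace: split on '.', '-' and ':' and compare each
# numeric field; missing fields compare as 0.
ver_lt() {
    local -a ver1 ver2
    local IFS=.-:
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        (( d1 < d2 )) && return 0
        (( d1 > d2 )) && return 1
    done
    return 1  # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"   # same outcome as the lcov check in the trace
```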
00:06:57.212 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:57.212 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:57.212 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:57.212 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:57.212 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:57.212 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:57.212 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:57.212 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:57.212 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.212 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.213 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.213 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:57.213 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.213 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:57.213 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:57.213 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:57.213 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:57.213 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:57.213 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:57.213 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:57.213 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:57.213 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:57.213 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:57.213 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:57.213 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
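The `paths/export.sh` trace above prepends the same `/opt/go`, `/opt/protoc` and `/opt/golangci` directories many times over, so the exported `PATH` is mostly duplicates. A `PATH` can be de-duplicated while preserving first-occurrence order with a sketch like this (an illustrative helper, not part of SPDK):

```shell
#!/usr/bin/env bash
# Drop repeated PATH entries, keeping the first occurrence of each directory.
dedup_path() {
    local entry out= seen=:
    local IFS=:
    for entry in $1; do
        # $seen accumulates ":dir:" markers so the substring match is exact
        if [[ $seen != *":$entry:"* ]]; then
            out+=${out:+:}$entry
            seen+=$entry:
        fi
    done
    printf '%s\n' "$out"
}

dedup_path "/opt/go/bin:/usr/bin:/opt/go/bin:/usr/bin:/sbin"
# prints /opt/go/bin:/usr/bin:/sbin
```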
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:57.213 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:57.213 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:06:57.213 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:57.213 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:06:57.213 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:06:57.213 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:06:57.213 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:57.213 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:57.213 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:57.213 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:06:57.213 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:06:57.213 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:57.213 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:59.117 16:14:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 
(0x8086 - 0x159b)' 00:06:59.117 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:59.117 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:06:59.117 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:06:59.117 16:14:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:59.118 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:59.118 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:06:59.118 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:59.118 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:06:59.118 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:59.118 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:59.118 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:59.118 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:59.118 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:59.118 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:59.118 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:59.118 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:06:59.118 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:59.118 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:06:59.118 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:59.118 16:14:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:59.118 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:59.118 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:59.118 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:59.118 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:06:59.118 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # is_hw=yes 00:06:59.118 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:06:59.118 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:06:59.118 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:06:59.118 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:59.118 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:59.118 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:59.118 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:59.118 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:59.118 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:59.118 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:59.118 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 
-- # NVMF_SECOND_TARGET_IP= 00:06:59.118 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:59.118 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:59.118 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:59.118 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:59.118 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:59.118 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:59.118 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:59.376 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:59.377 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:59.377 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:59.377 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:59.377 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:59.377 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:59.377 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment 
--comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:59.377 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:59.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:59.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:06:59.377 00:06:59.377 --- 10.0.0.2 ping statistics --- 00:06:59.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:59.377 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:06:59.377 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:59.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:59.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:06:59.377 00:06:59.377 --- 10.0.0.1 ping statistics --- 00:06:59.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:59.377 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:06:59.377 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:59.377 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # return 0 00:06:59.377 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:06:59.377 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:59.377 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:06:59.377 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:06:59.377 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:59.377 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:06:59.377 16:14:59 
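The `nvmf_tcp_init` sequence traced above builds a two-sided topology: a network namespace for the target, one NIC port moved into it, `10.0.0.1`/`10.0.0.2` assigned, TCP/4420 opened in the firewall with a `SPDK_NVMF:`-tagged comment (so teardown can strip it with `iptables-save | grep -v SPDK_NVMF | iptables-restore`), then a ping in each direction. A dry-run sketch of that setup; interface names and addresses follow the trace, the iptables comment payload is simplified, and passing `echo` keeps it side-effect free since the real commands need root:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the two-namespace TCP test topology from the trace.
# Pass "" (and run as root) to actually apply it; "echo" just prints commands.
setup_topology() {
    local RUN=${1:-echo}
    local NS=cvl_0_0_ns_spdk TGT_IF=cvl_0_0 INIT_IF=cvl_0_1
    local TGT_IP=10.0.0.2 INIT_IP=10.0.0.1

    $RUN ip netns add "$NS"                        # target-side namespace
    $RUN ip link set "$TGT_IF" netns "$NS"         # move the target port into it
    $RUN ip addr add "$INIT_IP/24" dev "$INIT_IF"
    $RUN ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
    $RUN ip link set "$INIT_IF" up
    $RUN ip netns exec "$NS" ip link set "$TGT_IF" up
    $RUN ip netns exec "$NS" ip link set lo up
    # Comment-tag the rule so cleanup can drop it via grep -v SPDK_NVMF
    $RUN iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:allow nvmf tcp port'
    $RUN ping -c 1 "$TGT_IP"                       # initiator -> target
    $RUN ip netns exec "$NS" ping -c 1 "$INIT_IP"  # target -> initiator
}

setup_topology echo
```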
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:06:59.377 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:59.377 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:06:59.377 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:59.377 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:59.377 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # nvmfpid=3029141 00:06:59.377 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:59.377 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # waitforlisten 3029141 00:06:59.377 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 3029141 ']' 00:06:59.377 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.377 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:59.377 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:59.377 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:59.377 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:59.377 [2024-09-29 16:14:59.901995] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:59.377 [2024-09-29 16:14:59.902170] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:59.635 [2024-09-29 16:15:00.065977] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:59.893 [2024-09-29 16:15:00.338823] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:59.893 [2024-09-29 16:15:00.338904] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:59.893 [2024-09-29 16:15:00.338930] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:59.893 [2024-09-29 16:15:00.338955] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:59.893 [2024-09-29 16:15:00.338986] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
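The target is launched with `-m 0xE`, a core mask selecting cores 1-3 (binary `1110`), which matches the reactor-startup notices on cores 1, 2 and 3 that follow; core 0 is left free for the perf initiator, which is pinned with `-c 0x1`. Decoding such a mask is straightforward; this helper is illustrative, not SPDK code:

```shell
#!/usr/bin/env bash
# Decode an SPDK/DPDK core mask (the -m/-c argument) into a list of core ids.
mask_to_cores() {
    local mask=$(( $1 )) core=0 cores=()
    while (( mask )); do
        (( mask & 1 )) && cores+=("$core")   # bit set -> core is in the mask
        mask=$(( mask >> 1 ))
        core=$(( core + 1 ))
    done
    echo "${cores[*]}"
}

mask_to_cores 0xE    # target mask -> prints: 1 2 3
mask_to_cores 0x1    # perf -c mask -> prints: 0
```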
00:06:59.893 [2024-09-29 16:15:00.339123] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:59.893 [2024-09-29 16:15:00.339170] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.893 [2024-09-29 16:15:00.339176] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:00.460 16:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:00.460 16:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:07:00.460 16:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:07:00.460 16:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:00.460 16:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:00.460 16:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:00.460 16:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:00.460 16:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:00.717 [2024-09-29 16:15:01.202278] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:00.717 16:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:00.975 16:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:01.233 [2024-09-29 16:15:01.776484] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:01.490 16:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:01.749 16:15:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:02.007 Malloc0 00:07:02.007 16:15:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:02.265 Delay0 00:07:02.265 16:15:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.523 16:15:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:02.781 NULL1 00:07:02.781 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:03.039 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3029805 00:07:03.039 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3029805 00:07:03.039 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.039 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:03.298 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.555 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:03.556 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:03.812 true 00:07:03.812 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3029805 00:07:03.812 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.069 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.327 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:04.327 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:04.892 true 00:07:04.892 16:15:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3029805 00:07:04.892 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.892 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.459 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:05.459 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:05.459 true 00:07:05.459 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3029805 00:07:05.459 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.391 Read completed with error (sct=0, sc=11) 00:07:06.391 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.957 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:06.957 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:06.957 true 00:07:06.957 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 3029805 00:07:06.957 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.523 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.781 16:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:07.781 16:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:08.039 true 00:07:08.039 16:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3029805 00:07:08.039 16:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.297 16:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.556 16:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:08.556 16:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:08.813 true 00:07:08.813 16:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3029805 00:07:08.813 16:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.745 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.745 16:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.745 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:10.003 16:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:10.003 16:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:10.261 true 00:07:10.261 16:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3029805 00:07:10.261 16:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.519 16:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.776 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:10.776 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:11.034 true 00:07:11.034 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3029805 00:07:11.034 16:15:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.293 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.551 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:11.551 16:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:11.809 true 00:07:11.809 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3029805 00:07:11.809 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.743 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.743 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.743 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:13.001 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:13.001 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:13.259 true 00:07:13.259 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill 
-0 3029805 00:07:13.259 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.517 16:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.775 16:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:13.775 16:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:14.033 true 00:07:14.033 16:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3029805 00:07:14.033 16:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.964 16:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.964 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.221 16:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:15.221 16:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:15.479 true 00:07:15.479 16:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3029805 00:07:15.479 16:15:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.736 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.993 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:15.993 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:16.558 true 00:07:16.558 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3029805 00:07:16.558 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.558 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.815 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:16.815 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:17.071 true 00:07:17.327 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3029805 00:07:17.327 16:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.255 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.255 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:18.255 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:18.512 true 00:07:18.769 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3029805 00:07:18.769 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.026 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.284 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:19.284 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:19.542 true 00:07:19.542 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3029805 00:07:19.542 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.800 
16:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.058 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:20.058 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:20.316 true 00:07:20.316 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3029805 00:07:20.316 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.250 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.508 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:21.508 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:21.766 true 00:07:21.766 16:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3029805 00:07:21.766 16:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.024 16:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.281 16:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:22.281 16:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:22.540 true 00:07:22.540 16:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3029805 00:07:22.540 16:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.798 16:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.057 16:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:23.057 16:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:23.361 true 00:07:23.361 16:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3029805 00:07:23.361 16:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.317 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.575 
16:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:24.575 16:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:24.833 true 00:07:24.833 16:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3029805 00:07:24.833 16:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.091 16:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.349 16:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:25.349 16:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:25.606 true 00:07:25.862 16:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3029805 00:07:25.862 16:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.120 16:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.378 16:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:26.378 16:15:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:26.637 true 00:07:26.637 16:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3029805 00:07:26.637 16:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.570 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:27.570 16:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.827 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:27.827 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:28.085 true 00:07:28.085 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3029805 00:07:28.085 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.343 16:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.601 16:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:28.601 16:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:28.858 true 00:07:28.858 16:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3029805 00:07:28.858 16:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.116 16:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.374 16:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:29.374 16:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:29.633 true 00:07:29.633 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3029805 00:07:29.633 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.568 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.825 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.083 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:31.083 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:31.341 true 00:07:31.341 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3029805 00:07:31.341 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.599 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.857 16:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:31.857 16:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:32.115 true 00:07:32.115 16:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3029805 00:07:32.115 16:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.373 16:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.631 16:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:32.631 16:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:32.889 true 00:07:32.889 16:15:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3029805 00:07:32.889 16:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.824 Initializing NVMe Controllers 00:07:33.824 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:33.824 Controller IO queue size 128, less than required. 00:07:33.824 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:33.824 Controller IO queue size 128, less than required. 00:07:33.824 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:33.824 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:33.824 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:33.824 Initialization complete. Launching workers. 
00:07:33.824 ======================================================== 00:07:33.824 Latency(us) 00:07:33.824 Device Information : IOPS MiB/s Average min max 00:07:33.824 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 276.46 0.13 169154.28 4402.20 1033587.16 00:07:33.824 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5911.13 2.89 21587.24 4345.35 484287.75 00:07:33.824 ======================================================== 00:07:33.824 Total : 6187.59 3.02 28180.61 4345.35 1033587.16 00:07:33.824 00:07:33.824 16:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.082 16:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:07:34.082 16:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:07:34.340 true 00:07:34.340 16:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3029805 00:07:34.340 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3029805) - No such process 00:07:34.340 16:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3029805 00:07:34.340 16:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.598 16:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:34.856 
16:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:34.856 16:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:34.856 16:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:34.856 16:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:34.856 16:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:35.115 null0 00:07:35.115 16:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:35.115 16:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:35.115 16:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:35.374 null1 00:07:35.374 16:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:35.374 16:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:35.374 16:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:35.632 null2 00:07:35.632 16:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:35.632 16:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:35.632 16:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:35.890 null3 00:07:35.890 16:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:35.890 16:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:35.890 16:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:36.148 null4 00:07:36.148 16:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:36.148 16:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:36.148 16:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:36.405 null5 00:07:36.405 16:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:36.405 16:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:36.405 16:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:36.662 null6 00:07:36.662 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:36.662 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:36.662 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:36.921 null7 00:07:36.921 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:36.921 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:36.921 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:36.921 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:36.921 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:36.921 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:36.921 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:36.921 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:36.921 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:36.921 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:36.921 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.921 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:36.921 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:36.921 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:36.921 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:36.921 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:36.921 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:36.921 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:36.921 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.921 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:36.921 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:36.921 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:36.921 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:36.921 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:36.921 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:36.921 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:36.921 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.921 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:36.921 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:36.921 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:36.921 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:36.921 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:36.922 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:36.922 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:36.922 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.922 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:36.922 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:36.922 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:36.922 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:36.922 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:36.922 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:36.922 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:36.922 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.922 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:36.922 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:36.922 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:36.922 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:36.922 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:36.922 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:36.922 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:36.922 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.922 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:36.922 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:36.922 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:36.922 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:36.922 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:36.922 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:36.922 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:36.922 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.922 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:36.922 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:36.922 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:36.922 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:36.922 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:36.922 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:36.922 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:36.922 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3034386 3034387 3034389 3034391 3034393 3034395 3034397 3034399 00:07:36.922 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.922 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:37.180 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:37.180 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:37.180 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:37.438 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:37.438 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:37.438 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.438 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:37.438 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:37.696 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.696 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.696 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:37.696 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.696 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.696 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 
00:07:37.696 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.696 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.696 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:37.696 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.696 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.696 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:37.696 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.696 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.696 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:37.696 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.696 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.696 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:37.696 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.696 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.696 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:37.696 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.696 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.696 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:37.954 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:37.955 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:37.955 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:37.955 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:37.955 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:07:37.955 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:37.955 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:37.955 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.213 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.213 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.213 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:38.213 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.213 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.213 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:38.213 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.213 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.213 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:38.213 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.213 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.213 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.213 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:38.213 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.213 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:38.213 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.213 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.213 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:38.213 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.213 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.213 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:38.213 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.213 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.213 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:38.471 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:38.471 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:38.472 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:38.472 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:38.472 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:38.472 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:38.472 16:15:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:38.472 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.730 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.730 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.730 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:38.730 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.730 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.730 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:38.730 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.730 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.730 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:38.730 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:38.730 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:38.730 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:38.730 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:38.730 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:38.730 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:38.730 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:38.730 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:38.730 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:38.730 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:38.730 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:38.730 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:38.730 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:38.730 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:38.730 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:38.988 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:39.246 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:39.246 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:39.246 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:39.246 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:39.246 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:39.246 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:39.246 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:39.504 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:39.504 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:39.504 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:39.504 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:39.504 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:39.504 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:39.504 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:39.504 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:39.504 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:39.504 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:39.504 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:39.504 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:39.504 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:39.504 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:39.504 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:39.504 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:39.504 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:39.504 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:39.504 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:39.504 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:39.504 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:39.504 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:39.504 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:39.504 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:39.762 16:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:39.762 16:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:39.762 16:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:39.762 16:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:39.762 16:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:39.762 16:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:39.762 16:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:39.762 16:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:40.021 16:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:40.021 16:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:40.021 16:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:40.021 16:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:40.021 16:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:40.021 16:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:40.021 16:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:40.021 16:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:40.021 16:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:40.021 16:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:40.021 16:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:40.021 16:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:40.021 16:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:40.021 16:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:40.021 16:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:40.021 16:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:40.021 16:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:40.021 16:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:40.021 16:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:40.021 16:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:40.021 16:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:40.021 16:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:40.021 16:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:40.021 16:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:40.280 16:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:40.280 16:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:40.280 16:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:40.280 16:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:40.280 16:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:40.280 16:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:40.280 16:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:40.280 16:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:40.583 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:40.583 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:40.583 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:40.583 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:40.583 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:40.583 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:40.583 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:40.583 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:40.583 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:40.583 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:40.583 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:40.583 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:40.583 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:40.583 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:40.583 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:40.583 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:40.583 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:40.583 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:40.583 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:40.583 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:40.583 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:40.583 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:40.583 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:40.583 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:40.841 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:40.841 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:40.841 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:41.098 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:41.098 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:41.098 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:41.098 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:41.098 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:41.355 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:41.355 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:41.355 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:41.355 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:41.355 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:41.355 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:41.355 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:41.355 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:41.355 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:41.355 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:41.355 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:41.355 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:41.355 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:41.355 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:41.355 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:41.355 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:41.355 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:41.355 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:41.355 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:41.355 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:41.356 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:41.356 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:41.356 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:41.356 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:41.614 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:41.614 16:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:41.614 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:41.614 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:41.614 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:41.614 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:41.615 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:41.615 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:41.873 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:41.873 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:41.873 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:41.873 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:41.873 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:41.873 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:41.873 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:41.873 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:41.873 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:41.873 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:41.873 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:41.873 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:41.873 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:41.873 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:41.873 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:41.873 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:41.873 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:41.873 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:41.873 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:41.873 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:41.873 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:41.873 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:41.873 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:41.873 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:42.131 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:42.131 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:42.131 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:42.131 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:42.131 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:42.131 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:42.131 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:42.131 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:42.389 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:42.389 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:42.389 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:42.389 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:42.389 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:42.389 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:42.389 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:42.389 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:42.389 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:42.389 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:42.389 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:42.389 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:42.389 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:42.389 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:42.389 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:42.389 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:42.389 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:42.389 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:42.389 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:42.389 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:42.389 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:42.389 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:42.389 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:42.389 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:42.646 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:42.646 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:42.646 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:42.903 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:42.903 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:42.903 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:42.903 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:42.903 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:43.161 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:43.161 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:43.161 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:43.161 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:43.161 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:43.161 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:43.161 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:43.161 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:43.161 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:43.161 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:43.161 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:43.161 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:43.161 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:43.161 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.161 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.161 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.161 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:43.161 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:43.161 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:07:43.161 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:43.161 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:43.161 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:43.161 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:43.161 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:43.161 rmmod nvme_tcp 00:07:43.161 rmmod nvme_fabrics 00:07:43.161 rmmod nvme_keyring 00:07:43.161 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:43.161 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:43.161 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:43.161 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@513 -- # '[' -n 3029141 ']' 00:07:43.161 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # killprocess 3029141 00:07:43.161 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # 
'[' -z 3029141 ']' 00:07:43.161 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 3029141 00:07:43.161 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:07:43.161 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:43.161 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3029141 00:07:43.161 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:43.161 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:43.161 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3029141' 00:07:43.161 killing process with pid 3029141 00:07:43.161 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 3029141 00:07:43.161 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 3029141 00:07:44.534 16:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:07:44.534 16:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:07:44.534 16:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:07:44.534 16:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:44.534 16:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-save 00:07:44.534 16:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:07:44.534 16:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@787 -- # iptables-restore 00:07:44.534 16:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:44.534 16:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:44.534 16:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:44.534 16:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:44.534 16:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.068 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:47.068 00:07:47.068 real 0m49.663s 00:07:47.068 user 3m46.474s 00:07:47.068 sys 0m16.515s 00:07:47.068 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:47.068 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:47.068 ************************************ 00:07:47.068 END TEST nvmf_ns_hotplug_stress 00:07:47.068 ************************************ 00:07:47.068 16:15:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:47.068 16:15:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:47.068 16:15:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:47.068 16:15:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:47.068 ************************************ 00:07:47.068 START TEST nvmf_delete_subsystem 00:07:47.068 ************************************ 00:07:47.068 
16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:47.068 * Looking for test storage... 00:07:47.068 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:47.068 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:47.068 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:07:47.068 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:47.068 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:47.068 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:47.068 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:47.068 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:47.068 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:47.068 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:47.068 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:47.068 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:47.068 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:47.068 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:47.068 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:47.068 16:15:47 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:47.068 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:47.068 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:47.068 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:47.068 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:47.068 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:47.068 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:47.068 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:47.068 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:47.068 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:47.068 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:47.068 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:47.068 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:47.068 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:47.068 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:47.068 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:47.068 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:47.068 16:15:47 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:47.068 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:47.068 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:47.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.068 --rc genhtml_branch_coverage=1 00:07:47.068 --rc genhtml_function_coverage=1 00:07:47.068 --rc genhtml_legend=1 00:07:47.068 --rc geninfo_all_blocks=1 00:07:47.068 --rc geninfo_unexecuted_blocks=1 00:07:47.068 00:07:47.068 ' 00:07:47.068 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:47.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.068 --rc genhtml_branch_coverage=1 00:07:47.068 --rc genhtml_function_coverage=1 00:07:47.068 --rc genhtml_legend=1 00:07:47.068 --rc geninfo_all_blocks=1 00:07:47.068 --rc geninfo_unexecuted_blocks=1 00:07:47.068 00:07:47.068 ' 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:47.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.069 --rc genhtml_branch_coverage=1 00:07:47.069 --rc genhtml_function_coverage=1 00:07:47.069 --rc genhtml_legend=1 00:07:47.069 --rc geninfo_all_blocks=1 00:07:47.069 --rc geninfo_unexecuted_blocks=1 00:07:47.069 00:07:47.069 ' 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:47.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.069 --rc genhtml_branch_coverage=1 00:07:47.069 --rc genhtml_function_coverage=1 00:07:47.069 --rc genhtml_legend=1 00:07:47.069 --rc geninfo_all_blocks=1 00:07:47.069 --rc geninfo_unexecuted_blocks=1 00:07:47.069 00:07:47.069 ' 
00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:47.069 16:15:47 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:47.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:47.069 16:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:48.971 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:48.971 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:48.971 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:48.971 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:48.971 16:15:49 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:48.971 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:48.971 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:48.971 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:48.971 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:48.971 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:48.971 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:48.972 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound 
]] 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:48.972 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ 
tcp == tcp ]] 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:48.972 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:48.972 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:48.972 16:15:49 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # is_hw=yes 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:48.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:48.972 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:07:48.972 00:07:48.972 --- 10.0.0.2 ping statistics --- 00:07:48.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.972 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:48.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:48.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:07:48.972 00:07:48.972 --- 10.0.0.1 ping statistics --- 00:07:48.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.972 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # return 0 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:07:48.972 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:48.973 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:07:48.973 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:07:48.973 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:48.973 16:15:49 
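The trace above shows `nvmf_tcp_init` (nvmf/common.sh) building the test topology: the target-side port is moved into its own network namespace, both sides get a 10.0.0.x/24 address, an iptables rule opens TCP port 4420, and a ping in each direction verifies reachability. A dry-run sketch of that sequence is below; the `run()` wrapper is illustrative only (a real run needs root and the actual `cvl_0_0`/`cvl_0_1` devices), it just prints the commands it would execute.

```shell
#!/bin/sh
# Dry-run sketch of the netns topology nvmf_tcp_init builds in the log above.
# run() is a hypothetical wrapper: with DRY_RUN=1 it only echoes the command.
DRY_RUN=1
run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "+ $*"
    else
        "$@"
    fi
}

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"                       # target runs in its own namespace
run ip link set cvl_0_0 netns "$NS"          # move the target-side port into it
run ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator IP stays on the host
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                       # initiator -> target reachability
```

Subsequent RPCs are then prefixed with `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` array in the log) so the target application sees only the namespaced interface.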
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:07:48.973 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:48.973 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:48.973 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # nvmfpid=3037418 00:07:48.973 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:48.973 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # waitforlisten 3037418 00:07:48.973 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 3037418 ']' 00:07:48.973 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.973 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:48.973 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.973 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:48.973 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:49.231 [2024-09-29 16:15:49.556030] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:49.231 [2024-09-29 16:15:49.556170] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:49.231 [2024-09-29 16:15:49.694651] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:49.488 [2024-09-29 16:15:49.953053] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:49.489 [2024-09-29 16:15:49.953146] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:49.489 [2024-09-29 16:15:49.953171] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:49.489 [2024-09-29 16:15:49.953197] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:49.489 [2024-09-29 16:15:49.953228] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:49.489 [2024-09-29 16:15:49.953374] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.489 [2024-09-29 16:15:49.953380] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.055 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:50.055 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:07:50.055 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:07:50.055 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:50.055 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:50.055 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:50.055 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:50.055 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.055 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:50.055 [2024-09-29 16:15:50.566219] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:50.055 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.055 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:50.055 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.055 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:07:50.055 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.055 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:50.055 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.055 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:50.055 [2024-09-29 16:15:50.584164] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:50.055 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.055 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:50.055 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.055 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:50.055 NULL1 00:07:50.055 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.055 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:50.055 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.055 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:50.055 Delay0 00:07:50.055 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.055 16:15:50 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.055 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.055 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:50.055 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.055 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3037571 00:07:50.055 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:50.055 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:50.314 [2024-09-29 16:15:50.708429] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
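The `rpc_cmd` calls traced above set up the target before perf starts: create the TCP transport, create subsystem `cnode1`, add a listener on 10.0.0.2:4420, create a null bdev, wrap it in a delay bdev (so I/O stays in flight long enough for the delete to race it), and attach it as a namespace. Written out as plain commands, with `rpc()` as a stand-in stub that only prints (real runs go through `scripts/rpc.py` against the target's `/var/tmp/spdk.sock`):

```shell
#!/bin/sh
# Sketch of the RPC sequence delete_subsystem.sh drives through rpc_cmd.
# rpc() is a hypothetical stub for illustration; it just echoes the call.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512       # 1000 MiB null bdev, 512-byte blocks
rpc bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1s added latency per op
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# spdk_nvme_perf then runs against the listener while nvmf_delete_subsystem
# is issued, which is what produces the queued-I/O failures in the log.
```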
00:07:52.213 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:52.213 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.213 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:52.471 Read completed with error (sct=0, sc=8) 00:07:52.471 Read completed with error (sct=0, sc=8) 00:07:52.471 starting I/O failed: -6 00:07:52.471 Write completed with error (sct=0, sc=8) 00:07:52.471 Read completed with error (sct=0, sc=8) 00:07:52.471 Read completed with error (sct=0, sc=8) 00:07:52.471 Write completed with error (sct=0, sc=8) 00:07:52.471 starting I/O failed: -6 00:07:52.471 Write completed with error (sct=0, sc=8) 00:07:52.471 Read completed with error (sct=0, sc=8) 00:07:52.471 Read completed with error (sct=0, sc=8) 00:07:52.471 Read completed with error (sct=0, sc=8) 00:07:52.471 starting I/O failed: -6 00:07:52.471 Read completed with error (sct=0, sc=8) 00:07:52.471 Read completed with error (sct=0, sc=8) 00:07:52.471 Write completed with error (sct=0, sc=8) 00:07:52.471 Write completed with error (sct=0, sc=8) 00:07:52.471 starting I/O failed: -6 00:07:52.471 Read completed with error (sct=0, sc=8) 00:07:52.471 Read completed with error (sct=0, sc=8) 00:07:52.471 Write completed with error (sct=0, sc=8) 00:07:52.471 Write completed with error (sct=0, sc=8) 00:07:52.471 starting I/O failed: -6 00:07:52.471 Read completed with error (sct=0, sc=8) 00:07:52.471 Read completed with error (sct=0, sc=8) 00:07:52.471 Read completed with error (sct=0, sc=8) 00:07:52.471 Read completed with error (sct=0, sc=8) 00:07:52.471 starting I/O failed: -6 00:07:52.471 Read completed with error (sct=0, sc=8) 00:07:52.471 Write completed with error (sct=0, sc=8) 00:07:52.471 Write completed with error (sct=0, sc=8) 00:07:52.471 Read completed with error 
(sct=0, sc=8) 00:07:52.471 starting I/O failed: -6 00:07:52.471 Write completed with error (sct=0, sc=8) 00:07:52.471 Read completed with error (sct=0, sc=8) 00:07:52.471 Write completed with error (sct=0, sc=8) 00:07:52.471 Read completed with error (sct=0, sc=8) 00:07:52.471 starting I/O failed: -6 00:07:52.471 Read completed with error (sct=0, sc=8) 00:07:52.471 Write completed with error (sct=0, sc=8) 00:07:52.471 Read completed with error (sct=0, sc=8) 00:07:52.471 Read completed with error (sct=0, sc=8) 00:07:52.471 starting I/O failed: -6 00:07:52.471 Read completed with error (sct=0, sc=8) 00:07:52.471 Read completed with error (sct=0, sc=8) 00:07:52.471 Read completed with error (sct=0, sc=8) 00:07:52.471 Read completed with error (sct=0, sc=8) 00:07:52.471 starting I/O failed: -6 00:07:52.471 Read completed with error (sct=0, sc=8) 00:07:52.471 Read completed with error (sct=0, sc=8) 00:07:52.471 Read completed with error (sct=0, sc=8) 00:07:52.471 Read completed with error (sct=0, sc=8) 00:07:52.471 starting I/O failed: -6 00:07:52.471 Read completed with error (sct=0, sc=8) 00:07:52.471 Read completed with error (sct=0, sc=8) 00:07:52.471 Read completed with error (sct=0, sc=8) 00:07:52.471 [2024-09-29 16:15:52.807646] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016600 is same with the state(6) to be set 00:07:52.471 Read completed with error (sct=0, sc=8) 00:07:52.471 Write completed with error (sct=0, sc=8) 00:07:52.471 Write completed with error (sct=0, sc=8) 00:07:52.471 starting I/O failed: -6 00:07:52.471 Read completed with error (sct=0, sc=8) 00:07:52.471 Write completed with error (sct=0, sc=8) 00:07:52.471 Write completed with error (sct=0, sc=8) 00:07:52.471 Write completed with error (sct=0, sc=8) 00:07:52.471 starting I/O failed: -6 00:07:52.471 Write completed with error (sct=0, sc=8) 00:07:52.471 Read completed with error (sct=0, sc=8) 00:07:52.471 Read completed with error (sct=0, sc=8) 
00:07:52.471 Read completed with error (sct=0, sc=8) 00:07:52.471 starting I/O failed: -6 00:07:52.471 Read completed with error (sct=0, sc=8) 00:07:52.471 Read completed with error (sct=0, sc=8) 00:07:52.471 Read completed with error (sct=0, sc=8) 00:07:52.471 Read completed with error (sct=0, sc=8) 00:07:52.471 starting I/O failed: -6 00:07:52.471 Write completed with error (sct=0, sc=8) 00:07:52.471 Read completed with error (sct=0, sc=8) 00:07:52.472 Write completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 starting I/O failed: -6 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 starting I/O failed: -6 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Write completed with error (sct=0, sc=8) 00:07:52.472 Write completed with error (sct=0, sc=8) 00:07:52.472 Write completed with error (sct=0, sc=8) 00:07:52.472 starting I/O failed: -6 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Write completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 starting I/O failed: -6 00:07:52.472 Write completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 starting I/O failed: -6 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 [2024-09-29 16:15:52.808873] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020100 is same with the state(6) to be set 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Read 
completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Write completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Write completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Write completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Write completed with error (sct=0, sc=8) 00:07:52.472 Write completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Write completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Write completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Write completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Write completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Write completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Write completed with error (sct=0, sc=8) 00:07:52.472 Write completed with error (sct=0, sc=8) 00:07:52.472 Write completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Write completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Write completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Write completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error 
(sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Write completed with error (sct=0, sc=8) 00:07:52.472 Write completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Write completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Write completed with error (sct=0, sc=8) 00:07:52.472 Write completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Write completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Write completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Write completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Write completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Write completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 
Read completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Write completed with error (sct=0, sc=8) 00:07:52.472 Write completed with error (sct=0, sc=8) 00:07:52.472 Write completed with error (sct=0, sc=8) 00:07:52.472 Write completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Write completed with error (sct=0, sc=8) 00:07:52.472 Write completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Read completed with error (sct=0, sc=8) 00:07:52.472 Write completed with error (sct=0, sc=8) 00:07:52.472 [2024-09-29 16:15:52.810281] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001fe80 is same with the state(6) to be set 00:07:53.408 [2024-09-29 16:15:53.767202] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015c00 is same with the state(6) to be set 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Write completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Write completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Write completed with error (sct=0, sc=8) 
00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Write completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Write completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Write completed with error (sct=0, sc=8) 00:07:53.408 [2024-09-29 16:15:53.813374] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016880 is same with the state(6) to be set 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Write completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Write completed with error (sct=0, sc=8) 00:07:53.408 Write completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Write completed with error (sct=0, sc=8) 00:07:53.408 Write completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Write completed with error (sct=0, sc=8) 00:07:53.408 
Read completed with error (sct=0, sc=8) 00:07:53.408 Write completed with error (sct=0, sc=8) 00:07:53.408 [2024-09-29 16:15:53.814559] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016b00 is same with the state(6) to be set 00:07:53.408 Write completed with error (sct=0, sc=8) 00:07:53.408 Write completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Write completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Write completed with error (sct=0, sc=8) 00:07:53.408 Write completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Write completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 [2024-09-29 16:15:53.815189] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020380 is same with the state(6) to be set 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 
Write completed with error (sct=0, sc=8) 00:07:53.408 Write completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Write completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Write completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 Read completed with error (sct=0, sc=8) 00:07:53.408 [2024-09-29 16:15:53.816768] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016380 is same with the state(6) to be set 00:07:53.408 16:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.408 Initializing NVMe Controllers 00:07:53.408 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:53.408 Controller IO queue size 128, less than required. 00:07:53.408 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:53.408 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:53.408 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:53.408 Initialization complete. Launching workers. 
00:07:53.408 ======================================================== 00:07:53.408 Latency(us) 00:07:53.408 Device Information : IOPS MiB/s Average min max 00:07:53.408 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 169.47 0.08 970932.59 1074.58 1046168.10 00:07:53.408 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 152.62 0.07 911105.98 664.90 2007132.69 00:07:53.408 ======================================================== 00:07:53.408 Total : 322.10 0.16 942583.98 664.90 2007132.69 00:07:53.408 00:07:53.408 16:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:53.408 16:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3037571 00:07:53.408 [2024-09-29 16:15:53.821572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000015c00 16:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:53.408 (9): Bad file descriptor 00:07:53.408 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:54.048 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:54.048 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3037571 00:07:54.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3037571) - No such process 00:07:54.048 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3037571 00:07:54.048 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:07:54.048 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3037571 00:07:54.048 16:15:54 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:07:54.048 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:54.048 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:07:54.048 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:54.048 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3037571 00:07:54.048 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:07:54.048 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:54.048 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:54.048 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:54.048 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:54.048 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.048 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:54.048 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.048 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:54.048 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.048 16:15:54 
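The `NOT wait 3037571` trace above is autotest_common.sh's inversion helper at work: the step passes only because `wait` fails, confirming the perf process is already gone. A minimal standalone sketch of that pattern (the helper body here is simplified, not the exact autotest_common.sh implementation, which also checks the error code):

```shell
#!/bin/sh
# Simplified sketch of the NOT-helper pattern: succeed only when the
# wrapped command fails, e.g. wait on a PID that is no longer a child.
NOT() {
    if "$@"; then
        return 1        # command unexpectedly succeeded
    fi
    return 0            # command failed, which is what we wanted
}

sleep 0.1 &
pid=$!
wait "$pid"             # reap it; a second wait on the same PID must fail
if NOT wait "$pid" 2>/dev/null; then
    echo "second wait failed as expected"
fi
```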
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:54.048 [2024-09-29 16:15:54.342471] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:54.048 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.048 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.048 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.048 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:54.048 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.048 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3037987 00:07:54.048 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:54.048 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:54.048 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3037987 00:07:54.048 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:54.048 [2024-09-29 16:15:54.452145] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:07:54.307 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:54.307 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3037987 00:07:54.307 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:54.873 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:54.873 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3037987 00:07:54.873 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:55.438 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:55.438 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3037987 00:07:55.438 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:56.003 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:56.003 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3037987 00:07:56.003 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:56.568 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:56.568 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3037987 00:07:56.568 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:56.827 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:56.827 16:15:57 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3037987 00:07:56.827 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:57.084 Initializing NVMe Controllers 00:07:57.084 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:57.084 Controller IO queue size 128, less than required. 00:07:57.084 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:57.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:57.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:57.085 Initialization complete. Launching workers. 00:07:57.085 ======================================================== 00:07:57.085 Latency(us) 00:07:57.085 Device Information : IOPS MiB/s Average min max 00:07:57.085 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1006404.62 1000304.44 1043147.78 00:07:57.085 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005764.20 1000251.36 1016777.92 00:07:57.085 ======================================================== 00:07:57.085 Total : 256.00 0.12 1006084.41 1000251.36 1043147.78 00:07:57.085 00:07:57.343 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:57.343 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3037987 00:07:57.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3037987) - No such process 00:07:57.343 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3037987 00:07:57.343 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - 
SIGINT SIGTERM EXIT 00:07:57.343 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:57.343 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # nvmfcleanup 00:07:57.343 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:57.343 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:57.343 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:57.343 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:57.343 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:57.343 rmmod nvme_tcp 00:07:57.343 rmmod nvme_fabrics 00:07:57.601 rmmod nvme_keyring 00:07:57.601 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:57.601 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:57.601 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:57.601 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@513 -- # '[' -n 3037418 ']' 00:07:57.601 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # killprocess 3037418 00:07:57.601 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 3037418 ']' 00:07:57.601 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 3037418 00:07:57.601 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:07:57.601 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:57.601 16:15:57 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3037418 00:07:57.601 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:57.601 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:57.601 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3037418' 00:07:57.601 killing process with pid 3037418 00:07:57.601 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 3037418 00:07:57.601 16:15:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 3037418 00:07:58.976 16:15:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:07:58.976 16:15:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:07:58.976 16:15:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:07:58.976 16:15:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:58.976 16:15:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-save 00:07:58.976 16:15:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:07:58.976 16:15:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-restore 00:07:58.976 16:15:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:58.976 16:15:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:58.976 16:15:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:07:58.976 16:15:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:58.976 16:15:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.879 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:00.879 00:08:00.879 real 0m14.288s 00:08:00.879 user 0m30.789s 00:08:00.879 sys 0m3.232s 00:08:00.879 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:00.879 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:00.879 ************************************ 00:08:00.879 END TEST nvmf_delete_subsystem 00:08:00.879 ************************************ 00:08:00.879 16:16:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:00.879 16:16:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:00.879 16:16:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.879 16:16:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:00.879 ************************************ 00:08:00.879 START TEST nvmf_host_management 00:08:00.879 ************************************ 00:08:00.879 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:01.139 * Looking for test storage... 
00:08:01.139 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:01.139 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:01.139 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:08:01.139 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:01.139 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:01.139 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:01.139 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:01.139 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:01.139 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:01.139 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:01.139 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:01.139 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:01.139 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:01.139 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:01.139 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:01.139 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:01.139 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:01.139 16:16:01 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:01.139 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:01.139 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:01.139 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:01.139 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:01.139 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:01.139 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:01.139 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:01.139 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:01.139 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:01.139 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:01.139 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:01.139 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:01.139 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:01.139 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:01.139 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:01.139 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:01.139 16:16:01 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:01.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.139 --rc genhtml_branch_coverage=1 00:08:01.139 --rc genhtml_function_coverage=1 00:08:01.139 --rc genhtml_legend=1 00:08:01.139 --rc geninfo_all_blocks=1 00:08:01.139 --rc geninfo_unexecuted_blocks=1 00:08:01.139 00:08:01.139 ' 00:08:01.139 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:01.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.139 --rc genhtml_branch_coverage=1 00:08:01.139 --rc genhtml_function_coverage=1 00:08:01.139 --rc genhtml_legend=1 00:08:01.139 --rc geninfo_all_blocks=1 00:08:01.139 --rc geninfo_unexecuted_blocks=1 00:08:01.139 00:08:01.139 ' 00:08:01.139 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:01.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.139 --rc genhtml_branch_coverage=1 00:08:01.139 --rc genhtml_function_coverage=1 00:08:01.139 --rc genhtml_legend=1 00:08:01.139 --rc geninfo_all_blocks=1 00:08:01.139 --rc geninfo_unexecuted_blocks=1 00:08:01.139 00:08:01.139 ' 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:01.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.140 --rc genhtml_branch_coverage=1 00:08:01.140 --rc genhtml_function_coverage=1 00:08:01.140 --rc genhtml_legend=1 00:08:01.140 --rc geninfo_all_blocks=1 00:08:01.140 --rc geninfo_unexecuted_blocks=1 00:08:01.140 00:08:01.140 ' 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:01.140 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:01.140 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:03.045 16:16:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:03.045 16:16:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:03.045 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:03.045 16:16:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:03.045 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:03.045 16:16:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:03.045 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.045 16:16:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:03.045 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # is_hw=yes 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:03.045 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:03.046 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:03.046 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:03.046 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:03.046 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:03.046 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:03.046 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:03.046 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:03.046 
16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:03.046 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:03.046 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:03.046 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:03.046 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:03.046 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:03.304 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:03.304 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:03.304 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:03.304 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:03.304 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:03.304 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:03.304 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:03.304 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:08:03.304 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:03.304 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:08:03.304 00:08:03.304 --- 10.0.0.2 ping statistics --- 00:08:03.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.304 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:08:03.304 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:03.304 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:03.304 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:08:03.304 00:08:03.304 --- 10.0.0.1 ping statistics --- 00:08:03.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.304 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:08:03.304 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:03.304 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # return 0 00:08:03.304 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:03.304 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:03.304 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:03.304 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:03.304 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:03.304 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:03.304 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:03.304 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # 
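The firewall step in the trace above shows `ipts` expanding into a tagged `iptables` call (nvmf/common.sh@287 expanding at @786). Below is a minimal reconstruction of that wrapper, with `iptables` stubbed out so it can run without root; the wrapper body is inferred from the single expanded call visible in the log, not copied from nvmf/common.sh itself:

```shell
# ipts forwards its arguments to iptables and appends an "SPDK_NVMF:" comment
# carrying the original rule spec, so teardown can later locate and delete
# exactly the rules this harness added. Body inferred from the log expansion.
ipts() {
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

# Stub: print the command instead of touching netfilter (no root needed).
iptables() { printf 'iptables %s\n' "$*"; }

# The call seen in the trace:
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

Tagging rules with a comment rather than tracking rule numbers keeps cleanup robust even if other rules are inserted concurrently.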
nvmf_host_management 00:08:03.304 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:03.304 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:03.304 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:03.304 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:03.304 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:03.304 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=3040477 00:08:03.304 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:03.304 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 3040477 00:08:03.304 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3040477 ']' 00:08:03.304 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.304 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:03.304 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:03.304 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:03.304 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:03.304 [2024-09-29 16:16:03.817811] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:03.304 [2024-09-29 16:16:03.817974] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.562 [2024-09-29 16:16:03.955492] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:03.819 [2024-09-29 16:16:04.211935] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:03.819 [2024-09-29 16:16:04.212023] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:03.819 [2024-09-29 16:16:04.212048] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:03.819 [2024-09-29 16:16:04.212072] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:03.819 [2024-09-29 16:16:04.212092] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
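The wait above (autotest_common.sh@831-864 in the trace) is `waitforlisten` polling until the freshly started `nvmf_tgt` answers on its RPC socket. A hedged sketch of that loop, reconstructed from the argument checks and `max_retries=100` visible in the xtrace — the probe itself is abstracted as a hypothetical `check_rpc` function so the loop can run anywhere, whereas the real helper drives rpc.py against `/var/tmp/spdk.sock`:

```shell
# Sketch of waitforlisten: fail fast on a missing pid, then poll the RPC
# address until it responds or max_retries is exhausted. check_rpc is a
# stand-in for the real socket probe and must be supplied by the caller.
waitforlisten() {
    pid=$1
    rpc_addr=${2:-/var/tmp/spdk.sock}
    [ -z "$pid" ] && return 1      # trace: '[' -z 3040477 ']'
    max_retries=100                # trace: local max_retries=100
    i=0
    while [ "$i" -lt "$max_retries" ]; do
        if check_rpc "$rpc_addr"; then
            return 0
        fi
        i=$((i + 1))
        sleep 1                    # the real helper polls more aggressively
    done
    return 1
}
```

The pid argument doubles as a sanity check: if the app failed to launch at all, the harness bails out before burning the retry budget.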
00:08:03.819 [2024-09-29 16:16:04.212245] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:03.819 [2024-09-29 16:16:04.212342] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:03.819 [2024-09-29 16:16:04.212401] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.819 [2024-09-29 16:16:04.212407] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:08:04.384 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:04.384 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:04.384 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:04.384 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:04.384 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:04.384 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:04.384 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:04.384 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.384 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:04.384 [2024-09-29 16:16:04.804469] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:04.384 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.384 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:04.384 16:16:04 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:04.384 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:04.384 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:04.384 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:04.384 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:04.384 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.384 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:04.384 Malloc0 00:08:04.384 [2024-09-29 16:16:04.915599] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:04.384 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.384 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:04.384 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:04.384 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:04.641 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3040654 00:08:04.641 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3040654 /var/tmp/bdevperf.sock 00:08:04.641 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3040654 ']' 00:08:04.641 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:04.641 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:04.641 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:04.641 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:04.641 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:08:04.641 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:04.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:04.641 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:08:04.641 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:04.641 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:04.641 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:04.641 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:04.641 { 00:08:04.641 "params": { 00:08:04.641 "name": "Nvme$subsystem", 00:08:04.641 "trtype": "$TEST_TRANSPORT", 00:08:04.641 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:04.641 "adrfam": "ipv4", 00:08:04.641 "trsvcid": "$NVMF_PORT", 00:08:04.641 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:04.641 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:04.641 "hdgst": ${hdgst:-false}, 
00:08:04.641 "ddgst": ${ddgst:-false} 00:08:04.641 }, 00:08:04.641 "method": "bdev_nvme_attach_controller" 00:08:04.641 } 00:08:04.641 EOF 00:08:04.641 )") 00:08:04.641 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:08:04.641 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:08:04.641 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:08:04.641 16:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:04.641 "params": { 00:08:04.641 "name": "Nvme0", 00:08:04.641 "trtype": "tcp", 00:08:04.641 "traddr": "10.0.0.2", 00:08:04.641 "adrfam": "ipv4", 00:08:04.641 "trsvcid": "4420", 00:08:04.641 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:04.641 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:04.641 "hdgst": false, 00:08:04.641 "ddgst": false 00:08:04.641 }, 00:08:04.641 "method": "bdev_nvme_attach_controller" 00:08:04.641 }' 00:08:04.641 [2024-09-29 16:16:05.041319] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:04.641 [2024-09-29 16:16:05.041457] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3040654 ] 00:08:04.641 [2024-09-29 16:16:05.177163] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.898 [2024-09-29 16:16:05.416116] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.462 Running I/O for 10 seconds... 
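The JSON handed to bdevperf via `--json /dev/fd/63` is produced by `gen_nvmf_target_json` (nvmf/common.sh@556-582), whose heredoc template and rendered output both appear above. A simplified POSIX reconstruction follows; transport, address, and port are hardcoded to this run's values rather than read from `$TEST_TRANSPORT`/`$NVMF_FIRST_TARGET_IP`/`$NVMF_PORT`, and the final `jq .` normalization pass of the real helper is dropped so the sketch has no external dependencies:

```shell
# For each subsystem id (default 0 becomes "1" via ${@:-1} in the original;
# this run passed 0), emit one bdev_nvme_attach_controller stanza and join
# the stanzas with commas.
gen_nvmf_target_json() {
    traddr=10.0.0.2      # value observed in this run
    trsvcid=4420         # value observed in this run
    config=
    sep=
    for subsystem in "${@:-1}"; do
        stanza=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "$traddr",
    "adrfam": "ipv4",
    "trsvcid": "$trsvcid",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
        config="$config$sep$stanza"
        sep=,
    done
    printf '%s\n' "$config"
}
```

Feeding the config through a process-substitution fd instead of a temp file is why the trace shows `/dev/fd/63` on the bdevperf command line.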
00:08:05.462 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:05.462 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:05.462 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:05.462 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.462 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:05.462 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.462 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:05.462 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:05.462 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:05.462 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:05.462 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:05.462 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:05.462 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:05.462 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:05.462 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:08:05.463 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:05.463 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.463 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:05.721 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.721 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=131 00:08:05.721 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 131 -ge 100 ']' 00:08:05.721 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:05.721 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:05.721 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:05.721 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:05.721 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.722 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:05.722 [2024-09-29 16:16:06.059392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:05.722 [2024-09-29 16:16:06.059472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:05.722 [2024-09-29 16:16:06.059495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000002c80 is same with the state(6) to be set 00:08:05.722 [2024-09-29 16:16:06.059523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:05.722 [2024-09-29 16:16:06.059543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:05.722 [2024-09-29 16:16:06.059560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:05.722 [2024-09-29 16:16:06.059577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:05.722 [2024-09-29 16:16:06.059595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:05.722 [2024-09-29 16:16:06.059613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:05.722 [2024-09-29 16:16:06.059630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:05.722 [2024-09-29 16:16:06.059648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:05.722 [2024-09-29 16:16:06.059682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:05.722 [2024-09-29 16:16:06.059703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:05.722 [2024-09-29 16:16:06.059721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:05.722 [2024-09-29 16:16:06.059739] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:05.722 [2024-09-29 16:16:06.059756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:05.722 [2024-09-29 16:16:06.059774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:05.722 [2024-09-29 16:16:06.059792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:05.722 [2024-09-29 16:16:06.059810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:05.722 [2024-09-29 16:16:06.059827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:05.722 [2024-09-29 16:16:06.059844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:05.722 [2024-09-29 16:16:06.059861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:05.722 [2024-09-29 16:16:06.059879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:05.722 [2024-09-29 16:16:06.059896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:05.722 [2024-09-29 16:16:06.059914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:05.722 [2024-09-29 16:16:06.059931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:05.722 
[2024-09-29 16:16:06.059949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:05.722 [2024-09-29 16:16:06.059973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:05.722 [2024-09-29 16:16:06.059991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:05.722 [2024-09-29 16:16:06.060008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:05.722 [2024-09-29 16:16:06.060030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:05.722 [2024-09-29 16:16:06.060049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:05.722 [2024-09-29 16:16:06.060066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:05.722 [2024-09-29 16:16:06.060084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:05.722 [2024-09-29 16:16:06.060101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:05.722 [2024-09-29 16:16:06.060119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:05.722 [2024-09-29 16:16:06.060136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:05.722 [2024-09-29 16:16:06.060153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the 
state(6) to be set 00:08:05.722 [2024-09-29 16:16:06.060173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:05.722 [2024-09-29 16:16:06.061513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.722 [2024-09-29 16:16:06.061592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.722 [2024-09-29 16:16:06.061637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.722 [2024-09-29 16:16:06.061670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.722 [2024-09-29 16:16:06.061707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.722 [2024-09-29 16:16:06.061729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.722 [2024-09-29 16:16:06.061754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.722 [2024-09-29 16:16:06.061776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.722 [2024-09-29 16:16:06.061800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.722 [2024-09-29 16:16:06.061822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.722 [2024-09-29 16:16:06.061847] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.722 [2024-09-29 16:16:06.061870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.722 [2024-09-29 16:16:06.061913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.722 [2024-09-29 16:16:06.061935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.722 [2024-09-29 16:16:06.061970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.722 [2024-09-29 16:16:06.061992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.722 [2024-09-29 16:16:06.062022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.722 [2024-09-29 16:16:06.062045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.722 [2024-09-29 16:16:06.062069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.722 [2024-09-29 16:16:06.062091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.722 [2024-09-29 16:16:06.062116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.722 [2024-09-29 16:16:06.062139] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.722 [2024-09-29 16:16:06.062163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.722 [2024-09-29 16:16:06.062185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.722 [2024-09-29 16:16:06.062209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.722 [2024-09-29 16:16:06.062231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.722 [2024-09-29 16:16:06.062255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.722 [2024-09-29 16:16:06.062277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.722 [2024-09-29 16:16:06.062300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.722 [2024-09-29 16:16:06.062321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.722 [2024-09-29 16:16:06.062345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.722 [2024-09-29 16:16:06.062367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.722 [2024-09-29 16:16:06.062391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.722 [2024-09-29 16:16:06.062412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.722 [2024-09-29 16:16:06.062436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.722 [2024-09-29 16:16:06.062457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.722 [2024-09-29 16:16:06.062481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.722 [2024-09-29 16:16:06.062503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.723 [2024-09-29 16:16:06.062527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.723 [2024-09-29 16:16:06.062548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.723 [2024-09-29 16:16:06.062571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.723 [2024-09-29 16:16:06.062597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.723 [2024-09-29 16:16:06.062621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.723 [2024-09-29 16:16:06.062643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.723 
[2024-09-29 16:16:06.062683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.723 [2024-09-29 16:16:06.062706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.723 [2024-09-29 16:16:06.062731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.723 [2024-09-29 16:16:06.062753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.723 [2024-09-29 16:16:06.062776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.723 [2024-09-29 16:16:06.062797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.723 [2024-09-29 16:16:06.062820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.723 [2024-09-29 16:16:06.062841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.723 [2024-09-29 16:16:06.062865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.723 [2024-09-29 16:16:06.062886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.723 [2024-09-29 16:16:06.062910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.723 [2024-09-29 16:16:06.062931] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.723 [2024-09-29 16:16:06.062954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.723 [2024-09-29 16:16:06.062982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.723 [2024-09-29 16:16:06.063006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.723 [2024-09-29 16:16:06.063027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.723 [2024-09-29 16:16:06.063051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.723 [2024-09-29 16:16:06.063072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.723 [2024-09-29 16:16:06.063095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.723 [2024-09-29 16:16:06.063117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.723 [2024-09-29 16:16:06.063140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.723 [2024-09-29 16:16:06.063161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.723 [2024-09-29 16:16:06.063189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.723 [2024-09-29 16:16:06.063211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.723 [2024-09-29 16:16:06.063235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.723 [2024-09-29 16:16:06.063256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.723 [2024-09-29 16:16:06.063280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.723 [2024-09-29 16:16:06.063301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.723 [2024-09-29 16:16:06.063325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.723 [2024-09-29 16:16:06.063346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.723 [2024-09-29 16:16:06.063369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.723 [2024-09-29 16:16:06.063390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.723 [2024-09-29 16:16:06.063413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.723 [2024-09-29 16:16:06.063435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:08:05.723 [2024-09-29 16:16:06.063458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.723 [2024-09-29 16:16:06.063479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.723 [2024-09-29 16:16:06.063503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.723 [2024-09-29 16:16:06.063524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.723 [2024-09-29 16:16:06.063547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.723 [2024-09-29 16:16:06.063569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.723 [2024-09-29 16:16:06.063593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.723 [2024-09-29 16:16:06.063614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.723 [2024-09-29 16:16:06.063638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.723 [2024-09-29 16:16:06.063664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.723 [2024-09-29 16:16:06.063697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.723 [2024-09-29 
16:16:06.063719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.723 [2024-09-29 16:16:06.063742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.723 [2024-09-29 16:16:06.063768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.723 [2024-09-29 16:16:06.063792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.723 [2024-09-29 16:16:06.063814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.723 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.723 [2024-09-29 16:16:06.063838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.723 [2024-09-29 16:16:06.063859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.723 [2024-09-29 16:16:06.063883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.723 [2024-09-29 16:16:06.063904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.723 [2024-09-29 16:16:06.063927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.723 [2024-09-29 16:16:06.063958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.723 [2024-09-29 16:16:06.063982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.723 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:05.723 [2024-09-29 16:16:06.064002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.723 [2024-09-29 16:16:06.064027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.723 [2024-09-29 16:16:06.064048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.723 [2024-09-29 16:16:06.064072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.723 [2024-09-29 16:16:06.064093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.723 [2024-09-29 16:16:06.064116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.723 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.723 [2024-09-29 16:16:06.064138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.723 [2024-09-29 16:16:06.064161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.723 [2024-09-29 16:16:06.064182] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.723 [2024-09-29 16:16:06.064206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.724 [2024-09-29 16:16:06.064227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.724 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:05.724 [2024-09-29 16:16:06.064250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.724 [2024-09-29 16:16:06.064276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.724 [2024-09-29 16:16:06.064301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.724 [2024-09-29 16:16:06.064322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.724 [2024-09-29 16:16:06.064346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.724 [2024-09-29 16:16:06.064367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.724 [2024-09-29 16:16:06.064390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.724 [2024-09-29 16:16:06.064412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:08:05.724 [2024-09-29 16:16:06.064435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.724 [2024-09-29 16:16:06.064456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.724 [2024-09-29 16:16:06.064479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.724 [2024-09-29 16:16:06.064500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.724 [2024-09-29 16:16:06.064524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.724 [2024-09-29 16:16:06.064546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.724 [2024-09-29 16:16:06.064569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:05.724 [2024-09-29 16:16:06.064590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.724 [2024-09-29 16:16:06.064644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:08:05.724 [2024-09-29 16:16:06.064948] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f2f00 was disconnected and freed. reset controller. 
00:08:05.724 [2024-09-29 16:16:06.065063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:05.724 [2024-09-29 16:16:06.065094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.724 [2024-09-29 16:16:06.065125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:05.724 [2024-09-29 16:16:06.065147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.724 [2024-09-29 16:16:06.065169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:05.724 [2024-09-29 16:16:06.065189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.724 [2024-09-29 16:16:06.065211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:05.724 [2024-09-29 16:16:06.065230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.724 [2024-09-29 16:16:06.065255] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:08:05.724 [2024-09-29 16:16:06.066441] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:05.724 task offset: 24576 on job bdev=Nvme0n1 fails 00:08:05.724 00:08:05.724 Latency(us) 00:08:05.724 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:05.724 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:05.724 Job: Nvme0n1 
ended in about 0.17 seconds with error 00:08:05.724 Verification LBA range: start 0x0 length 0x400 00:08:05.724 Nvme0n1 : 0.17 1163.49 72.72 387.83 0.00 38534.26 4514.70 41166.32 00:08:05.724 =================================================================================================================== 00:08:05.724 Total : 1163.49 72.72 387.83 0.00 38534.26 4514.70 41166.32 00:08:05.724 [2024-09-29 16:16:06.071377] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:05.724 [2024-09-29 16:16:06.071419] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:08:05.724 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.724 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:05.724 [2024-09-29 16:16:06.132874] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:06.656 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3040654 00:08:06.656 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:06.656 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:06.656 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:06.656 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:08:06.656 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:08:06.656 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:06.656 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:06.656 { 00:08:06.656 "params": { 00:08:06.656 "name": "Nvme$subsystem", 00:08:06.656 "trtype": "$TEST_TRANSPORT", 00:08:06.656 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:06.656 "adrfam": "ipv4", 00:08:06.656 "trsvcid": "$NVMF_PORT", 00:08:06.656 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:06.656 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:06.656 "hdgst": ${hdgst:-false}, 00:08:06.656 "ddgst": ${ddgst:-false} 00:08:06.656 }, 00:08:06.656 "method": "bdev_nvme_attach_controller" 00:08:06.656 } 00:08:06.656 EOF 00:08:06.656 )") 00:08:06.656 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:08:06.656 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 
00:08:06.656 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:08:06.656 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:06.656 "params": { 00:08:06.656 "name": "Nvme0", 00:08:06.656 "trtype": "tcp", 00:08:06.656 "traddr": "10.0.0.2", 00:08:06.656 "adrfam": "ipv4", 00:08:06.656 "trsvcid": "4420", 00:08:06.656 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:06.656 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:06.656 "hdgst": false, 00:08:06.656 "ddgst": false 00:08:06.656 }, 00:08:06.656 "method": "bdev_nvme_attach_controller" 00:08:06.656 }' 00:08:06.656 [2024-09-29 16:16:07.161693] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:06.656 [2024-09-29 16:16:07.161820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3040927 ] 00:08:06.913 [2024-09-29 16:16:07.291980] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.171 [2024-09-29 16:16:07.532727] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.736 Running I/O for 1 seconds... 
00:08:08.667 1344.00 IOPS, 84.00 MiB/s 00:08:08.667 Latency(us) 00:08:08.667 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:08.667 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:08.667 Verification LBA range: start 0x0 length 0x400 00:08:08.667 Nvme0n1 : 1.05 1335.05 83.44 0.00 0.00 45447.24 12281.93 50875.35 00:08:08.667 =================================================================================================================== 00:08:08.667 Total : 1335.05 83.44 0.00 0.00 45447.24 12281.93 50875.35 00:08:09.600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 3040654 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:08:09.600 16:16:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:09.600 16:16:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:09.600 16:16:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:09.600 16:16:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:09.600 16:16:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:09.600 16:16:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:09.600 16:16:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:09.600 16:16:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:09.600 16:16:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@124 -- # set +e 00:08:09.600 16:16:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:09.600 16:16:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:09.600 rmmod nvme_tcp 00:08:09.858 rmmod nvme_fabrics 00:08:09.858 rmmod nvme_keyring 00:08:09.858 16:16:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:09.858 16:16:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:09.858 16:16:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:09.858 16:16:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 3040477 ']' 00:08:09.858 16:16:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 3040477 00:08:09.858 16:16:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 3040477 ']' 00:08:09.858 16:16:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 3040477 00:08:09.858 16:16:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:08:09.858 16:16:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:09.858 16:16:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3040477 00:08:09.858 16:16:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:09.858 16:16:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:09.858 16:16:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3040477' 00:08:09.858 killing process with pid 
3040477 00:08:09.858 16:16:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 3040477 00:08:09.858 16:16:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 3040477 00:08:11.233 [2024-09-29 16:16:11.570069] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:11.233 16:16:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:11.233 16:16:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:11.233 16:16:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:11.233 16:16:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:11.233 16:16:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-save 00:08:11.233 16:16:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:11.233 16:16:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-restore 00:08:11.233 16:16:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:11.233 16:16:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:11.233 16:16:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.233 16:16:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:11.233 16:16:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.768 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:13.769 16:16:13 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:13.769 00:08:13.769 real 0m12.316s 00:08:13.769 user 0m33.680s 00:08:13.769 sys 0m3.110s 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:13.769 ************************************ 00:08:13.769 END TEST nvmf_host_management 00:08:13.769 ************************************ 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:13.769 ************************************ 00:08:13.769 START TEST nvmf_lvol 00:08:13.769 ************************************ 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:13.769 * Looking for test storage... 
00:08:13.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:13.769 16:16:13 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:13.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.769 --rc genhtml_branch_coverage=1 00:08:13.769 --rc genhtml_function_coverage=1 00:08:13.769 --rc genhtml_legend=1 00:08:13.769 --rc geninfo_all_blocks=1 00:08:13.769 --rc geninfo_unexecuted_blocks=1 
00:08:13.769 00:08:13.769 ' 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:13.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.769 --rc genhtml_branch_coverage=1 00:08:13.769 --rc genhtml_function_coverage=1 00:08:13.769 --rc genhtml_legend=1 00:08:13.769 --rc geninfo_all_blocks=1 00:08:13.769 --rc geninfo_unexecuted_blocks=1 00:08:13.769 00:08:13.769 ' 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:13.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.769 --rc genhtml_branch_coverage=1 00:08:13.769 --rc genhtml_function_coverage=1 00:08:13.769 --rc genhtml_legend=1 00:08:13.769 --rc geninfo_all_blocks=1 00:08:13.769 --rc geninfo_unexecuted_blocks=1 00:08:13.769 00:08:13.769 ' 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:13.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.769 --rc genhtml_branch_coverage=1 00:08:13.769 --rc genhtml_function_coverage=1 00:08:13.769 --rc genhtml_legend=1 00:08:13.769 --rc geninfo_all_blocks=1 00:08:13.769 --rc geninfo_unexecuted_blocks=1 00:08:13.769 00:08:13.769 ' 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:13.769 16:16:13 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:13.769 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:13.770 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:13.770 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:13.770 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.770 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.770 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.770 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:13.770 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.770 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:13.770 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:13.770 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:13.770 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:13.770 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:13.770 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:13.770 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:13.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:13.770 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:13.770 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:13.770 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:13.770 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:13.770 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:13.770 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:13.770 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:13.770 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:13.770 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:13.770 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:13.770 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:13.770 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:13.770 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:13.770 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:13.770 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.770 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:13.770 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.770 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:08:13.770 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:08:13.770 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:13.770 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:08:15.672 16:16:15 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:15.672 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:15.672 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:08:15.672 16:16:15 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.672 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:15.673 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.673 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:15.673 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:15.673 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.673 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:15.673 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:15.673 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.673 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:15.673 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.673 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:15.673 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.673 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:15.673 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:15.673 16:16:15 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.673 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:15.673 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:15.673 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.673 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:08:15.673 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # is_hw=yes 00:08:15.673 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:08:15.673 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:08:15.673 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:08:15.673 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:15.673 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:15.673 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:15.673 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:15.673 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:15.673 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:15.673 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:15.673 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:15.673 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:15.673 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:15.673 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:15.673 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:15.673 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:15.673 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:15.673 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:15.673 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:15.673 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:15.673 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:15.673 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:15.673 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:15.673 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:15.673 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:15.673 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:15.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:15.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:08:15.673 00:08:15.673 --- 10.0.0.2 ping statistics --- 00:08:15.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.673 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:08:15.673 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:15.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:15.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:08:15.673 00:08:15.673 --- 10.0.0.1 ping statistics --- 00:08:15.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.673 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:08:15.673 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:15.673 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # return 0 00:08:15.673 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:15.673 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:15.673 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:15.673 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:15.673 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:15.673 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:15.673 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:15.673 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:15.673 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:15.673 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:08:15.673 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:15.673 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=3043405 00:08:15.673 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:15.673 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 3043405 00:08:15.673 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 3043405 ']' 00:08:15.673 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.673 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:15.673 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.673 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:15.673 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:15.673 [2024-09-29 16:16:16.129435] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:15.673 [2024-09-29 16:16:16.129569] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.932 [2024-09-29 16:16:16.274591] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:16.190 [2024-09-29 16:16:16.536501] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:16.190 [2024-09-29 16:16:16.536586] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:16.190 [2024-09-29 16:16:16.536612] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:16.190 [2024-09-29 16:16:16.536636] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:16.190 [2024-09-29 16:16:16.536656] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:16.190 [2024-09-29 16:16:16.536784] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.190 [2024-09-29 16:16:16.536848] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.190 [2024-09-29 16:16:16.536854] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:16.756 16:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:16.756 16:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:08:16.756 16:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:16.756 16:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:16.756 16:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:16.756 16:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.756 16:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:17.014 [2024-09-29 16:16:17.377608] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:17.014 16:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:17.272 16:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:17.272 16:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:17.536 16:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:17.536 16:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:18.105 16:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:18.364 16:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=cab57ea0-0136-46a6-b897-1000bd8101fe 00:08:18.364 16:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cab57ea0-0136-46a6-b897-1000bd8101fe lvol 20 00:08:18.621 16:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=3360322b-0cd2-47dc-93f9-dda855a091a8 00:08:18.621 16:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:18.879 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3360322b-0cd2-47dc-93f9-dda855a091a8 00:08:19.136 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:19.394 [2024-09-29 16:16:19.754583] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:19.394 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:19.653 16:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3043965 00:08:19.653 16:16:20 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:19.653 16:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:20.592 16:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 3360322b-0cd2-47dc-93f9-dda855a091a8 MY_SNAPSHOT 00:08:21.158 16:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=8ad3aad5-ec89-4fba-8905-5ec464ffb015 00:08:21.158 16:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 3360322b-0cd2-47dc-93f9-dda855a091a8 30 00:08:21.416 16:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 8ad3aad5-ec89-4fba-8905-5ec464ffb015 MY_CLONE 00:08:21.673 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=f29ae510-51fb-4da3-bf60-4e1d66120324 00:08:21.674 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate f29ae510-51fb-4da3-bf60-4e1d66120324 00:08:22.606 16:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3043965 00:08:30.713 Initializing NVMe Controllers 00:08:30.713 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:30.713 Controller IO queue size 128, less than required. 00:08:30.713 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:30.714 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:30.714 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:30.714 Initialization complete. Launching workers. 00:08:30.714 ======================================================== 00:08:30.714 Latency(us) 00:08:30.714 Device Information : IOPS MiB/s Average min max 00:08:30.714 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8274.60 32.32 15475.64 327.00 188832.10 00:08:30.714 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8018.40 31.32 15971.96 3279.73 153191.38 00:08:30.714 ======================================================== 00:08:30.714 Total : 16293.00 63.64 15719.90 327.00 188832.10 00:08:30.714 00:08:30.714 16:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:30.714 16:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3360322b-0cd2-47dc-93f9-dda855a091a8 00:08:30.714 16:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cab57ea0-0136-46a6-b897-1000bd8101fe 00:08:30.971 16:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:30.971 16:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:30.971 16:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:30.971 16:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:30.971 16:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:30.971 16:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:30.971 16:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:30.971 16:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:30.971 16:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:30.971 rmmod nvme_tcp 00:08:30.971 rmmod nvme_fabrics 00:08:30.971 rmmod nvme_keyring 00:08:30.971 16:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:30.971 16:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:30.971 16:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:30.971 16:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 3043405 ']' 00:08:30.971 16:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 3043405 00:08:30.971 16:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 3043405 ']' 00:08:30.971 16:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 3043405 00:08:30.971 16:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:08:30.971 16:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:30.971 16:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3043405 00:08:30.971 16:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:30.971 16:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:30.971 16:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3043405' 00:08:30.971 killing process with pid 3043405 00:08:30.971 16:16:31 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 3043405 00:08:30.971 16:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 3043405 00:08:32.966 16:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:32.966 16:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:32.966 16:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:32.966 16:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:32.966 16:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-save 00:08:32.966 16:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:32.966 16:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-restore 00:08:32.966 16:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:32.966 16:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:32.966 16:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.966 16:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:32.966 16:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.871 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:34.871 00:08:34.871 real 0m21.380s 00:08:34.871 user 1m11.606s 00:08:34.871 sys 0m5.235s 00:08:34.871 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:34.871 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:34.871 ************************************ 00:08:34.871 END TEST 
nvmf_lvol 00:08:34.871 ************************************ 00:08:34.871 16:16:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:34.871 16:16:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:34.871 16:16:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:34.871 16:16:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:34.871 ************************************ 00:08:34.871 START TEST nvmf_lvs_grow 00:08:34.871 ************************************ 00:08:34.871 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:34.871 * Looking for test storage... 00:08:34.871 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:34.871 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:34.871 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:08:34.871 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:34.871 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:34.871 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:34.871 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:34.871 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:34.871 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:34.871 16:16:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:34.871 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:34.871 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:34.871 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:34.871 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:34.871 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:34.871 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:34.871 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:34.871 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:34.871 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:34.871 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:34.871 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:34.871 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:34.871 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:34.871 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:34.871 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:34.871 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:34.871 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:34.871 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:34.871 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:34.871 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:34.871 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:34.871 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:34.871 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:34.871 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:34.871 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:34.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.871 --rc genhtml_branch_coverage=1 00:08:34.871 --rc genhtml_function_coverage=1 00:08:34.871 --rc genhtml_legend=1 00:08:34.871 --rc geninfo_all_blocks=1 00:08:34.871 --rc geninfo_unexecuted_blocks=1 00:08:34.871 00:08:34.871 ' 
00:08:34.871 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:34.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.872 --rc genhtml_branch_coverage=1 00:08:34.872 --rc genhtml_function_coverage=1 00:08:34.872 --rc genhtml_legend=1 00:08:34.872 --rc geninfo_all_blocks=1 00:08:34.872 --rc geninfo_unexecuted_blocks=1 00:08:34.872 00:08:34.872 ' 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:34.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.872 --rc genhtml_branch_coverage=1 00:08:34.872 --rc genhtml_function_coverage=1 00:08:34.872 --rc genhtml_legend=1 00:08:34.872 --rc geninfo_all_blocks=1 00:08:34.872 --rc geninfo_unexecuted_blocks=1 00:08:34.872 00:08:34.872 ' 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:34.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.872 --rc genhtml_branch_coverage=1 00:08:34.872 --rc genhtml_function_coverage=1 00:08:34.872 --rc genhtml_legend=1 00:08:34.872 --rc geninfo_all_blocks=1 00:08:34.872 --rc geninfo_unexecuted_blocks=1 00:08:34.872 00:08:34.872 ' 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:34.872 16:16:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:34.872 
16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:34.872 16:16:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:34.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.872 
16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:34.872 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@354 -- # 
pci_devs=("${e810[@]}") 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:36.774 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:36.774 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:08:36.774 16:16:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:36.774 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:36.774 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # is_hw=yes 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:36.774 16:16:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:36.774 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:37.034 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:37.034 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:37.034 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:37.034 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:37.034 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:37.034 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:37.034 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:37.034 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:37.034 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:37.034 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 
10.0.0.2 00:08:37.034 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:37.034 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:08:37.034 00:08:37.034 --- 10.0.0.2 ping statistics --- 00:08:37.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.034 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:08:37.034 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:37.034 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:37.034 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:08:37.034 00:08:37.034 --- 10.0.0.1 ping statistics --- 00:08:37.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.034 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:08:37.034 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:37.034 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # return 0 00:08:37.034 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:37.034 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:37.034 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:37.034 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:37.034 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:37.034 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:37.034 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:37.034 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:37.034 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:37.034 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:37.034 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:37.034 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=3047382 00:08:37.034 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:37.034 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 3047382 00:08:37.034 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 3047382 ']' 00:08:37.035 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.035 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:37.035 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.035 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:37.035 16:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:37.035 [2024-09-29 16:16:37.582552] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:37.035 [2024-09-29 16:16:37.582703] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:37.293 [2024-09-29 16:16:37.729140] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.551 [2024-09-29 16:16:37.987064] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:37.551 [2024-09-29 16:16:37.987160] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:37.551 [2024-09-29 16:16:37.987187] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:37.552 [2024-09-29 16:16:37.987211] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:37.552 [2024-09-29 16:16:37.987231] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:37.552 [2024-09-29 16:16:37.987283] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.118 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:38.118 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:08:38.118 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:38.118 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:38.118 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:38.118 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:38.118 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:38.376 [2024-09-29 16:16:38.914688] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:38.376 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:38.376 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:38.376 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.376 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:38.634 ************************************ 00:08:38.634 START TEST lvs_grow_clean 00:08:38.634 ************************************ 00:08:38.634 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:08:38.634 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:08:38.634 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:38.634 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:38.634 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:38.634 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:38.634 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:38.634 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:38.634 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:38.634 16:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:38.892 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:38.892 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:39.150 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=1f35acde-6b78-426c-9b87-2085a3e9090e 00:08:39.150 16:16:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1f35acde-6b78-426c-9b87-2085a3e9090e 00:08:39.150 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:39.408 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:39.408 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:39.408 16:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1f35acde-6b78-426c-9b87-2085a3e9090e lvol 150 00:08:39.666 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=0f3b6668-de2e-41ce-ac70-6a68427b2601 00:08:39.666 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:39.666 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:39.924 [2024-09-29 16:16:40.405778] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:39.924 [2024-09-29 16:16:40.405925] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:39.924 true 00:08:39.924 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1f35acde-6b78-426c-9b87-2085a3e9090e 00:08:39.924 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:40.183 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:40.183 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:40.441 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0f3b6668-de2e-41ce-ac70-6a68427b2601 00:08:41.008 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:41.008 [2024-09-29 16:16:41.529491] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:41.008 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:41.266 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3047957 00:08:41.266 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:41.266 16:16:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:41.266 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3047957 /var/tmp/bdevperf.sock 00:08:41.266 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 3047957 ']' 00:08:41.267 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:41.267 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:41.267 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:41.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:41.267 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:41.267 16:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:41.525 [2024-09-29 16:16:41.911346] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:41.525 [2024-09-29 16:16:41.911516] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3047957 ] 00:08:41.525 [2024-09-29 16:16:42.052605] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.783 [2024-09-29 16:16:42.302368] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.715 16:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:42.715 16:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:08:42.716 16:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:42.973 Nvme0n1 00:08:42.973 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:43.231 [ 00:08:43.231 { 00:08:43.231 "name": "Nvme0n1", 00:08:43.231 "aliases": [ 00:08:43.231 "0f3b6668-de2e-41ce-ac70-6a68427b2601" 00:08:43.231 ], 00:08:43.231 "product_name": "NVMe disk", 00:08:43.231 "block_size": 4096, 00:08:43.231 "num_blocks": 38912, 00:08:43.231 "uuid": "0f3b6668-de2e-41ce-ac70-6a68427b2601", 00:08:43.231 "numa_id": 0, 00:08:43.231 "assigned_rate_limits": { 00:08:43.231 "rw_ios_per_sec": 0, 00:08:43.231 "rw_mbytes_per_sec": 0, 00:08:43.231 "r_mbytes_per_sec": 0, 00:08:43.231 "w_mbytes_per_sec": 0 00:08:43.231 }, 00:08:43.231 "claimed": false, 00:08:43.231 "zoned": false, 00:08:43.231 "supported_io_types": { 00:08:43.231 "read": true, 
00:08:43.231 "write": true, 00:08:43.231 "unmap": true, 00:08:43.231 "flush": true, 00:08:43.231 "reset": true, 00:08:43.231 "nvme_admin": true, 00:08:43.231 "nvme_io": true, 00:08:43.231 "nvme_io_md": false, 00:08:43.231 "write_zeroes": true, 00:08:43.231 "zcopy": false, 00:08:43.231 "get_zone_info": false, 00:08:43.231 "zone_management": false, 00:08:43.231 "zone_append": false, 00:08:43.231 "compare": true, 00:08:43.231 "compare_and_write": true, 00:08:43.231 "abort": true, 00:08:43.231 "seek_hole": false, 00:08:43.231 "seek_data": false, 00:08:43.231 "copy": true, 00:08:43.231 "nvme_iov_md": false 00:08:43.231 }, 00:08:43.231 "memory_domains": [ 00:08:43.231 { 00:08:43.231 "dma_device_id": "system", 00:08:43.231 "dma_device_type": 1 00:08:43.231 } 00:08:43.231 ], 00:08:43.231 "driver_specific": { 00:08:43.231 "nvme": [ 00:08:43.231 { 00:08:43.231 "trid": { 00:08:43.231 "trtype": "TCP", 00:08:43.231 "adrfam": "IPv4", 00:08:43.231 "traddr": "10.0.0.2", 00:08:43.231 "trsvcid": "4420", 00:08:43.231 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:43.231 }, 00:08:43.231 "ctrlr_data": { 00:08:43.231 "cntlid": 1, 00:08:43.231 "vendor_id": "0x8086", 00:08:43.231 "model_number": "SPDK bdev Controller", 00:08:43.231 "serial_number": "SPDK0", 00:08:43.231 "firmware_revision": "25.01", 00:08:43.231 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:43.231 "oacs": { 00:08:43.231 "security": 0, 00:08:43.231 "format": 0, 00:08:43.231 "firmware": 0, 00:08:43.231 "ns_manage": 0 00:08:43.231 }, 00:08:43.231 "multi_ctrlr": true, 00:08:43.231 "ana_reporting": false 00:08:43.231 }, 00:08:43.231 "vs": { 00:08:43.231 "nvme_version": "1.3" 00:08:43.231 }, 00:08:43.231 "ns_data": { 00:08:43.231 "id": 1, 00:08:43.231 "can_share": true 00:08:43.231 } 00:08:43.231 } 00:08:43.231 ], 00:08:43.231 "mp_policy": "active_passive" 00:08:43.231 } 00:08:43.231 } 00:08:43.231 ] 00:08:43.231 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=3048227 00:08:43.231 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:43.231 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:43.231 Running I/O for 10 seconds... 00:08:44.607 Latency(us) 00:08:44.607 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:44.607 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.607 Nvme0n1 : 1.00 10669.00 41.68 0.00 0.00 0.00 0.00 0.00 00:08:44.607 =================================================================================================================== 00:08:44.607 Total : 10669.00 41.68 0.00 0.00 0.00 0.00 0.00 00:08:44.607 00:08:45.174 16:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1f35acde-6b78-426c-9b87-2085a3e9090e 00:08:45.174 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:45.174 Nvme0n1 : 2.00 10795.50 42.17 0.00 0.00 0.00 0.00 0.00 00:08:45.174 =================================================================================================================== 00:08:45.174 Total : 10795.50 42.17 0.00 0.00 0.00 0.00 0.00 00:08:45.174 00:08:45.432 true 00:08:45.432 16:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1f35acde-6b78-426c-9b87-2085a3e9090e 00:08:45.432 16:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:45.999 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:45.999 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:45.999 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3048227 00:08:46.258 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:46.258 Nvme0n1 : 3.00 10795.33 42.17 0.00 0.00 0.00 0.00 0.00 00:08:46.258 =================================================================================================================== 00:08:46.258 Total : 10795.33 42.17 0.00 0.00 0.00 0.00 0.00 00:08:46.258 00:08:47.194 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.194 Nvme0n1 : 4.00 10906.50 42.60 0.00 0.00 0.00 0.00 0.00 00:08:47.194 =================================================================================================================== 00:08:47.194 Total : 10906.50 42.60 0.00 0.00 0.00 0.00 0.00 00:08:47.194 00:08:48.570 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:48.570 Nvme0n1 : 5.00 10973.00 42.86 0.00 0.00 0.00 0.00 0.00 00:08:48.570 =================================================================================================================== 00:08:48.570 Total : 10973.00 42.86 0.00 0.00 0.00 0.00 0.00 00:08:48.570 00:08:49.504 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:49.504 Nvme0n1 : 6.00 11006.83 43.00 0.00 0.00 0.00 0.00 0.00 00:08:49.504 =================================================================================================================== 00:08:49.504 Total : 11006.83 43.00 0.00 0.00 0.00 0.00 0.00 00:08:49.504 00:08:50.439 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:50.439 Nvme0n1 : 7.00 11031.00 43.09 0.00 0.00 0.00 0.00 0.00 00:08:50.439 
=================================================================================================================== 00:08:50.439 Total : 11031.00 43.09 0.00 0.00 0.00 0.00 0.00 00:08:50.439 00:08:51.376 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.376 Nvme0n1 : 8.00 11049.12 43.16 0.00 0.00 0.00 0.00 0.00 00:08:51.376 =================================================================================================================== 00:08:51.376 Total : 11049.12 43.16 0.00 0.00 0.00 0.00 0.00 00:08:51.376 00:08:52.312 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.312 Nvme0n1 : 9.00 11077.33 43.27 0.00 0.00 0.00 0.00 0.00 00:08:52.312 =================================================================================================================== 00:08:52.312 Total : 11077.33 43.27 0.00 0.00 0.00 0.00 0.00 00:08:52.312 00:08:53.247 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.247 Nvme0n1 : 10.00 11087.20 43.31 0.00 0.00 0.00 0.00 0.00 00:08:53.247 =================================================================================================================== 00:08:53.247 Total : 11087.20 43.31 0.00 0.00 0.00 0.00 0.00 00:08:53.247 00:08:53.247 00:08:53.247 Latency(us) 00:08:53.247 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:53.247 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.247 Nvme0n1 : 10.01 11087.02 43.31 0.00 0.00 11538.13 7475.96 34175.81 00:08:53.247 =================================================================================================================== 00:08:53.247 Total : 11087.02 43.31 0.00 0.00 11538.13 7475.96 34175.81 00:08:53.247 { 00:08:53.247 "results": [ 00:08:53.247 { 00:08:53.247 "job": "Nvme0n1", 00:08:53.247 "core_mask": "0x2", 00:08:53.247 "workload": "randwrite", 00:08:53.247 "status": "finished", 00:08:53.247 "queue_depth": 128, 
00:08:53.247 "io_size": 4096, 00:08:53.247 "runtime": 10.01171, 00:08:53.247 "iops": 11087.017102972419, 00:08:53.247 "mibps": 43.30866055848601, 00:08:53.247 "io_failed": 0, 00:08:53.247 "io_timeout": 0, 00:08:53.247 "avg_latency_us": 11538.130833980647, 00:08:53.247 "min_latency_us": 7475.958518518519, 00:08:53.247 "max_latency_us": 34175.81037037037 00:08:53.247 } 00:08:53.247 ], 00:08:53.247 "core_count": 1 00:08:53.247 } 00:08:53.247 16:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3047957 00:08:53.247 16:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 3047957 ']' 00:08:53.247 16:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 3047957 00:08:53.247 16:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:08:53.247 16:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:53.247 16:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3047957 00:08:53.505 16:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:53.505 16:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:53.505 16:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3047957' 00:08:53.505 killing process with pid 3047957 00:08:53.505 16:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 3047957 00:08:53.505 Received shutdown signal, test time was about 10.000000 seconds 00:08:53.505 00:08:53.505 Latency(us) 00:08:53.505 Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:08:53.505 =================================================================================================================== 00:08:53.505 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:53.505 16:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 3047957 00:08:54.439 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:54.697 16:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:54.954 16:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1f35acde-6b78-426c-9b87-2085a3e9090e 00:08:54.954 16:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:55.211 16:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:55.211 16:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:55.212 16:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:55.469 [2024-09-29 16:16:55.985898] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:55.469 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 1f35acde-6b78-426c-9b87-2085a3e9090e 00:08:55.469 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:55.469 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1f35acde-6b78-426c-9b87-2085a3e9090e 00:08:55.469 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:55.469 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:55.470 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:55.470 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:55.470 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:55.470 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:55.470 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:55.470 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:55.470 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 1f35acde-6b78-426c-9b87-2085a3e9090e 00:08:55.727 request: 00:08:55.727 { 00:08:55.727 "uuid": "1f35acde-6b78-426c-9b87-2085a3e9090e", 00:08:55.727 "method": "bdev_lvol_get_lvstores", 00:08:55.727 "req_id": 1 00:08:55.727 } 00:08:55.727 Got JSON-RPC error response 00:08:55.727 response: 00:08:55.727 { 00:08:55.727 "code": -19, 00:08:55.727 "message": "No such device" 00:08:55.727 } 00:08:55.727 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:55.727 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:55.727 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:55.727 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:55.727 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:55.984 aio_bdev 00:08:56.242 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0f3b6668-de2e-41ce-ac70-6a68427b2601 00:08:56.242 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=0f3b6668-de2e-41ce-ac70-6a68427b2601 00:08:56.242 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:56.242 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:08:56.242 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:56.242 16:16:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:56.242 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:56.499 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0f3b6668-de2e-41ce-ac70-6a68427b2601 -t 2000 00:08:56.757 [ 00:08:56.757 { 00:08:56.757 "name": "0f3b6668-de2e-41ce-ac70-6a68427b2601", 00:08:56.757 "aliases": [ 00:08:56.757 "lvs/lvol" 00:08:56.757 ], 00:08:56.757 "product_name": "Logical Volume", 00:08:56.757 "block_size": 4096, 00:08:56.757 "num_blocks": 38912, 00:08:56.757 "uuid": "0f3b6668-de2e-41ce-ac70-6a68427b2601", 00:08:56.757 "assigned_rate_limits": { 00:08:56.757 "rw_ios_per_sec": 0, 00:08:56.757 "rw_mbytes_per_sec": 0, 00:08:56.757 "r_mbytes_per_sec": 0, 00:08:56.757 "w_mbytes_per_sec": 0 00:08:56.757 }, 00:08:56.757 "claimed": false, 00:08:56.757 "zoned": false, 00:08:56.757 "supported_io_types": { 00:08:56.757 "read": true, 00:08:56.757 "write": true, 00:08:56.757 "unmap": true, 00:08:56.757 "flush": false, 00:08:56.757 "reset": true, 00:08:56.757 "nvme_admin": false, 00:08:56.757 "nvme_io": false, 00:08:56.757 "nvme_io_md": false, 00:08:56.757 "write_zeroes": true, 00:08:56.757 "zcopy": false, 00:08:56.757 "get_zone_info": false, 00:08:56.757 "zone_management": false, 00:08:56.757 "zone_append": false, 00:08:56.757 "compare": false, 00:08:56.757 "compare_and_write": false, 00:08:56.757 "abort": false, 00:08:56.757 "seek_hole": true, 00:08:56.757 "seek_data": true, 00:08:56.757 "copy": false, 00:08:56.757 "nvme_iov_md": false 00:08:56.757 }, 00:08:56.757 "driver_specific": { 00:08:56.757 "lvol": { 00:08:56.757 "lvol_store_uuid": "1f35acde-6b78-426c-9b87-2085a3e9090e", 00:08:56.757 "base_bdev": 
"aio_bdev", 00:08:56.757 "thin_provision": false, 00:08:56.757 "num_allocated_clusters": 38, 00:08:56.757 "snapshot": false, 00:08:56.757 "clone": false, 00:08:56.757 "esnap_clone": false 00:08:56.757 } 00:08:56.757 } 00:08:56.757 } 00:08:56.757 ] 00:08:56.757 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:08:56.757 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1f35acde-6b78-426c-9b87-2085a3e9090e 00:08:56.757 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:57.015 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:57.015 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1f35acde-6b78-426c-9b87-2085a3e9090e 00:08:57.015 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:57.273 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:57.273 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0f3b6668-de2e-41ce-ac70-6a68427b2601 00:08:57.530 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1f35acde-6b78-426c-9b87-2085a3e9090e 00:08:57.787 16:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:58.045 16:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:58.045 00:08:58.045 real 0m19.619s 00:08:58.045 user 0m19.436s 00:08:58.045 sys 0m1.964s 00:08:58.045 16:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:58.045 16:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:58.045 ************************************ 00:08:58.045 END TEST lvs_grow_clean 00:08:58.045 ************************************ 00:08:58.045 16:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:58.045 16:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:58.045 16:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:58.045 16:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:58.303 ************************************ 00:08:58.303 START TEST lvs_grow_dirty 00:08:58.303 ************************************ 00:08:58.303 16:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:08:58.303 16:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:58.303 16:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:58.303 16:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:58.303 16:16:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:58.303 16:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:58.303 16:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:58.303 16:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:58.303 16:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:58.303 16:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:58.562 16:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:58.562 16:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:58.821 16:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=117f86a5-94dc-4b58-b1af-0314d0fef55c 00:08:58.821 16:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 117f86a5-94dc-4b58-b1af-0314d0fef55c 00:08:58.821 16:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r 
'.[0].total_data_clusters' 00:08:59.079 16:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:59.079 16:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:59.079 16:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 117f86a5-94dc-4b58-b1af-0314d0fef55c lvol 150 00:08:59.337 16:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=f1fb89df-0d33-4bd2-ba94-57cb618878ac 00:08:59.337 16:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:59.337 16:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:59.595 [2024-09-29 16:17:00.106707] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:59.595 [2024-09-29 16:17:00.106828] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:59.595 true 00:08:59.595 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 117f86a5-94dc-4b58-b1af-0314d0fef55c 00:08:59.595 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:59.853 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( 
data_clusters == 49 )) 00:08:59.853 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:00.419 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f1fb89df-0d33-4bd2-ba94-57cb618878ac 00:09:00.677 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:00.936 [2024-09-29 16:17:01.246436] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:00.936 16:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:01.194 16:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3050368 00:09:01.194 16:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:01.194 16:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:01.194 16:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3050368 /var/tmp/bdevperf.sock 00:09:01.194 16:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@831 -- # '[' -z 3050368 ']' 00:09:01.194 16:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:01.194 16:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:01.194 16:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:01.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:01.194 16:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:01.194 16:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:01.194 [2024-09-29 16:17:01.624241] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:01.194 [2024-09-29 16:17:01.624386] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3050368 ] 00:09:01.194 [2024-09-29 16:17:01.756610] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.453 [2024-09-29 16:17:02.007902] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.390 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:02.390 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:02.390 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:02.649 Nvme0n1 00:09:02.649 16:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:03.029 [ 00:09:03.029 { 00:09:03.029 "name": "Nvme0n1", 00:09:03.029 "aliases": [ 00:09:03.029 "f1fb89df-0d33-4bd2-ba94-57cb618878ac" 00:09:03.029 ], 00:09:03.029 "product_name": "NVMe disk", 00:09:03.029 "block_size": 4096, 00:09:03.029 "num_blocks": 38912, 00:09:03.029 "uuid": "f1fb89df-0d33-4bd2-ba94-57cb618878ac", 00:09:03.029 "numa_id": 0, 00:09:03.029 "assigned_rate_limits": { 00:09:03.029 "rw_ios_per_sec": 0, 00:09:03.029 "rw_mbytes_per_sec": 0, 00:09:03.029 "r_mbytes_per_sec": 0, 00:09:03.029 "w_mbytes_per_sec": 0 00:09:03.029 }, 00:09:03.029 "claimed": false, 00:09:03.029 "zoned": false, 00:09:03.030 "supported_io_types": { 00:09:03.030 "read": true, 
00:09:03.030 "write": true, 00:09:03.030 "unmap": true, 00:09:03.030 "flush": true, 00:09:03.030 "reset": true, 00:09:03.030 "nvme_admin": true, 00:09:03.030 "nvme_io": true, 00:09:03.030 "nvme_io_md": false, 00:09:03.030 "write_zeroes": true, 00:09:03.030 "zcopy": false, 00:09:03.030 "get_zone_info": false, 00:09:03.030 "zone_management": false, 00:09:03.030 "zone_append": false, 00:09:03.030 "compare": true, 00:09:03.030 "compare_and_write": true, 00:09:03.030 "abort": true, 00:09:03.030 "seek_hole": false, 00:09:03.030 "seek_data": false, 00:09:03.030 "copy": true, 00:09:03.030 "nvme_iov_md": false 00:09:03.030 }, 00:09:03.030 "memory_domains": [ 00:09:03.030 { 00:09:03.030 "dma_device_id": "system", 00:09:03.030 "dma_device_type": 1 00:09:03.030 } 00:09:03.030 ], 00:09:03.030 "driver_specific": { 00:09:03.030 "nvme": [ 00:09:03.030 { 00:09:03.030 "trid": { 00:09:03.030 "trtype": "TCP", 00:09:03.030 "adrfam": "IPv4", 00:09:03.030 "traddr": "10.0.0.2", 00:09:03.030 "trsvcid": "4420", 00:09:03.030 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:03.030 }, 00:09:03.030 "ctrlr_data": { 00:09:03.030 "cntlid": 1, 00:09:03.030 "vendor_id": "0x8086", 00:09:03.030 "model_number": "SPDK bdev Controller", 00:09:03.030 "serial_number": "SPDK0", 00:09:03.030 "firmware_revision": "25.01", 00:09:03.030 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:03.030 "oacs": { 00:09:03.030 "security": 0, 00:09:03.030 "format": 0, 00:09:03.030 "firmware": 0, 00:09:03.030 "ns_manage": 0 00:09:03.030 }, 00:09:03.030 "multi_ctrlr": true, 00:09:03.030 "ana_reporting": false 00:09:03.030 }, 00:09:03.030 "vs": { 00:09:03.030 "nvme_version": "1.3" 00:09:03.030 }, 00:09:03.030 "ns_data": { 00:09:03.030 "id": 1, 00:09:03.030 "can_share": true 00:09:03.030 } 00:09:03.030 } 00:09:03.030 ], 00:09:03.030 "mp_policy": "active_passive" 00:09:03.030 } 00:09:03.030 } 00:09:03.030 ] 00:09:03.030 16:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=3050564 00:09:03.030 16:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:03.030 16:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:03.329 Running I/O for 10 seconds... 00:09:04.282 Latency(us) 00:09:04.282 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:04.282 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.282 Nvme0n1 : 1.00 10669.00 41.68 0.00 0.00 0.00 0.00 0.00 00:09:04.282 =================================================================================================================== 00:09:04.282 Total : 10669.00 41.68 0.00 0.00 0.00 0.00 0.00 00:09:04.282 00:09:04.848 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 117f86a5-94dc-4b58-b1af-0314d0fef55c 00:09:05.105 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.105 Nvme0n1 : 2.00 10859.00 42.42 0.00 0.00 0.00 0.00 0.00 00:09:05.105 =================================================================================================================== 00:09:05.105 Total : 10859.00 42.42 0.00 0.00 0.00 0.00 0.00 00:09:05.105 00:09:05.363 true 00:09:05.363 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 117f86a5-94dc-4b58-b1af-0314d0fef55c 00:09:05.363 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:05.621 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:05.621 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:05.621 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3050564 00:09:06.186 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.186 Nvme0n1 : 3.00 10922.33 42.67 0.00 0.00 0.00 0.00 0.00 00:09:06.186 =================================================================================================================== 00:09:06.186 Total : 10922.33 42.67 0.00 0.00 0.00 0.00 0.00 00:09:06.186 00:09:07.121 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:07.121 Nvme0n1 : 4.00 10985.75 42.91 0.00 0.00 0.00 0.00 0.00 00:09:07.121 =================================================================================================================== 00:09:07.121 Total : 10985.75 42.91 0.00 0.00 0.00 0.00 0.00 00:09:07.121 00:09:08.055 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.055 Nvme0n1 : 5.00 11036.60 43.11 0.00 0.00 0.00 0.00 0.00 00:09:08.055 =================================================================================================================== 00:09:08.055 Total : 11036.60 43.11 0.00 0.00 0.00 0.00 0.00 00:09:08.055 00:09:08.992 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.992 Nvme0n1 : 6.00 11081.00 43.29 0.00 0.00 0.00 0.00 0.00 00:09:08.992 =================================================================================================================== 00:09:08.992 Total : 11081.00 43.29 0.00 0.00 0.00 0.00 0.00 00:09:08.992 00:09:10.367 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:10.367 Nvme0n1 : 7.00 11088.29 43.31 0.00 0.00 0.00 0.00 0.00 00:09:10.367 
=================================================================================================================== 00:09:10.367 Total : 11088.29 43.31 0.00 0.00 0.00 0.00 0.00 00:09:10.367 00:09:11.301 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:11.301 Nvme0n1 : 8.00 11099.25 43.36 0.00 0.00 0.00 0.00 0.00 00:09:11.301 =================================================================================================================== 00:09:11.301 Total : 11099.25 43.36 0.00 0.00 0.00 0.00 0.00 00:09:11.301 00:09:12.236 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:12.236 Nvme0n1 : 9.00 11107.78 43.39 0.00 0.00 0.00 0.00 0.00 00:09:12.236 =================================================================================================================== 00:09:12.236 Total : 11107.78 43.39 0.00 0.00 0.00 0.00 0.00 00:09:12.236 00:09:13.171 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:13.171 Nvme0n1 : 10.00 11114.60 43.42 0.00 0.00 0.00 0.00 0.00 00:09:13.171 =================================================================================================================== 00:09:13.171 Total : 11114.60 43.42 0.00 0.00 0.00 0.00 0.00 00:09:13.171 00:09:13.171 00:09:13.171 Latency(us) 00:09:13.171 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.171 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:13.171 Nvme0n1 : 10.01 11116.93 43.43 0.00 0.00 11507.10 4417.61 22816.24 00:09:13.171 =================================================================================================================== 00:09:13.171 Total : 11116.93 43.43 0.00 0.00 11507.10 4417.61 22816.24 00:09:13.171 { 00:09:13.171 "results": [ 00:09:13.171 { 00:09:13.171 "job": "Nvme0n1", 00:09:13.171 "core_mask": "0x2", 00:09:13.171 "workload": "randwrite", 00:09:13.171 "status": "finished", 00:09:13.171 "queue_depth": 128, 
00:09:13.171 "io_size": 4096, 00:09:13.171 "runtime": 10.009421, 00:09:13.171 "iops": 11116.926743315124, 00:09:13.171 "mibps": 43.4254950910747, 00:09:13.171 "io_failed": 0, 00:09:13.171 "io_timeout": 0, 00:09:13.171 "avg_latency_us": 11507.100215830258, 00:09:13.171 "min_latency_us": 4417.6118518518515, 00:09:13.171 "max_latency_us": 22816.237037037037 00:09:13.171 } 00:09:13.171 ], 00:09:13.171 "core_count": 1 00:09:13.171 } 00:09:13.171 16:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3050368 00:09:13.171 16:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 3050368 ']' 00:09:13.171 16:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 3050368 00:09:13.171 16:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:09:13.171 16:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:13.171 16:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3050368 00:09:13.171 16:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:13.172 16:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:13.172 16:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3050368' 00:09:13.172 killing process with pid 3050368 00:09:13.172 16:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 3050368 00:09:13.172 Received shutdown signal, test time was about 10.000000 seconds 00:09:13.172 00:09:13.172 Latency(us) 00:09:13.172 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.172 =================================================================================================================== 00:09:13.172 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:13.172 16:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 3050368 00:09:14.105 16:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:14.669 16:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:14.669 16:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 117f86a5-94dc-4b58-b1af-0314d0fef55c 00:09:14.669 16:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:14.927 16:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:14.927 16:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:14.927 16:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3047382 00:09:14.927 16:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3047382 00:09:15.185 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3047382 Killed "${NVMF_APP[@]}" "$@" 00:09:15.185 16:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 
00:09:15.185 16:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:15.185 16:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:15.185 16:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:15.185 16:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:15.185 16:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=3052030 00:09:15.185 16:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:15.185 16:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 3052030 00:09:15.185 16:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3052030 ']' 00:09:15.185 16:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.185 16:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:15.185 16:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:15.185 16:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:15.185 16:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:15.185 [2024-09-29 16:17:15.630278] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:09:15.185 [2024-09-29 16:17:15.630443] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:15.443 [2024-09-29 16:17:15.780084] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.701 [2024-09-29 16:17:16.039312] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:15.701 [2024-09-29 16:17:16.039395] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:15.701 [2024-09-29 16:17:16.039422] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:15.701 [2024-09-29 16:17:16.039446] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:15.701 [2024-09-29 16:17:16.039466] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:15.701 [2024-09-29 16:17:16.039526] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.266 16:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:16.266 16:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:16.266 16:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:16.266 16:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:16.266 16:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:16.266 16:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:16.266 16:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:16.524 [2024-09-29 16:17:16.966067] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:16.524 [2024-09-29 16:17:16.966295] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:16.524 [2024-09-29 16:17:16.966376] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:16.524 16:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:16.524 16:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev f1fb89df-0d33-4bd2-ba94-57cb618878ac 00:09:16.524 16:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=f1fb89df-0d33-4bd2-ba94-57cb618878ac 
00:09:16.524 16:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:16.524 16:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:16.524 16:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:16.524 16:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:16.524 16:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:16.781 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f1fb89df-0d33-4bd2-ba94-57cb618878ac -t 2000 00:09:17.040 [ 00:09:17.040 { 00:09:17.040 "name": "f1fb89df-0d33-4bd2-ba94-57cb618878ac", 00:09:17.040 "aliases": [ 00:09:17.040 "lvs/lvol" 00:09:17.040 ], 00:09:17.040 "product_name": "Logical Volume", 00:09:17.040 "block_size": 4096, 00:09:17.040 "num_blocks": 38912, 00:09:17.040 "uuid": "f1fb89df-0d33-4bd2-ba94-57cb618878ac", 00:09:17.040 "assigned_rate_limits": { 00:09:17.040 "rw_ios_per_sec": 0, 00:09:17.040 "rw_mbytes_per_sec": 0, 00:09:17.040 "r_mbytes_per_sec": 0, 00:09:17.040 "w_mbytes_per_sec": 0 00:09:17.040 }, 00:09:17.040 "claimed": false, 00:09:17.040 "zoned": false, 00:09:17.040 "supported_io_types": { 00:09:17.040 "read": true, 00:09:17.040 "write": true, 00:09:17.040 "unmap": true, 00:09:17.040 "flush": false, 00:09:17.040 "reset": true, 00:09:17.040 "nvme_admin": false, 00:09:17.040 "nvme_io": false, 00:09:17.040 "nvme_io_md": false, 00:09:17.040 "write_zeroes": true, 00:09:17.040 "zcopy": false, 00:09:17.040 "get_zone_info": false, 00:09:17.040 "zone_management": false, 00:09:17.040 "zone_append": 
false, 00:09:17.040 "compare": false, 00:09:17.040 "compare_and_write": false, 00:09:17.040 "abort": false, 00:09:17.040 "seek_hole": true, 00:09:17.040 "seek_data": true, 00:09:17.040 "copy": false, 00:09:17.040 "nvme_iov_md": false 00:09:17.040 }, 00:09:17.040 "driver_specific": { 00:09:17.040 "lvol": { 00:09:17.040 "lvol_store_uuid": "117f86a5-94dc-4b58-b1af-0314d0fef55c", 00:09:17.040 "base_bdev": "aio_bdev", 00:09:17.040 "thin_provision": false, 00:09:17.040 "num_allocated_clusters": 38, 00:09:17.040 "snapshot": false, 00:09:17.040 "clone": false, 00:09:17.040 "esnap_clone": false 00:09:17.040 } 00:09:17.040 } 00:09:17.040 } 00:09:17.040 ] 00:09:17.040 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:17.040 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 117f86a5-94dc-4b58-b1af-0314d0fef55c 00:09:17.040 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:17.298 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:17.298 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 117f86a5-94dc-4b58-b1af-0314d0fef55c 00:09:17.298 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:17.554 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:17.554 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:09:17.811 [2024-09-29 16:17:18.347106] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:18.069 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 117f86a5-94dc-4b58-b1af-0314d0fef55c 00:09:18.069 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:18.069 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 117f86a5-94dc-4b58-b1af-0314d0fef55c 00:09:18.069 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:18.069 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:18.069 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:18.069 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:18.069 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:18.069 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:18.069 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:18.069 16:17:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:18.069 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 117f86a5-94dc-4b58-b1af-0314d0fef55c 00:09:18.326 request: 00:09:18.326 { 00:09:18.326 "uuid": "117f86a5-94dc-4b58-b1af-0314d0fef55c", 00:09:18.326 "method": "bdev_lvol_get_lvstores", 00:09:18.326 "req_id": 1 00:09:18.326 } 00:09:18.326 Got JSON-RPC error response 00:09:18.326 response: 00:09:18.326 { 00:09:18.326 "code": -19, 00:09:18.326 "message": "No such device" 00:09:18.326 } 00:09:18.326 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:18.326 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:18.326 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:18.326 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:18.326 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:18.586 aio_bdev 00:09:18.586 16:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f1fb89df-0d33-4bd2-ba94-57cb618878ac 00:09:18.586 16:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=f1fb89df-0d33-4bd2-ba94-57cb618878ac 00:09:18.586 16:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:18.586 16:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:18.586 16:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:18.586 16:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:18.586 16:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:18.844 16:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f1fb89df-0d33-4bd2-ba94-57cb618878ac -t 2000 00:09:19.102 [ 00:09:19.102 { 00:09:19.102 "name": "f1fb89df-0d33-4bd2-ba94-57cb618878ac", 00:09:19.102 "aliases": [ 00:09:19.102 "lvs/lvol" 00:09:19.102 ], 00:09:19.102 "product_name": "Logical Volume", 00:09:19.102 "block_size": 4096, 00:09:19.102 "num_blocks": 38912, 00:09:19.102 "uuid": "f1fb89df-0d33-4bd2-ba94-57cb618878ac", 00:09:19.102 "assigned_rate_limits": { 00:09:19.102 "rw_ios_per_sec": 0, 00:09:19.102 "rw_mbytes_per_sec": 0, 00:09:19.102 "r_mbytes_per_sec": 0, 00:09:19.102 "w_mbytes_per_sec": 0 00:09:19.102 }, 00:09:19.102 "claimed": false, 00:09:19.102 "zoned": false, 00:09:19.102 "supported_io_types": { 00:09:19.102 "read": true, 00:09:19.102 "write": true, 00:09:19.102 "unmap": true, 00:09:19.102 "flush": false, 00:09:19.102 "reset": true, 00:09:19.102 "nvme_admin": false, 00:09:19.102 "nvme_io": false, 00:09:19.102 "nvme_io_md": false, 00:09:19.102 "write_zeroes": true, 00:09:19.102 "zcopy": false, 00:09:19.102 "get_zone_info": false, 00:09:19.102 "zone_management": false, 00:09:19.102 "zone_append": false, 00:09:19.102 "compare": false, 00:09:19.102 "compare_and_write": false, 
00:09:19.102 "abort": false, 00:09:19.102 "seek_hole": true, 00:09:19.102 "seek_data": true, 00:09:19.102 "copy": false, 00:09:19.102 "nvme_iov_md": false 00:09:19.102 }, 00:09:19.102 "driver_specific": { 00:09:19.102 "lvol": { 00:09:19.102 "lvol_store_uuid": "117f86a5-94dc-4b58-b1af-0314d0fef55c", 00:09:19.102 "base_bdev": "aio_bdev", 00:09:19.102 "thin_provision": false, 00:09:19.102 "num_allocated_clusters": 38, 00:09:19.102 "snapshot": false, 00:09:19.102 "clone": false, 00:09:19.102 "esnap_clone": false 00:09:19.102 } 00:09:19.102 } 00:09:19.102 } 00:09:19.102 ] 00:09:19.102 16:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:19.102 16:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 117f86a5-94dc-4b58-b1af-0314d0fef55c 00:09:19.102 16:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:19.359 16:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:19.359 16:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 117f86a5-94dc-4b58-b1af-0314d0fef55c 00:09:19.359 16:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:19.616 16:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:19.616 16:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f1fb89df-0d33-4bd2-ba94-57cb618878ac 00:09:20.181 16:17:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 117f86a5-94dc-4b58-b1af-0314d0fef55c 00:09:20.181 16:17:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:20.746 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:20.747 00:09:20.747 real 0m22.443s 00:09:20.747 user 0m56.712s 00:09:20.747 sys 0m4.575s 00:09:20.747 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:20.747 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:20.747 ************************************ 00:09:20.747 END TEST lvs_grow_dirty 00:09:20.747 ************************************ 00:09:20.747 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:20.747 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:09:20.747 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:09:20.747 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:20.747 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:20.747 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:20.747 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:20.747 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:20.747 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:20.747 nvmf_trace.0 00:09:20.747 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:09:20.747 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:20.747 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:20.747 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:20.747 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:20.747 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:20.747 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:20.747 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:20.747 rmmod nvme_tcp 00:09:20.747 rmmod nvme_fabrics 00:09:20.747 rmmod nvme_keyring 00:09:20.747 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:20.747 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:20.747 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:20.747 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 3052030 ']' 00:09:20.747 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 3052030 00:09:20.747 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 3052030 ']' 00:09:20.747 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 3052030 
00:09:20.747 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:09:20.747 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:20.747 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3052030 00:09:20.747 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:20.747 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:20.747 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3052030' 00:09:20.747 killing process with pid 3052030 00:09:20.747 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 3052030 00:09:20.747 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 3052030 00:09:22.143 16:17:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:22.143 16:17:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:22.143 16:17:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:22.143 16:17:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:22.143 16:17:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-save 00:09:22.143 16:17:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:22.143 16:17:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-restore 00:09:22.143 16:17:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:22.143 16:17:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:09:22.143 16:17:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.143 16:17:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:22.143 16:17:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.047 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:24.047 00:09:24.047 real 0m49.396s 00:09:24.047 user 1m24.423s 00:09:24.047 sys 0m8.675s 00:09:24.047 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:24.047 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:24.047 ************************************ 00:09:24.047 END TEST nvmf_lvs_grow 00:09:24.047 ************************************ 00:09:24.306 16:17:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:24.306 16:17:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:24.306 16:17:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:24.306 16:17:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:24.306 ************************************ 00:09:24.306 START TEST nvmf_bdev_io_wait 00:09:24.306 ************************************ 00:09:24.306 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:24.306 * Looking for test storage... 
00:09:24.306 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:24.306 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:24.306 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:09:24.306 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:24.306 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:24.306 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:24.306 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:24.306 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:24.306 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.306 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:24.306 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:24.306 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:24.306 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:24.306 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:24.306 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:24.306 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:24.306 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:24.306 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:09:24.306 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:24.306 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:24.306 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:24.306 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:24.306 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:24.306 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:24.306 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:24.306 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:24.306 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:24.306 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:24.306 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:24.306 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:24.306 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:24.306 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:24.306 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:24.306 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:24.306 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:24.307 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.307 --rc genhtml_branch_coverage=1 00:09:24.307 --rc genhtml_function_coverage=1 00:09:24.307 --rc genhtml_legend=1 00:09:24.307 --rc geninfo_all_blocks=1 00:09:24.307 --rc geninfo_unexecuted_blocks=1 00:09:24.307 00:09:24.307 ' 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:24.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.307 --rc genhtml_branch_coverage=1 00:09:24.307 --rc genhtml_function_coverage=1 00:09:24.307 --rc genhtml_legend=1 00:09:24.307 --rc geninfo_all_blocks=1 00:09:24.307 --rc geninfo_unexecuted_blocks=1 00:09:24.307 00:09:24.307 ' 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:24.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.307 --rc genhtml_branch_coverage=1 00:09:24.307 --rc genhtml_function_coverage=1 00:09:24.307 --rc genhtml_legend=1 00:09:24.307 --rc geninfo_all_blocks=1 00:09:24.307 --rc geninfo_unexecuted_blocks=1 00:09:24.307 00:09:24.307 ' 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:24.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.307 --rc genhtml_branch_coverage=1 00:09:24.307 --rc genhtml_function_coverage=1 00:09:24.307 --rc genhtml_legend=1 00:09:24.307 --rc geninfo_all_blocks=1 00:09:24.307 --rc geninfo_unexecuted_blocks=1 00:09:24.307 00:09:24.307 ' 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:24.307 16:17:24 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:24.307 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:24.307 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:09:26.841 16:17:26 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:26.841 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:26.841 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:26.841 
16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:26.841 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:26.841 
16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:26.841 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # is_hw=yes 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:26.841 16:17:26 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:26.841 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:26.842 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:26.842 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:26.842 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:26.842 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:26.842 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:26.842 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:26.842 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:26.842 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:26.842 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:26.842 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:26.842 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:26.842 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:26.842 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:26.842 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:09:26.842 00:09:26.842 --- 10.0.0.2 ping statistics --- 00:09:26.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.842 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:09:26.842 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:26.842 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:26.842 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:09:26.842 00:09:26.842 --- 10.0.0.1 ping statistics --- 00:09:26.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.842 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:09:26.842 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:26.842 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # return 0 00:09:26.842 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:26.842 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:26.842 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:26.842 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:26.842 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:26.842 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:26.842 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:26.842 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:26.842 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:26.842 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:26.842 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:26.842 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=3054836 00:09:26.842 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:26.842 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 3054836 00:09:26.842 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 3054836 ']' 00:09:26.842 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.842 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:26.842 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:26.842 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:26.842 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:26.842 [2024-09-29 16:17:27.230866] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:09:26.842 [2024-09-29 16:17:27.231014] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:26.842 [2024-09-29 16:17:27.368866] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:27.100 [2024-09-29 16:17:27.626086] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:27.100 [2024-09-29 16:17:27.626166] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:27.100 [2024-09-29 16:17:27.626191] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:27.100 [2024-09-29 16:17:27.626215] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:27.100 [2024-09-29 16:17:27.626234] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:27.100 [2024-09-29 16:17:27.626368] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.100 [2024-09-29 16:17:27.626435] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:27.100 [2024-09-29 16:17:27.626534] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.100 [2024-09-29 16:17:27.626542] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:28.034 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:28.034 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:09:28.034 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:28.034 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:28.034 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:28.034 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:28.034 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:28.034 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.034 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:28.034 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.034 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:28.034 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.034 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:09:28.034 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.034 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:28.034 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.034 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:28.034 [2024-09-29 16:17:28.510725] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:28.034 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.034 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:28.034 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.034 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:28.293 Malloc0 00:09:28.293 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.293 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:28.293 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.293 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:28.293 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.293 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:28.293 16:17:28 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.293 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:28.293 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.293 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:28.293 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.293 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:28.293 [2024-09-29 16:17:28.635943] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:28.293 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.293 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3055010 00:09:28.293 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3055013 00:09:28.294 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:28.294 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:28.294 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:28.294 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:28.294 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:28.294 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3055015 00:09:28.294 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:28.294 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:28.294 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:28.294 { 00:09:28.294 "params": { 00:09:28.294 "name": "Nvme$subsystem", 00:09:28.294 "trtype": "$TEST_TRANSPORT", 00:09:28.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:28.294 "adrfam": "ipv4", 00:09:28.294 "trsvcid": "$NVMF_PORT", 00:09:28.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:28.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:28.294 "hdgst": ${hdgst:-false}, 00:09:28.294 "ddgst": ${ddgst:-false} 00:09:28.294 }, 00:09:28.294 "method": "bdev_nvme_attach_controller" 00:09:28.294 } 00:09:28.294 EOF 00:09:28.294 )") 00:09:28.294 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:28.294 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:28.294 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:28.294 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:28.294 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:28.294 { 00:09:28.294 "params": { 00:09:28.294 "name": "Nvme$subsystem", 00:09:28.294 "trtype": "$TEST_TRANSPORT", 00:09:28.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:28.294 "adrfam": "ipv4", 00:09:28.294 
"trsvcid": "$NVMF_PORT", 00:09:28.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:28.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:28.294 "hdgst": ${hdgst:-false}, 00:09:28.294 "ddgst": ${ddgst:-false} 00:09:28.294 }, 00:09:28.294 "method": "bdev_nvme_attach_controller" 00:09:28.294 } 00:09:28.294 EOF 00:09:28.294 )") 00:09:28.294 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3055017 00:09:28.294 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:28.294 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:28.294 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:28.294 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:28.294 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:28.294 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:28.294 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:28.294 { 00:09:28.294 "params": { 00:09:28.294 "name": "Nvme$subsystem", 00:09:28.294 "trtype": "$TEST_TRANSPORT", 00:09:28.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:28.294 "adrfam": "ipv4", 00:09:28.294 "trsvcid": "$NVMF_PORT", 00:09:28.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:28.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:28.294 "hdgst": ${hdgst:-false}, 00:09:28.294 "ddgst": ${ddgst:-false} 00:09:28.294 }, 00:09:28.294 "method": "bdev_nvme_attach_controller" 00:09:28.294 } 00:09:28.294 EOF 00:09:28.294 )") 00:09:28.294 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:28.294 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:28.294 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:28.294 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:28.294 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:28.294 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:28.294 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:28.294 { 00:09:28.294 "params": { 00:09:28.294 "name": "Nvme$subsystem", 00:09:28.294 "trtype": "$TEST_TRANSPORT", 00:09:28.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:28.294 "adrfam": "ipv4", 00:09:28.294 "trsvcid": "$NVMF_PORT", 00:09:28.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:28.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:28.294 "hdgst": ${hdgst:-false}, 00:09:28.294 "ddgst": ${ddgst:-false} 00:09:28.294 }, 00:09:28.294 "method": "bdev_nvme_attach_controller" 00:09:28.294 } 00:09:28.294 EOF 00:09:28.294 )") 00:09:28.294 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:28.294 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3055010 00:09:28.294 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:28.294 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:09:28.294 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:09:28.294 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:09:28.294 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
00:09:28.294 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:28.294 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:28.294 "params": { 00:09:28.294 "name": "Nvme1", 00:09:28.294 "trtype": "tcp", 00:09:28.294 "traddr": "10.0.0.2", 00:09:28.294 "adrfam": "ipv4", 00:09:28.294 "trsvcid": "4420", 00:09:28.294 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:28.294 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:28.294 "hdgst": false, 00:09:28.294 "ddgst": false 00:09:28.294 }, 00:09:28.294 "method": "bdev_nvme_attach_controller" 00:09:28.294 }' 00:09:28.294 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:28.294 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:28.294 "params": { 00:09:28.294 "name": "Nvme1", 00:09:28.294 "trtype": "tcp", 00:09:28.294 "traddr": "10.0.0.2", 00:09:28.294 "adrfam": "ipv4", 00:09:28.294 "trsvcid": "4420", 00:09:28.294 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:28.294 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:28.294 "hdgst": false, 00:09:28.294 "ddgst": false 00:09:28.294 }, 00:09:28.294 "method": "bdev_nvme_attach_controller" 00:09:28.294 }' 00:09:28.294 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:28.294 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:28.294 "params": { 00:09:28.294 "name": "Nvme1", 00:09:28.294 "trtype": "tcp", 00:09:28.294 "traddr": "10.0.0.2", 00:09:28.294 "adrfam": "ipv4", 00:09:28.294 "trsvcid": "4420", 00:09:28.294 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:28.294 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:28.294 "hdgst": false, 00:09:28.294 "ddgst": false 00:09:28.294 }, 00:09:28.294 "method": "bdev_nvme_attach_controller" 00:09:28.294 }' 00:09:28.294 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@581 -- # IFS=, 00:09:28.294 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:28.294 "params": { 00:09:28.294 "name": "Nvme1", 00:09:28.294 "trtype": "tcp", 00:09:28.294 "traddr": "10.0.0.2", 00:09:28.294 "adrfam": "ipv4", 00:09:28.294 "trsvcid": "4420", 00:09:28.294 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:28.294 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:28.294 "hdgst": false, 00:09:28.294 "ddgst": false 00:09:28.294 }, 00:09:28.294 "method": "bdev_nvme_attach_controller" 00:09:28.294 }' 00:09:28.294 [2024-09-29 16:17:28.727288] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:09:28.294 [2024-09-29 16:17:28.727452] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:28.294 [2024-09-29 16:17:28.728444] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:09:28.294 [2024-09-29 16:17:28.728445] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:09:28.294 [2024-09-29 16:17:28.728582] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:28.294 [2024-09-29 16:17:28.728582] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:28.294 [2024-09-29 16:17:28.738758] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:28.294 [2024-09-29 16:17:28.738915] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:28.553 [2024-09-29 16:17:28.971926] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.553 [2024-09-29 16:17:29.073006] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.810 [2024-09-29 16:17:29.148940] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.810 [2024-09-29 16:17:29.195769] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:09:28.810 [2024-09-29 16:17:29.252460] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.810 [2024-09-29 16:17:29.296532] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:09:28.810 [2024-09-29 16:17:29.371206] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:09:29.068 [2024-09-29 16:17:29.479771] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:09:29.327 Running I/O for 1 seconds... 00:09:29.327 Running I/O for 1 seconds... 00:09:29.586 Running I/O for 1 seconds... 00:09:29.586 Running I/O for 1 seconds... 
00:09:30.521 5581.00 IOPS, 21.80 MiB/s 00:09:30.521 Latency(us) 00:09:30.521 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:30.521 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:30.521 Nvme1n1 : 1.04 5515.47 21.54 0.00 0.00 22682.06 4223.43 52817.16 00:09:30.521 =================================================================================================================== 00:09:30.521 Total : 5515.47 21.54 0.00 0.00 22682.06 4223.43 52817.16 00:09:30.521 6858.00 IOPS, 26.79 MiB/s 00:09:30.521 Latency(us) 00:09:30.521 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:30.521 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:30.521 Nvme1n1 : 1.01 6901.20 26.96 0.00 0.00 18430.37 7864.32 27185.30 00:09:30.521 =================================================================================================================== 00:09:30.521 Total : 6901.20 26.96 0.00 0.00 18430.37 7864.32 27185.30 00:09:30.521 148080.00 IOPS, 578.44 MiB/s 00:09:30.521 Latency(us) 00:09:30.521 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:30.521 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:30.521 Nvme1n1 : 1.00 147739.34 577.11 0.00 0.00 861.92 631.09 2281.62 00:09:30.521 =================================================================================================================== 00:09:30.521 Total : 147739.34 577.11 0.00 0.00 861.92 631.09 2281.62 00:09:30.779 4850.00 IOPS, 18.95 MiB/s 00:09:30.779 Latency(us) 00:09:30.779 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:30.779 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:30.779 Nvme1n1 : 1.01 4935.30 19.28 0.00 0.00 25787.01 9563.40 56312.41 00:09:30.779 =================================================================================================================== 00:09:30.779 Total 
: 4935.30 19.28 0.00 0.00 25787.01 9563.40 56312.41 00:09:31.712 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3055013 00:09:31.712 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3055015 00:09:31.712 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3055017 00:09:31.712 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:31.712 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.712 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:31.712 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.712 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:31.712 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:31.712 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:31.712 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:31.712 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:31.712 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:31.712 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:31.712 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:31.712 rmmod nvme_tcp 00:09:31.712 rmmod nvme_fabrics 00:09:31.712 rmmod nvme_keyring 00:09:31.712 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:09:31.712 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:31.712 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:31.712 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 3054836 ']' 00:09:31.712 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 3054836 00:09:31.712 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 3054836 ']' 00:09:31.712 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 3054836 00:09:31.712 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:09:31.712 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:31.713 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3054836 00:09:31.713 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:31.713 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:31.713 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3054836' 00:09:31.713 killing process with pid 3054836 00:09:31.713 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 3054836 00:09:31.713 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 3054836 00:09:33.089 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:33.089 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:33.089 16:17:33 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:33.089 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:33.089 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-save 00:09:33.089 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:33.089 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-restore 00:09:33.089 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:33.089 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:33.089 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.089 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:33.089 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.994 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:34.994 00:09:34.994 real 0m10.815s 00:09:34.994 user 0m32.509s 00:09:34.994 sys 0m4.599s 00:09:34.994 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:34.994 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.994 ************************************ 00:09:34.994 END TEST nvmf_bdev_io_wait 00:09:34.994 ************************************ 00:09:34.994 16:17:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:34.994 16:17:35 nvmf_tcp.nvmf_target_core 
-- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:34.994 16:17:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:34.994 16:17:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:34.994 ************************************ 00:09:34.994 START TEST nvmf_queue_depth 00:09:34.994 ************************************ 00:09:34.994 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:35.287 * Looking for test storage... 00:09:35.287 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:35.287 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:35.287 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:09:35.287 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:35.287 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:35.287 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:35.287 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:35.287 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:35.287 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:35.287 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:35.287 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:35.287 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:35.287 
16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:35.287 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:35.287 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:35.287 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:35.287 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:35.287 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:35.287 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:35.287 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:35.288 16:17:35 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:35.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.288 --rc genhtml_branch_coverage=1 00:09:35.288 --rc genhtml_function_coverage=1 00:09:35.288 --rc genhtml_legend=1 00:09:35.288 --rc geninfo_all_blocks=1 00:09:35.288 --rc geninfo_unexecuted_blocks=1 00:09:35.288 00:09:35.288 ' 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:35.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.288 --rc genhtml_branch_coverage=1 00:09:35.288 --rc genhtml_function_coverage=1 00:09:35.288 --rc genhtml_legend=1 00:09:35.288 --rc geninfo_all_blocks=1 00:09:35.288 --rc geninfo_unexecuted_blocks=1 00:09:35.288 00:09:35.288 ' 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:35.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.288 --rc genhtml_branch_coverage=1 00:09:35.288 --rc genhtml_function_coverage=1 00:09:35.288 --rc genhtml_legend=1 00:09:35.288 --rc geninfo_all_blocks=1 00:09:35.288 --rc geninfo_unexecuted_blocks=1 00:09:35.288 00:09:35.288 ' 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:35.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.288 --rc 
genhtml_branch_coverage=1 00:09:35.288 --rc genhtml_function_coverage=1 00:09:35.288 --rc genhtml_legend=1 00:09:35.288 --rc geninfo_all_blocks=1 00:09:35.288 --rc geninfo_unexecuted_blocks=1 00:09:35.288 00:09:35.288 ' 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:35.288 16:17:35 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:35.288 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:35.288 16:17:35 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:35.288 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:37.215 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:37.215 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 
00:09:37.215 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:37.215 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:37.215 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:37.215 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:37.215 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:37.215 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:37.215 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:37.215 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:37.215 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:37.215 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:37.215 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:37.215 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:37.215 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:37.215 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:37.215 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:37.215 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:37.215 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:37.215 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:37.215 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:37.215 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:37.215 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:37.215 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:37.215 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:37.215 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:37.215 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:37.215 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:09:37.215 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:09:37.215 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:09:37.215 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:09:37.215 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:09:37.215 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:37.215 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:37.215 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:37.215 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:37.215 16:17:37 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:37.215 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:37.215 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:37.215 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:37.215 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:37.215 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:37.215 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:37.215 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 
-- # [[ tcp == tcp ]] 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:37.216 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:37.216 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 
00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # is_hw=yes 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr 
flush cvl_0_0 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:37.216 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:37.216 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:09:37.216 00:09:37.216 --- 10.0.0.2 ping statistics --- 00:09:37.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.216 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:37.216 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:37.216 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:09:37.216 00:09:37.216 --- 10.0.0.1 ping statistics --- 00:09:37.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.216 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # return 0 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:37.216 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:37.474 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:37.474 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@503 -- # 
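The trace above (nvmf/common.sh's `nvmf_tcp_init`) isolates one NIC port in a network namespace so that target and initiator traffic traverses real hardware rather than loopback. A minimal sketch of the same pattern, reconstructed from the commands visible in the log — interface names (`cvl_0_0`/`cvl_0_1`) and addresses mirror this run, and everything must execute as root on a host with the two ports wired back-to-back:

```shell
# Sketch of the netns setup shown in the trace above (requires root).
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.2                                 # sanity-check both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

The one-packet pings are the gate for `return 0` from `nvmf_tcp_init`; if either fails, the rest of the test cannot reach the target at 10.0.0.2:4420.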
timing_enter start_nvmf_tgt 00:09:37.474 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:37.474 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:37.474 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=3057612 00:09:37.474 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:37.474 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 3057612 00:09:37.474 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3057612 ']' 00:09:37.474 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.474 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:37.474 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.474 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:37.474 16:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:37.474 [2024-09-29 16:17:37.894369] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:37.474 [2024-09-29 16:17:37.894509] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:37.474 [2024-09-29 16:17:38.029576] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.733 [2024-09-29 16:17:38.282055] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:37.733 [2024-09-29 16:17:38.282153] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:37.733 [2024-09-29 16:17:38.282180] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:37.733 [2024-09-29 16:17:38.282204] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:37.733 [2024-09-29 16:17:38.282224] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:37.733 [2024-09-29 16:17:38.282278] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.668 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:38.668 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:38.668 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:38.668 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:38.668 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:38.668 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:38.668 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:38.668 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.668 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:38.668 [2024-09-29 16:17:38.893823] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:38.668 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.668 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:38.668 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.668 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:38.668 Malloc0 00:09:38.668 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.668 16:17:38 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:38.668 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.668 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:38.668 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.668 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:38.668 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.668 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:38.668 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.668 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:38.668 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.668 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:38.668 [2024-09-29 16:17:39.018114] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:38.668 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.668 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3057778 00:09:38.668 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 
1024 -o 4096 -w verify -t 10 00:09:38.668 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:38.668 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3057778 /var/tmp/bdevperf.sock 00:09:38.668 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3057778 ']' 00:09:38.668 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:38.668 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:38.668 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:38.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:38.668 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:38.668 16:17:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:38.668 [2024-09-29 16:17:39.106042] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:38.668 [2024-09-29 16:17:39.106190] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3057778 ] 00:09:38.926 [2024-09-29 16:17:39.241738] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.184 [2024-09-29 16:17:39.498042] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.751 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:39.751 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:39.751 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:39.751 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.751 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:40.008 NVMe0n1 00:09:40.008 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.008 16:17:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:40.008 Running I/O for 10 seconds... 
00:09:50.224 5776.00 IOPS, 22.56 MiB/s 5768.50 IOPS, 22.53 MiB/s 5882.67 IOPS, 22.98 MiB/s 5916.75 IOPS, 23.11 MiB/s 5975.20 IOPS, 23.34 MiB/s 6000.17 IOPS, 23.44 MiB/s 6014.71 IOPS, 23.49 MiB/s 6018.88 IOPS, 23.51 MiB/s 6045.44 IOPS, 23.62 MiB/s 6040.10 IOPS, 23.59 MiB/s 00:09:50.224 Latency(us) 00:09:50.224 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:50.224 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:50.224 Verification LBA range: start 0x0 length 0x4000 00:09:50.224 NVMe0n1 : 10.11 6076.43 23.74 0.00 0.00 167598.50 27767.85 103304.15 00:09:50.224 =================================================================================================================== 00:09:50.224 Total : 6076.43 23.74 0.00 0.00 167598.50 27767.85 103304.15 00:09:50.224 { 00:09:50.224 "results": [ 00:09:50.224 { 00:09:50.224 "job": "NVMe0n1", 00:09:50.224 "core_mask": "0x1", 00:09:50.224 "workload": "verify", 00:09:50.224 "status": "finished", 00:09:50.224 "verify_range": { 00:09:50.224 "start": 0, 00:09:50.224 "length": 16384 00:09:50.224 }, 00:09:50.224 "queue_depth": 1024, 00:09:50.224 "io_size": 4096, 00:09:50.224 "runtime": 10.106587, 00:09:50.224 "iops": 6076.433122279559, 00:09:50.224 "mibps": 23.736066883904527, 00:09:50.224 "io_failed": 0, 00:09:50.224 "io_timeout": 0, 00:09:50.224 "avg_latency_us": 167598.49863522872, 00:09:50.224 "min_latency_us": 27767.845925925925, 00:09:50.224 "max_latency_us": 103304.15407407407 00:09:50.224 } 00:09:50.224 ], 00:09:50.224 "core_count": 1 00:09:50.224 } 00:09:50.224 16:17:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3057778 00:09:50.224 16:17:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3057778 ']' 00:09:50.224 16:17:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3057778 00:09:50.224 16:17:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
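Stripped of the xtrace noise, the test body above reduces to a short sequence of SPDK JSON-RPC calls plus one bdevperf run at queue depth 1024. A hedged reconstruction using the same commands and arguments that appear in the trace — `rpc.py`, `bdevperf`, and `bdevperf.py` stand in for the full workspace paths, and `nvmf_tgt` is assumed to already be running inside the namespace as shown earlier:

```shell
# Sketch of the queue_depth.sh flow visible in the trace (paths abbreviated).
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# bdevperf in the root namespace acts as the initiator:
bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
```

The JSON summary in the log (`"queue_depth": 1024`, `"io_size": 4096`, ~6076 IOPS over a 10.1 s runtime) is `perform_tests`' report for exactly this configuration.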
-- common/autotest_common.sh@955 -- # uname 00:09:50.224 16:17:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:50.224 16:17:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3057778 00:09:50.224 16:17:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:50.224 16:17:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:50.224 16:17:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3057778' 00:09:50.224 killing process with pid 3057778 00:09:50.224 16:17:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3057778 00:09:50.224 Received shutdown signal, test time was about 10.000000 seconds 00:09:50.224 00:09:50.224 Latency(us) 00:09:50.224 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:50.224 =================================================================================================================== 00:09:50.224 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:50.224 16:17:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3057778 00:09:51.597 16:17:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:51.597 16:17:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:51.597 16:17:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:51.598 16:17:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:51.598 16:17:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:51.598 16:17:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 
-- # set +e 00:09:51.598 16:17:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:51.598 16:17:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:51.598 rmmod nvme_tcp 00:09:51.598 rmmod nvme_fabrics 00:09:51.598 rmmod nvme_keyring 00:09:51.598 16:17:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:51.598 16:17:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:51.598 16:17:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:51.598 16:17:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 3057612 ']' 00:09:51.598 16:17:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # killprocess 3057612 00:09:51.598 16:17:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3057612 ']' 00:09:51.598 16:17:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3057612 00:09:51.598 16:17:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:51.598 16:17:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:51.598 16:17:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3057612 00:09:51.598 16:17:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:51.598 16:17:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:51.598 16:17:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3057612' 00:09:51.598 killing process with pid 3057612 00:09:51.598 16:17:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@969 -- # kill 3057612 00:09:51.598 16:17:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3057612 00:09:52.972 16:17:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:52.972 16:17:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:52.972 16:17:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:52.972 16:17:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:52.972 16:17:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:52.972 16:17:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-save 00:09:52.972 16:17:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-restore 00:09:52.972 16:17:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:52.972 16:17:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:52.972 16:17:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.972 16:17:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:52.972 16:17:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:55.505 00:09:55.505 real 0m19.958s 00:09:55.505 user 0m28.501s 00:09:55.505 sys 0m3.353s 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 
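The teardown trace above (`nvmftestfini` via the EXIT trap) can be sketched as follows. The module removals and the `iptables-save | grep -v | iptables-restore` pipeline are taken directly from the log; `ip netns delete` is an assumption standing in for the internal `_remove_spdk_ns` helper, whose body is suppressed here (`15> /dev/null`):

```shell
# Hedged sketch of nvmftestfini's cleanup, mirroring the trace above.
set +e                            # module removal may fail if still in use
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
set -e
kill "$nvmfpid"                   # the nvmf_tgt started at test entry
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the tagged test rule
ip netns delete cvl_0_0_ns_spdk   # assumed: returns cvl_0_0 to the root namespace
ip -4 addr flush cvl_0_1
```

Tagging the iptables rule with an `SPDK_NVMF` comment at insertion time is what makes this restore-by-exclusion cleanup possible without tracking rule positions.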
00:09:55.505 ************************************ 00:09:55.505 END TEST nvmf_queue_depth 00:09:55.505 ************************************ 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:55.505 ************************************ 00:09:55.505 START TEST nvmf_target_multipath 00:09:55.505 ************************************ 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:55.505 * Looking for test storage... 
00:09:55.505 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:55.505 16:17:55 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:55.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.505 --rc genhtml_branch_coverage=1 00:09:55.505 --rc genhtml_function_coverage=1 00:09:55.505 --rc genhtml_legend=1 00:09:55.505 --rc geninfo_all_blocks=1 00:09:55.505 --rc geninfo_unexecuted_blocks=1 00:09:55.505 00:09:55.505 ' 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:55.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.505 --rc genhtml_branch_coverage=1 00:09:55.505 --rc genhtml_function_coverage=1 00:09:55.505 --rc genhtml_legend=1 00:09:55.505 --rc geninfo_all_blocks=1 00:09:55.505 --rc geninfo_unexecuted_blocks=1 00:09:55.505 00:09:55.505 ' 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:55.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.505 --rc genhtml_branch_coverage=1 00:09:55.505 --rc genhtml_function_coverage=1 00:09:55.505 --rc genhtml_legend=1 00:09:55.505 --rc geninfo_all_blocks=1 00:09:55.505 --rc geninfo_unexecuted_blocks=1 00:09:55.505 00:09:55.505 ' 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:55.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.505 --rc genhtml_branch_coverage=1 00:09:55.505 --rc genhtml_function_coverage=1 00:09:55.505 --rc genhtml_legend=1 00:09:55.505 --rc geninfo_all_blocks=1 00:09:55.505 --rc geninfo_unexecuted_blocks=1 00:09:55.505 00:09:55.505 ' 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:55.505 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:55.506 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:55.506 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:55.506 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:55.506 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:55.506 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:55.506 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.506 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.506 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.506 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:55.506 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.506 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:55.506 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:55.506 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:55.506 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:55.506 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:55.506 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:55.506 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:55.506 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:55.506 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:55.506 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:55.506 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:55.506 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:55.506 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:55.506 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:55.506 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:55.506 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:55.506 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:55.506 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:55.506 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:55.506 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:55.506 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:55.506 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.506 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.506 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.506 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:55.506 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:55.506 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:55.506 16:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@359 -- # (( 
2 == 0 )) 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:57.410 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:57.410 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:57.410 16:17:57 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:57.410 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:57.410 16:17:57 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:57.410 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # is_hw=yes 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:57.410 
16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:57.410 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p 
tcp --dport 4420 -j ACCEPT 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:57.411 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:57.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:09:57.411 00:09:57.411 --- 10.0.0.2 ping statistics --- 00:09:57.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.411 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:57.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:57.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:09:57.411 00:09:57.411 --- 10.0.0.1 ping statistics --- 00:09:57.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.411 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # return 0 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:57.411 only one NIC for nvmf test 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:57.411 rmmod nvme_tcp 00:09:57.411 rmmod nvme_fabrics 00:09:57.411 rmmod nvme_keyring 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == 
iso ']' 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.411 16:17:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.943 16:17:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:59.943 16:17:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:59.943 16:17:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:59.943 16:17:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:59.943 16:17:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:59.943 16:17:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:09:59.943 16:17:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:59.943 16:17:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:59.943 16:17:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:59.943 16:17:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:59.943 16:17:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:59.943 16:17:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:59.943 16:17:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:09:59.943 16:17:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:59.943 16:17:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:59.943 16:17:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:59.943 16:17:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:59.943 16:17:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:09:59.943 16:17:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:59.943 16:17:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:09:59.943 16:17:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:59.943 16:17:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:59.943 16:17:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:09:59.943 16:17:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:59.943 16:17:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.943 16:17:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:59.943 00:09:59.943 real 0m4.401s 00:09:59.943 user 0m0.884s 00:09:59.943 sys 0m1.524s 00:09:59.943 16:17:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:59.943 16:17:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:59.943 ************************************ 00:09:59.944 END TEST nvmf_target_multipath 00:09:59.944 ************************************ 00:09:59.944 16:17:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:59.944 16:17:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:59.944 16:17:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:59.944 16:17:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:59.944 ************************************ 00:09:59.944 START TEST nvmf_zcopy 00:09:59.944 ************************************ 00:09:59.944 16:17:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:59.944 * Looking for test storage... 
00:09:59.944 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:59.944 
16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:59.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.944 --rc genhtml_branch_coverage=1 00:09:59.944 --rc genhtml_function_coverage=1 00:09:59.944 --rc genhtml_legend=1 00:09:59.944 --rc geninfo_all_blocks=1 00:09:59.944 --rc 
geninfo_unexecuted_blocks=1 00:09:59.944 00:09:59.944 ' 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:59.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.944 --rc genhtml_branch_coverage=1 00:09:59.944 --rc genhtml_function_coverage=1 00:09:59.944 --rc genhtml_legend=1 00:09:59.944 --rc geninfo_all_blocks=1 00:09:59.944 --rc geninfo_unexecuted_blocks=1 00:09:59.944 00:09:59.944 ' 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:59.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.944 --rc genhtml_branch_coverage=1 00:09:59.944 --rc genhtml_function_coverage=1 00:09:59.944 --rc genhtml_legend=1 00:09:59.944 --rc geninfo_all_blocks=1 00:09:59.944 --rc geninfo_unexecuted_blocks=1 00:09:59.944 00:09:59.944 ' 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:59.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.944 --rc genhtml_branch_coverage=1 00:09:59.944 --rc genhtml_function_coverage=1 00:09:59.944 --rc genhtml_legend=1 00:09:59.944 --rc geninfo_all_blocks=1 00:09:59.944 --rc geninfo_unexecuted_blocks=1 00:09:59.944 00:09:59.944 ' 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:59.944 16:18:00 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.944 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.945 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.945 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:59.945 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.945 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:59.945 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:59.945 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:59.945 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:59.945 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:59.945 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:59.945 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:59.945 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:59.945 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:59.945 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:59.945 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:59.945 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:59.945 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:59.945 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:59.945 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:59.945 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:59.945 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:59.945 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.945 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:59.945 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.945 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:59.945 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:59.945 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:59.945 16:18:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # 
set +x 00:10:01.846 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:01.846 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:01.846 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:01.846 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:01.846 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:01.846 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:01.846 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:01.846 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:01.846 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:01.847 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice 
== unknown ]] 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:01.847 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:01.847 16:18:02 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:01.847 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:01.847 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:01.847 16:18:02 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # is_hw=yes 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:01.847 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:02.106 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:02.106 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:02.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:02.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms 00:10:02.106 00:10:02.106 --- 10.0.0.2 ping statistics --- 00:10:02.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.106 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:10:02.106 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:02.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:02.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:10:02.106 00:10:02.106 --- 10.0.0.1 ping statistics --- 00:10:02.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.106 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:10:02.106 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:02.106 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # return 0 00:10:02.106 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:02.106 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:02.106 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:02.106 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:02.106 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:02.106 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:02.106 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:02.107 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:02.107 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:02.107 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:02.107 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:02.107 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # nvmfpid=3063372 00:10:02.107 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:10:02.107 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 3063372 00:10:02.107 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 3063372 ']' 00:10:02.107 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.107 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:02.107 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.107 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:02.107 16:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:02.107 [2024-09-29 16:18:02.550005] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:10:02.107 [2024-09-29 16:18:02.550168] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:02.365 [2024-09-29 16:18:02.693483] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.624 [2024-09-29 16:18:02.947354] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:02.624 [2024-09-29 16:18:02.947424] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:02.624 [2024-09-29 16:18:02.947449] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:02.624 [2024-09-29 16:18:02.947473] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:02.624 [2024-09-29 16:18:02.947493] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:02.624 [2024-09-29 16:18:02.947551] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:03.189 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:03.189 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:10:03.189 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:03.189 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:03.189 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:03.189 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:03.189 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:03.189 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:03.189 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.189 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:03.189 [2024-09-29 16:18:03.605200] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:03.189 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.189 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:03.189 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.189 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:03.189 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.189 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:03.189 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.190 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:03.190 [2024-09-29 16:18:03.621468] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:03.190 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.190 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:03.190 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.190 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:03.190 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.190 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:03.190 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.190 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:03.190 malloc0 00:10:03.190 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:10:03.190 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:03.190 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.190 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:03.190 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.190 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:03.190 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:03.190 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:10:03.190 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:10:03.190 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:03.190 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:03.190 { 00:10:03.190 "params": { 00:10:03.190 "name": "Nvme$subsystem", 00:10:03.190 "trtype": "$TEST_TRANSPORT", 00:10:03.190 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:03.190 "adrfam": "ipv4", 00:10:03.190 "trsvcid": "$NVMF_PORT", 00:10:03.190 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:03.190 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:03.190 "hdgst": ${hdgst:-false}, 00:10:03.190 "ddgst": ${ddgst:-false} 00:10:03.190 }, 00:10:03.190 "method": "bdev_nvme_attach_controller" 00:10:03.190 } 00:10:03.190 EOF 00:10:03.190 )") 00:10:03.190 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:10:03.190 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 
00:10:03.190 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:10:03.190 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:03.190 "params": { 00:10:03.190 "name": "Nvme1", 00:10:03.190 "trtype": "tcp", 00:10:03.190 "traddr": "10.0.0.2", 00:10:03.190 "adrfam": "ipv4", 00:10:03.190 "trsvcid": "4420", 00:10:03.190 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:03.190 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:03.190 "hdgst": false, 00:10:03.190 "ddgst": false 00:10:03.190 }, 00:10:03.190 "method": "bdev_nvme_attach_controller" 00:10:03.190 }' 00:10:03.448 [2024-09-29 16:18:03.800434] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:10:03.448 [2024-09-29 16:18:03.800576] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3063525 ] 00:10:03.448 [2024-09-29 16:18:03.952997] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.706 [2024-09-29 16:18:04.214213] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.273 Running I/O for 10 seconds... 
00:10:14.547 4159.00 IOPS, 32.49 MiB/s 4188.50 IOPS, 32.72 MiB/s 4195.33 IOPS, 32.78 MiB/s 4207.50 IOPS, 32.87 MiB/s 4208.60 IOPS, 32.88 MiB/s 4214.83 IOPS, 32.93 MiB/s 4217.57 IOPS, 32.95 MiB/s 4223.75 IOPS, 33.00 MiB/s 4222.00 IOPS, 32.98 MiB/s 4227.20 IOPS, 33.02 MiB/s 00:10:14.547 Latency(us) 00:10:14.547 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:14.547 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:14.547 Verification LBA range: start 0x0 length 0x1000 00:10:14.547 Nvme1n1 : 10.02 4229.83 33.05 0.00 0.00 30179.41 4757.43 41166.32 00:10:14.547 =================================================================================================================== 00:10:14.547 Total : 4229.83 33.05 0.00 0.00 30179.41 4757.43 41166.32 00:10:15.482 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3065487 00:10:15.482 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:15.482 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:15.482 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:15.482 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:15.482 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:10:15.482 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:10:15.482 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:15.482 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:15.482 { 00:10:15.482 "params": { 00:10:15.482 "name": "Nvme$subsystem", 00:10:15.482 "trtype": "$TEST_TRANSPORT", 00:10:15.482 
"traddr": "$NVMF_FIRST_TARGET_IP",
00:10:15.482 "adrfam": "ipv4",
00:10:15.482 "trsvcid": "$NVMF_PORT",
00:10:15.482 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:10:15.482 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:10:15.482 "hdgst": ${hdgst:-false},
00:10:15.482 "ddgst": ${ddgst:-false}
00:10:15.482 },
00:10:15.482 "method": "bdev_nvme_attach_controller"
00:10:15.482 }
00:10:15.482 EOF
00:10:15.482 )")
00:10:15.482 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat
00:10:15.482 [2024-09-29 16:18:15.893271] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:15.482 [2024-09-29 16:18:15.893333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:15.482 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq .
00:10:15.482 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=,
00:10:15.482 16:18:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{
00:10:15.482 "params": {
00:10:15.482 "name": "Nvme1",
00:10:15.482 "trtype": "tcp",
00:10:15.482 "traddr": "10.0.0.2",
00:10:15.482 "adrfam": "ipv4",
00:10:15.482 "trsvcid": "4420",
00:10:15.482 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:10:15.482 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:10:15.482 "hdgst": false,
00:10:15.482 "ddgst": false
00:10:15.482 },
00:10:15.482 "method": "bdev_nvme_attach_controller"
00:10:15.482 }'
00:10:15.482 [2024-09-29 16:18:15.982045] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:10:15.482 [2024-09-29 16:18:15.982167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3065487 ]
00:10:15.482 [2024-09-29 16:18:16.029555] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:15.482 [2024-09-29
16:18:16.029589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:15.741 [2024-09-29 16:18:16.138577] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:16.001 [2024-09-29 16:18:16.422688] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext:
*ERROR*: Requested NSID 1 already in use
00:10:16.001 [2024-09-29 16:18:16.422721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:16.001 [2024-09-29 16:18:16.425843] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:10:16.520 [2024-09-29 16:18:16.880008]
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:16.520 [2024-09-29 16:18:16.880045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:16.520 Running I/O for 5 seconds...
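The bdevperf summary above reports 4229.83 IOPS in the Total row at an IO size of 8192 bytes (the `-o 8192` flag), which should correspond to the 33.05 MiB/s figure in the same row. A quick sanity check of that arithmetic (illustrative only, not part of the test scripts):

```python
# Check the MiB/s column of the bdevperf summary:
# throughput = IOPS * IO size, converted from bytes to MiB (1 MiB = 1048576 B).
IO_SIZE = 8192      # bytes, from "-o 8192" on the bdevperf command line
iops = 4229.83      # "Total" row of the summary table

mib_per_s = iops * IO_SIZE / (1024 * 1024)  # 8192 B is exactly 1/128 MiB
print(round(mib_per_s, 2))                  # -> 33.05
```

The same conversion reproduces the per-second progress samples as well, e.g. 4227.20 IOPS / 128 = 33.02 MiB/s.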
00:10:16.520 [2024-09-29 16:18:16.970217] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:16.520 [2024-09-29 16:18:16.970259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:16.779 [2024-09-29 16:18:17.324163] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-09-29 16:18:17.324216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.779 [2024-09-29 16:18:17.338550] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.779 [2024-09-29 16:18:17.338603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.038 [2024-09-29 16:18:17.353378] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.038 [2024-09-29 16:18:17.353431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.038 [2024-09-29 16:18:17.368631] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.038 [2024-09-29 16:18:17.368681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.038 [2024-09-29 16:18:17.383291] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.038 [2024-09-29 16:18:17.383345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.038 [2024-09-29 16:18:17.397715] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.038 [2024-09-29 16:18:17.397752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.038 [2024-09-29 16:18:17.412161] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.038 [2024-09-29 16:18:17.412196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.038 [2024-09-29 16:18:17.427186] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.038 [2024-09-29 16:18:17.427239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.038 [2024-09-29 16:18:17.442489] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.038 [2024-09-29 16:18:17.442542] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.038 [2024-09-29 16:18:17.456943] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.038 [2024-09-29 16:18:17.456994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.038 [2024-09-29 16:18:17.471247] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.038 [2024-09-29 16:18:17.471309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.038 [2024-09-29 16:18:17.485754] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.038 [2024-09-29 16:18:17.485791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.038 [2024-09-29 16:18:17.499629] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.038 [2024-09-29 16:18:17.499693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.038 [2024-09-29 16:18:17.514052] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.038 [2024-09-29 16:18:17.514103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.038 [2024-09-29 16:18:17.528738] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.038 [2024-09-29 16:18:17.528775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.038 [2024-09-29 16:18:17.543980] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.038 [2024-09-29 16:18:17.544033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.038 [2024-09-29 16:18:17.557660] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.038 [2024-09-29 16:18:17.557725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:17.038 [2024-09-29 16:18:17.571713] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.038 [2024-09-29 16:18:17.571748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.038 [2024-09-29 16:18:17.586655] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.038 [2024-09-29 16:18:17.586700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.296 [2024-09-29 16:18:17.602168] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.296 [2024-09-29 16:18:17.602209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.296 [2024-09-29 16:18:17.616562] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.296 [2024-09-29 16:18:17.616603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.296 [2024-09-29 16:18:17.632088] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.296 [2024-09-29 16:18:17.632139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.296 [2024-09-29 16:18:17.646852] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.296 [2024-09-29 16:18:17.646889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.296 [2024-09-29 16:18:17.662051] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.296 [2024-09-29 16:18:17.662086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.296 [2024-09-29 16:18:17.674856] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.296 [2024-09-29 16:18:17.674892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.296 [2024-09-29 16:18:17.689142] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.296 [2024-09-29 16:18:17.689193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.296 [2024-09-29 16:18:17.703498] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.296 [2024-09-29 16:18:17.703550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.296 [2024-09-29 16:18:17.718980] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.296 [2024-09-29 16:18:17.719031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.296 [2024-09-29 16:18:17.733935] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.296 [2024-09-29 16:18:17.733986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.296 [2024-09-29 16:18:17.749543] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.296 [2024-09-29 16:18:17.749601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.296 [2024-09-29 16:18:17.764741] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.296 [2024-09-29 16:18:17.764779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.296 [2024-09-29 16:18:17.779703] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.296 [2024-09-29 16:18:17.779739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.296 [2024-09-29 16:18:17.794835] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.296 [2024-09-29 16:18:17.794872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.296 [2024-09-29 16:18:17.809396] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:17.296 [2024-09-29 16:18:17.809448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.296 [2024-09-29 16:18:17.824815] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.296 [2024-09-29 16:18:17.824851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.296 [2024-09-29 16:18:17.839946] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.296 [2024-09-29 16:18:17.840000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.296 [2024-09-29 16:18:17.855117] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.296 [2024-09-29 16:18:17.855171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.555 [2024-09-29 16:18:17.869737] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.555 [2024-09-29 16:18:17.869774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.555 [2024-09-29 16:18:17.884725] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.555 [2024-09-29 16:18:17.884761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.555 [2024-09-29 16:18:17.899440] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.555 [2024-09-29 16:18:17.899475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.555 [2024-09-29 16:18:17.913961] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.555 [2024-09-29 16:18:17.914013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.555 [2024-09-29 16:18:17.929067] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.555 
[2024-09-29 16:18:17.929120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.555 [2024-09-29 16:18:17.943387] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.555 [2024-09-29 16:18:17.943437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.555 [2024-09-29 16:18:17.958426] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.555 [2024-09-29 16:18:17.958477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.555 8533.00 IOPS, 66.66 MiB/s [2024-09-29 16:18:17.973490] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.555 [2024-09-29 16:18:17.973542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.555 [2024-09-29 16:18:17.988923] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.555 [2024-09-29 16:18:17.988975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.555 [2024-09-29 16:18:18.003971] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.555 [2024-09-29 16:18:18.004008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.555 [2024-09-29 16:18:18.018870] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.555 [2024-09-29 16:18:18.018907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.555 [2024-09-29 16:18:18.034018] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.555 [2024-09-29 16:18:18.034055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.555 [2024-09-29 16:18:18.049539] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.555 [2024-09-29 16:18:18.049581] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.555 [2024-09-29 16:18:18.065234] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.555 [2024-09-29 16:18:18.065285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.555 [2024-09-29 16:18:18.079906] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.555 [2024-09-29 16:18:18.079943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.555 [2024-09-29 16:18:18.094555] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.555 [2024-09-29 16:18:18.094590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.555 [2024-09-29 16:18:18.109989] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.555 [2024-09-29 16:18:18.110042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.813 [2024-09-29 16:18:18.124539] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.813 [2024-09-29 16:18:18.124575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.813 [2024-09-29 16:18:18.139321] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.813 [2024-09-29 16:18:18.139359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.813 [2024-09-29 16:18:18.153932] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.813 [2024-09-29 16:18:18.153984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.813 [2024-09-29 16:18:18.168544] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.813 [2024-09-29 16:18:18.168581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:17.813 [2024-09-29 16:18:18.183185] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.813 [2024-09-29 16:18:18.183222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.813 [2024-09-29 16:18:18.198019] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.813 [2024-09-29 16:18:18.198066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.813 [2024-09-29 16:18:18.212375] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.813 [2024-09-29 16:18:18.212412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.813 [2024-09-29 16:18:18.226461] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.813 [2024-09-29 16:18:18.226497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.813 [2024-09-29 16:18:18.240989] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.813 [2024-09-29 16:18:18.241027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.813 [2024-09-29 16:18:18.255013] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.813 [2024-09-29 16:18:18.255050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.813 [2024-09-29 16:18:18.269641] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.813 [2024-09-29 16:18:18.269686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.813 [2024-09-29 16:18:18.284357] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.813 [2024-09-29 16:18:18.284410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.813 [2024-09-29 16:18:18.299178] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.813 [2024-09-29 16:18:18.299214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.813 [2024-09-29 16:18:18.312381] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.813 [2024-09-29 16:18:18.312417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.813 [2024-09-29 16:18:18.326632] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.813 [2024-09-29 16:18:18.326668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.813 [2024-09-29 16:18:18.341151] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.813 [2024-09-29 16:18:18.341201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.813 [2024-09-29 16:18:18.356057] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.813 [2024-09-29 16:18:18.356094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.813 [2024-09-29 16:18:18.370817] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.813 [2024-09-29 16:18:18.370853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.071 [2024-09-29 16:18:18.385018] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.071 [2024-09-29 16:18:18.385070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.071 [2024-09-29 16:18:18.399160] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.071 [2024-09-29 16:18:18.399197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.071 [2024-09-29 16:18:18.413391] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:18.071 [2024-09-29 16:18:18.413443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.071 [2024-09-29 16:18:18.427815] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.071 [2024-09-29 16:18:18.427851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.071 [2024-09-29 16:18:18.441957] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.071 [2024-09-29 16:18:18.442009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.071 [2024-09-29 16:18:18.456652] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.071 [2024-09-29 16:18:18.456697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.071 [2024-09-29 16:18:18.471155] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.071 [2024-09-29 16:18:18.471191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.071 [2024-09-29 16:18:18.486835] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.072 [2024-09-29 16:18:18.486871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.072 [2024-09-29 16:18:18.501201] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.072 [2024-09-29 16:18:18.501236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.072 [2024-09-29 16:18:18.517068] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.072 [2024-09-29 16:18:18.517105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.072 [2024-09-29 16:18:18.532829] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.072 
[2024-09-29 16:18:18.532865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.072 [2024-09-29 16:18:18.547586] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.072 [2024-09-29 16:18:18.547638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.072 [2024-09-29 16:18:18.562660] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.072 [2024-09-29 16:18:18.562725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.072 [2024-09-29 16:18:18.577413] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.072 [2024-09-29 16:18:18.577449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.072 [2024-09-29 16:18:18.592329] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.072 [2024-09-29 16:18:18.592386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.072 [2024-09-29 16:18:18.607408] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.072 [2024-09-29 16:18:18.607459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.072 [2024-09-29 16:18:18.622037] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.072 [2024-09-29 16:18:18.622074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.330 [2024-09-29 16:18:18.636906] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.330 [2024-09-29 16:18:18.636943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.330 [2024-09-29 16:18:18.651660] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.330 [2024-09-29 16:18:18.651732] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.330 [2024-09-29 16:18:18.666821] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.330 [2024-09-29 16:18:18.666858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.330 [2024-09-29 16:18:18.681552] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.330 [2024-09-29 16:18:18.681590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.330 [2024-09-29 16:18:18.696857] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.330 [2024-09-29 16:18:18.696893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.330 [2024-09-29 16:18:18.712300] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.330 [2024-09-29 16:18:18.712338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.330 [2024-09-29 16:18:18.727228] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.330 [2024-09-29 16:18:18.727269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.330 [2024-09-29 16:18:18.742425] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.330 [2024-09-29 16:18:18.742476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.330 [2024-09-29 16:18:18.757507] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.330 [2024-09-29 16:18:18.757543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.330 [2024-09-29 16:18:18.772570] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.330 [2024-09-29 16:18:18.772621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:18.330 [2024-09-29 16:18:18.787421] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.330 [2024-09-29 16:18:18.787472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.330 [2024-09-29 16:18:18.802193] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.330 [2024-09-29 16:18:18.802244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.330 [2024-09-29 16:18:18.817079] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.330 [2024-09-29 16:18:18.817130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.330 [2024-09-29 16:18:18.832034] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.330 [2024-09-29 16:18:18.832086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.330 [2024-09-29 16:18:18.847208] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.330 [2024-09-29 16:18:18.847243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.330 [2024-09-29 16:18:18.861906] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.330 [2024-09-29 16:18:18.861952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.330 [2024-09-29 16:18:18.876309] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.330 [2024-09-29 16:18:18.876345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.330 [2024-09-29 16:18:18.890843] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.330 [2024-09-29 16:18:18.890880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.593 [2024-09-29 16:18:18.905012] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.593 [2024-09-29 16:18:18.905063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.593
[... the same two-line ERROR pair (subsystem.c:2128 "Requested NSID 1 already in use" followed by nvmf_rpc.c:1517 "Unable to add namespace") repeats continuously, roughly every 15 ms, from 16:18:18.905 through 16:18:21.416; periodic throughput samples were interleaved and are preserved below ...]
8573.00 IOPS, 66.98 MiB/s [2024-09-29 16:18:18.978024]
8593.33 IOPS, 67.14 MiB/s [2024-09-29 16:18:19.978565]
8598.25 IOPS, 67.17 MiB/s [2024-09-29 16:18:20.973348]
[2024-09-29 16:18:21.430525] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.926 [2024-09-29 16:18:21.430580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.926 [2024-09-29 16:18:21.446009] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.926 [2024-09-29 16:18:21.446044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.926 [2024-09-29 16:18:21.460394] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.926 [2024-09-29 16:18:21.460430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.926 [2024-09-29 16:18:21.474386] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.926 [2024-09-29 16:18:21.474436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.926 [2024-09-29 16:18:21.489028] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.926 [2024-09-29 16:18:21.489082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.185 [2024-09-29 16:18:21.503862] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.185 [2024-09-29 16:18:21.503898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.185 [2024-09-29 16:18:21.519285] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.185 [2024-09-29 16:18:21.519337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.185 [2024-09-29 16:18:21.534529] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.185 [2024-09-29 16:18:21.534579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.185 [2024-09-29 16:18:21.549894] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.185 [2024-09-29 16:18:21.549930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.185 [2024-09-29 16:18:21.565027] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.185 [2024-09-29 16:18:21.565077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.185 [2024-09-29 16:18:21.580059] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.185 [2024-09-29 16:18:21.580096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.185 [2024-09-29 16:18:21.594517] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.185 [2024-09-29 16:18:21.594567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.185 [2024-09-29 16:18:21.609631] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.185 [2024-09-29 16:18:21.609698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.185 [2024-09-29 16:18:21.624223] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.185 [2024-09-29 16:18:21.624259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.185 [2024-09-29 16:18:21.638722] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.185 [2024-09-29 16:18:21.638758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.185 [2024-09-29 16:18:21.652919] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.185 [2024-09-29 16:18:21.652971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.185 [2024-09-29 16:18:21.667901] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:21.185 [2024-09-29 16:18:21.667948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.185 [2024-09-29 16:18:21.682292] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.185 [2024-09-29 16:18:21.682328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.185 [2024-09-29 16:18:21.696210] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.185 [2024-09-29 16:18:21.696264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.185 [2024-09-29 16:18:21.710412] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.185 [2024-09-29 16:18:21.710468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.185 [2024-09-29 16:18:21.725254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.185 [2024-09-29 16:18:21.725309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.185 [2024-09-29 16:18:21.739826] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.185 [2024-09-29 16:18:21.739866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.443 [2024-09-29 16:18:21.754557] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.443 [2024-09-29 16:18:21.754593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.443 [2024-09-29 16:18:21.768959] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.443 [2024-09-29 16:18:21.768995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.443 [2024-09-29 16:18:21.783508] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.443 
[2024-09-29 16:18:21.783544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.443 [2024-09-29 16:18:21.797951] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.443 [2024-09-29 16:18:21.798003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.443 [2024-09-29 16:18:21.812643] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.443 [2024-09-29 16:18:21.812707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.443 [2024-09-29 16:18:21.827461] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.443 [2024-09-29 16:18:21.827495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.443 [2024-09-29 16:18:21.842227] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.443 [2024-09-29 16:18:21.842267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.443 [2024-09-29 16:18:21.857964] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.443 [2024-09-29 16:18:21.858015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.443 [2024-09-29 16:18:21.873715] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.443 [2024-09-29 16:18:21.873751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.443 [2024-09-29 16:18:21.888363] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.443 [2024-09-29 16:18:21.888398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.443 [2024-09-29 16:18:21.903267] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.443 [2024-09-29 16:18:21.903307] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:21.443 [2024-09-29 16:18:21.917780] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:21.443 [2024-09-29 16:18:21.917830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:21.443 [2024-09-29 16:18:21.932326] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:21.443 [2024-09-29 16:18:21.932363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:21.443 [2024-09-29 16:18:21.947003] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:21.443 [2024-09-29 16:18:21.947039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:21.443 [2024-09-29 16:18:21.961662] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:21.443 [2024-09-29 16:18:21.961742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:21.443 8602.40 IOPS, 67.21 MiB/s [2024-09-29 16:18:21.976132] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:21.443 [2024-09-29 16:18:21.976165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:21.443 [2024-09-29 16:18:21.986837] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:21.443 [2024-09-29 16:18:21.986873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:21.443
00:10:21.443 Latency(us)
00:10:21.443 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:21.443 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:21.443 Nvme1n1 : 5.01 8602.92 67.21 0.00 0.00 14852.82 4927.34 24272.59
00:10:21.443 ===================================================================================================================
00:10:21.443 Total : 8602.92 67.21 0.00 0.00 14852.82 4927.34 24272.59
00:10:21.443 [2024-09-29 16:18:21.991355] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:21.443 [2024-09-29 16:18:21.991391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:21.443 [2024-09-29 16:18:21.999356] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:21.443 [2024-09-29 16:18:21.999394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:21.702 [2024-09-29 16:18:22.007352] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:21.702 [2024-09-29 16:18:22.007389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:21.702 [2024-09-29 16:18:22.015392] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:21.702 [2024-09-29 16:18:22.015428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:21.702 [2024-09-29 16:18:22.023413] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:21.702 [2024-09-29 16:18:22.023449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:21.702 [2024-09-29 16:18:22.031412] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:21.702 [2024-09-29 16:18:22.031447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:21.702 [2024-09-29 16:18:22.039574] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:21.702 [2024-09-29 16:18:22.039644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:21.702 [2024-09-29 16:18:22.047582] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:21.702 [2024-09-29 16:18:22.047645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add
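(The bdevperf summary above is internally consistent; a quick arithmetic check in plain Python, not part of the test run: with the 8192-byte I/O size from the job line, the reported MiB/s column follows directly from the IOPS column.)

```python
# Values copied from the bdevperf summary line in the log above.
iops = 8602.92          # reported IOPS
io_size_bytes = 8192    # "IO size: 8192" from the job description

# Throughput in MiB/s = IOPS * bytes per I/O / bytes per MiB.
mib_per_s = iops * io_size_bytes / (1024 * 1024)
print(round(mib_per_s, 2))  # 67.21, matching the "67.21 MiB/s" column
```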
namespace 00:10:21.702 [2024-09-29 16:18:22.055517] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.702 [2024-09-29 16:18:22.055577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.702 [2024-09-29 16:18:22.063565] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.702 [2024-09-29 16:18:22.063608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.702 [2024-09-29 16:18:22.071523] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.702 [2024-09-29 16:18:22.071567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.702 [2024-09-29 16:18:22.079588] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.702 [2024-09-29 16:18:22.079623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.702 [2024-09-29 16:18:22.087604] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.702 [2024-09-29 16:18:22.087639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.702 [2024-09-29 16:18:22.095595] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.702 [2024-09-29 16:18:22.095631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.702 [2024-09-29 16:18:22.103637] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.702 [2024-09-29 16:18:22.103683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.702 [2024-09-29 16:18:22.111664] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.702 [2024-09-29 16:18:22.111734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.702 [2024-09-29 16:18:22.119657] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.702 [2024-09-29 16:18:22.119701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.702 [2024-09-29 16:18:22.127735] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.702 [2024-09-29 16:18:22.127766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.702 [2024-09-29 16:18:22.135790] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.702 [2024-09-29 16:18:22.135847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.702 [2024-09-29 16:18:22.151918] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.702 [2024-09-29 16:18:22.152041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.702 [2024-09-29 16:18:22.159816] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.702 [2024-09-29 16:18:22.159845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.702 [2024-09-29 16:18:22.167821] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.702 [2024-09-29 16:18:22.167850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.702 [2024-09-29 16:18:22.175844] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.702 [2024-09-29 16:18:22.175882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.702 [2024-09-29 16:18:22.183879] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.702 [2024-09-29 16:18:22.183908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.702 [2024-09-29 16:18:22.191866] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:21.702 [2024-09-29 16:18:22.191894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.702 [2024-09-29 16:18:22.199904] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.702 [2024-09-29 16:18:22.199932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.702 [2024-09-29 16:18:22.207903] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.702 [2024-09-29 16:18:22.207931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.702 [2024-09-29 16:18:22.215967] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.702 [2024-09-29 16:18:22.215995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.702 [2024-09-29 16:18:22.223981] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.702 [2024-09-29 16:18:22.224037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.702 [2024-09-29 16:18:22.231992] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.702 [2024-09-29 16:18:22.232040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.702 [2024-09-29 16:18:22.240063] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.702 [2024-09-29 16:18:22.240098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.702 [2024-09-29 16:18:22.248072] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.702 [2024-09-29 16:18:22.248106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.702 [2024-09-29 16:18:22.256080] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.702 
[2024-09-29 16:18:22.256113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.702 [2024-09-29 16:18:22.264145] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.702 [2024-09-29 16:18:22.264178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.961 [2024-09-29 16:18:22.272117] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.961 [2024-09-29 16:18:22.272150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.961 [2024-09-29 16:18:22.280158] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.961 [2024-09-29 16:18:22.280192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.961 [2024-09-29 16:18:22.288183] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.961 [2024-09-29 16:18:22.288217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.961 [2024-09-29 16:18:22.296284] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.961 [2024-09-29 16:18:22.296319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.961 [2024-09-29 16:18:22.304358] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.961 [2024-09-29 16:18:22.304422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.961 [2024-09-29 16:18:22.312339] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.961 [2024-09-29 16:18:22.312395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.961 [2024-09-29 16:18:22.320258] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.961 [2024-09-29 16:18:22.320291] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.961 [2024-09-29 16:18:22.328298] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.961 [2024-09-29 16:18:22.328333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.961 [2024-09-29 16:18:22.336297] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.961 [2024-09-29 16:18:22.336331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.961 [2024-09-29 16:18:22.344349] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.961 [2024-09-29 16:18:22.344383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.961 [2024-09-29 16:18:22.352368] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.961 [2024-09-29 16:18:22.352403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.961 [2024-09-29 16:18:22.360372] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.961 [2024-09-29 16:18:22.360408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.961 [2024-09-29 16:18:22.368543] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.961 [2024-09-29 16:18:22.368606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.961 [2024-09-29 16:18:22.376569] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.961 [2024-09-29 16:18:22.376638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.961 [2024-09-29 16:18:22.384522] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.961 [2024-09-29 16:18:22.384585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:21.961 [2024-09-29 16:18:22.392509] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.961 [2024-09-29 16:18:22.392544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.961 [2024-09-29 16:18:22.400478] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.961 [2024-09-29 16:18:22.400511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.961 [2024-09-29 16:18:22.408536] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.961 [2024-09-29 16:18:22.408570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.961 [2024-09-29 16:18:22.416554] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.961 [2024-09-29 16:18:22.416588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.961 [2024-09-29 16:18:22.424547] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.961 [2024-09-29 16:18:22.424580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.961 [2024-09-29 16:18:22.432593] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.961 [2024-09-29 16:18:22.432627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.961 [2024-09-29 16:18:22.440620] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.961 [2024-09-29 16:18:22.440654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.961 [2024-09-29 16:18:22.448619] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.961 [2024-09-29 16:18:22.448652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.961 [2024-09-29 16:18:22.456662] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.961 [2024-09-29 16:18:22.456719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.961 [2024-09-29 16:18:22.464658] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.961 [2024-09-29 16:18:22.464700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.961 [2024-09-29 16:18:22.472749] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.961 [2024-09-29 16:18:22.472778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.961 [2024-09-29 16:18:22.480744] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.961 [2024-09-29 16:18:22.480773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.961 [2024-09-29 16:18:22.488746] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.961 [2024-09-29 16:18:22.488783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.961 [2024-09-29 16:18:22.496780] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.961 [2024-09-29 16:18:22.496809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.961 [2024-09-29 16:18:22.504804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.961 [2024-09-29 16:18:22.504832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.961 [2024-09-29 16:18:22.512801] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.961 [2024-09-29 16:18:22.512829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.961 [2024-09-29 16:18:22.520837] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:21.961 [2024-09-29 16:18:22.520865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.220 [2024-09-29 16:18:22.528832] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.220 [2024-09-29 16:18:22.528860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.220 [2024-09-29 16:18:22.536881] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.220 [2024-09-29 16:18:22.536910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.220 [2024-09-29 16:18:22.545013] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.220 [2024-09-29 16:18:22.545076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.220 [2024-09-29 16:18:22.552968] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.220 [2024-09-29 16:18:22.553029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.220 [2024-09-29 16:18:22.560971] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.220 [2024-09-29 16:18:22.560998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.220 [2024-09-29 16:18:22.568997] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.220 [2024-09-29 16:18:22.569043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.220 [2024-09-29 16:18:22.576990] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.220 [2024-09-29 16:18:22.577024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.220 [2024-09-29 16:18:22.585042] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.220 
[2024-09-29 16:18:22.585076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.220 [2024-09-29 16:18:22.593047] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.220 [2024-09-29 16:18:22.593081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.220 [2024-09-29 16:18:22.601063] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.220 [2024-09-29 16:18:22.601091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.220 [2024-09-29 16:18:22.609125] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.220 [2024-09-29 16:18:22.609160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.220 [2024-09-29 16:18:22.617130] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.220 [2024-09-29 16:18:22.617164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.220 [2024-09-29 16:18:22.625183] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.220 [2024-09-29 16:18:22.625217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.220 [2024-09-29 16:18:22.633198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.220 [2024-09-29 16:18:22.633232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.220 [2024-09-29 16:18:22.641182] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.220 [2024-09-29 16:18:22.641215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.220 [2024-09-29 16:18:22.649230] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.220 [2024-09-29 16:18:22.649263] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.220 [2024-09-29 16:18:22.657233] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.220 [2024-09-29 16:18:22.657266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.220 [2024-09-29 16:18:22.665388] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.220 [2024-09-29 16:18:22.665446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.220 [2024-09-29 16:18:22.673360] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.220 [2024-09-29 16:18:22.673411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.220 [2024-09-29 16:18:22.681302] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.220 [2024-09-29 16:18:22.681335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.220 [2024-09-29 16:18:22.689342] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.220 [2024-09-29 16:18:22.689376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.220 [2024-09-29 16:18:22.697365] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.220 [2024-09-29 16:18:22.697398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.220 [2024-09-29 16:18:22.705377] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.220 [2024-09-29 16:18:22.705411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.220 [2024-09-29 16:18:22.713443] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.220 [2024-09-29 16:18:22.713478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:22.220 [2024-09-29 16:18:22.721413] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.220 [2024-09-29 16:18:22.721445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.220 [2024-09-29 16:18:22.729471] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.220 [2024-09-29 16:18:22.729507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.220 [2024-09-29 16:18:22.737485] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.220 [2024-09-29 16:18:22.737518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.220 [2024-09-29 16:18:22.745485] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.220 [2024-09-29 16:18:22.745519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.220 [2024-09-29 16:18:22.753533] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.220 [2024-09-29 16:18:22.753568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.220 [2024-09-29 16:18:22.761684] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.220 [2024-09-29 16:18:22.761748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.220 [2024-09-29 16:18:22.769562] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.220 [2024-09-29 16:18:22.769596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.220 [2024-09-29 16:18:22.777612] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.220 [2024-09-29 16:18:22.777646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.479 [2024-09-29 16:18:22.785606] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.479 [2024-09-29 16:18:22.785638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.479 [2024-09-29 16:18:22.793650] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.479 [2024-09-29 16:18:22.793693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.479 [2024-09-29 16:18:22.801684] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.479 [2024-09-29 16:18:22.801732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.479 [2024-09-29 16:18:22.809668] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.479 [2024-09-29 16:18:22.809731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.479 [2024-09-29 16:18:22.817747] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.479 [2024-09-29 16:18:22.817776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.479 [2024-09-29 16:18:22.825756] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.479 [2024-09-29 16:18:22.825786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.479 [2024-09-29 16:18:22.833762] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.479 [2024-09-29 16:18:22.833792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.479 [2024-09-29 16:18:22.841794] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.479 [2024-09-29 16:18:22.841822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.479 [2024-09-29 16:18:22.849800] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:22.479 [2024-09-29 16:18:22.849829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.479 [2024-09-29 16:18:22.857854] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.479 [2024-09-29 16:18:22.857884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.480 [2024-09-29 16:18:22.865893] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.480 [2024-09-29 16:18:22.865946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.480 [2024-09-29 16:18:22.873940] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.480 [2024-09-29 16:18:22.874016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.480 [2024-09-29 16:18:22.881917] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.480 [2024-09-29 16:18:22.881962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.480 [2024-09-29 16:18:22.889917] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.480 [2024-09-29 16:18:22.889965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.480 [2024-09-29 16:18:22.897944] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.480 [2024-09-29 16:18:22.897986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.480 [2024-09-29 16:18:22.905976] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.480 [2024-09-29 16:18:22.906004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.480 [2024-09-29 16:18:22.913969] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.480 
[2024-09-29 16:18:22.914001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.480 [2024-09-29 16:18:22.922112] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.480 [2024-09-29 16:18:22.922176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.480 [2024-09-29 16:18:22.930068] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.480 [2024-09-29 16:18:22.930102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.480 [2024-09-29 16:18:22.938057] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.480 [2024-09-29 16:18:22.938090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.480 [2024-09-29 16:18:22.946112] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.480 [2024-09-29 16:18:22.946147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.480 [2024-09-29 16:18:22.954148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.480 [2024-09-29 16:18:22.954184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.480 [2024-09-29 16:18:22.962124] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.480 [2024-09-29 16:18:22.962157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.480 [2024-09-29 16:18:22.970178] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.480 [2024-09-29 16:18:22.970212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.480 [2024-09-29 16:18:22.978174] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.480 [2024-09-29 16:18:22.978206] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.480 [2024-09-29 16:18:22.986239] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.480 [2024-09-29 16:18:22.986273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.480 [2024-09-29 16:18:22.994247] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.480 [2024-09-29 16:18:22.994281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.480 [2024-09-29 16:18:23.002252] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.480 [2024-09-29 16:18:23.002295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.480 [2024-09-29 16:18:23.010302] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.480 [2024-09-29 16:18:23.010336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.480 [2024-09-29 16:18:23.018326] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.480 [2024-09-29 16:18:23.018359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.480 [2024-09-29 16:18:23.026319] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.480 [2024-09-29 16:18:23.026352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.480 [2024-09-29 16:18:23.034364] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.480 [2024-09-29 16:18:23.034398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.738 [2024-09-29 16:18:23.042365] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.738 [2024-09-29 16:18:23.042399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:22.738 [2024-09-29 16:18:23.050452] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.738 [2024-09-29 16:18:23.050492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.738 [2024-09-29 16:18:23.058433] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.738 [2024-09-29 16:18:23.058467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.738 [2024-09-29 16:18:23.066432] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.738 [2024-09-29 16:18:23.066465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.738 [2024-09-29 16:18:23.074513] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.738 [2024-09-29 16:18:23.074548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.738 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3065487) - No such process 00:10:22.738 16:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3065487 00:10:22.738 16:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.738 16:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.738 16:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:22.738 16:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.738 16:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:22.738 16:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.738 
16:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:22.738 delay0 00:10:22.738 16:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.738 16:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:22.738 16:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.738 16:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:22.738 16:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.738 16:18:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:22.738 [2024-09-29 16:18:23.222646] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:30.844 Initializing NVMe Controllers 00:10:30.844 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:30.844 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:30.844 Initialization complete. Launching workers. 
00:10:30.844 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 235, failed: 15523 00:10:30.844 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 15635, failed to submit 123 00:10:30.844 success 15558, unsuccessful 77, failed 0 00:10:30.844 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:30.844 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:30.844 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:30.844 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:30.844 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:30.844 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:30.844 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:30.844 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:30.844 rmmod nvme_tcp 00:10:30.844 rmmod nvme_fabrics 00:10:30.844 rmmod nvme_keyring 00:10:30.844 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:30.844 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:30.844 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:30.844 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 3063372 ']' 00:10:30.844 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 3063372 00:10:30.844 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 3063372 ']' 00:10:30.844 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 3063372 00:10:30.844 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@955 -- # uname 00:10:30.844 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:30.844 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3063372 00:10:30.844 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:30.844 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:30.844 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3063372' 00:10:30.844 killing process with pid 3063372 00:10:30.844 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 3063372 00:10:30.844 16:18:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 3063372 00:10:31.462 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:31.462 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:31.462 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:31.462 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:31.462 16:18:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-save 00:10:31.462 16:18:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:31.462 16:18:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-restore 00:10:31.462 16:18:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:31.462 16:18:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:31.462 16:18:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:10:31.462 16:18:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:31.462 16:18:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.997 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:33.997 00:10:33.997 real 0m34.082s 00:10:33.997 user 0m51.036s 00:10:33.997 sys 0m8.987s 00:10:33.997 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:33.997 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:33.997 ************************************ 00:10:33.997 END TEST nvmf_zcopy 00:10:33.997 ************************************ 00:10:33.997 16:18:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:33.997 16:18:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:33.997 16:18:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:33.997 16:18:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:33.997 ************************************ 00:10:33.997 START TEST nvmf_nmic 00:10:33.997 ************************************ 00:10:33.997 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:33.998 * Looking for test storage... 
00:10:33.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:33.998 16:18:34 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:33.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.998 --rc genhtml_branch_coverage=1 00:10:33.998 --rc genhtml_function_coverage=1 00:10:33.998 --rc genhtml_legend=1 00:10:33.998 --rc geninfo_all_blocks=1 00:10:33.998 --rc geninfo_unexecuted_blocks=1 
00:10:33.998 00:10:33.998 ' 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:33.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.998 --rc genhtml_branch_coverage=1 00:10:33.998 --rc genhtml_function_coverage=1 00:10:33.998 --rc genhtml_legend=1 00:10:33.998 --rc geninfo_all_blocks=1 00:10:33.998 --rc geninfo_unexecuted_blocks=1 00:10:33.998 00:10:33.998 ' 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:33.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.998 --rc genhtml_branch_coverage=1 00:10:33.998 --rc genhtml_function_coverage=1 00:10:33.998 --rc genhtml_legend=1 00:10:33.998 --rc geninfo_all_blocks=1 00:10:33.998 --rc geninfo_unexecuted_blocks=1 00:10:33.998 00:10:33.998 ' 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:33.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.998 --rc genhtml_branch_coverage=1 00:10:33.998 --rc genhtml_function_coverage=1 00:10:33.998 --rc genhtml_legend=1 00:10:33.998 --rc geninfo_all_blocks=1 00:10:33.998 --rc geninfo_unexecuted_blocks=1 00:10:33.998 00:10:33.998 ' 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:33.998 16:18:34 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.998 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:33.999 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.999 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:33.999 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:33.999 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:33.999 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:33.999 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:33.999 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:33.999 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:33.999 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:33.999 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:33.999 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:33.999 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:33.999 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:33.999 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:33.999 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:33.999 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:33.999 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:33.999 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:33.999 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:33.999 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:33.999 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:33.999 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:33.999 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.999 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:33.999 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:33.999 
16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:33.999 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:35.899 16:18:36 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 
0x159b)' 00:10:35.899 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:35.899 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
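The discovery trace above resolves each NIC's Linux net-device name from sysfs (common.sh@407 and @423). A minimal runnable sketch of that lookup, using a throwaway directory in place of the real /sys tree so it runs without hardware (the BDF `0000:0a:00.0` and name `cvl_0_0` mirror the log):

```shell
# Sketch of the sysfs lookup in gather_supported_nvmf_pci_devs: the net
# device for a PCI NIC is simply the directory name under
# /sys/bus/pci/devices/<bdf>/net/. A temp tree stands in for real
# hardware so the sketch runs anywhere.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:0a:00.0/net/cvl_0_0"
pci_net_devs=("$sysfs/0000:0a:00.0/net/"*)   # glob, as common.sh@407 does
pci_net_devs=("${pci_net_devs[@]##*/}")      # strip the path, keep the name (common.sh@423)
echo "${pci_net_devs[0]}"                    # -> cvl_0_0
rm -rf "$sysfs"
```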
00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:35.899 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:35.899 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:35.899 16:18:36 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # is_hw=yes 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:35.899 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:35.900 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:35.900 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:35.900 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:35.900 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:35.900 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:35.900 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:35.900 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:35.900 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:35.900 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:35.900 
16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:35.900 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:35.900 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:35.900 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:35.900 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:35.900 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:35.900 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:35.900 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:35.900 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:35.900 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:35.900 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:35.900 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:10:35.900 00:10:35.900 --- 10.0.0.2 ping statistics --- 00:10:35.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.900 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:10:35.900 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:35.900 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:35.900 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:10:35.900 00:10:35.900 --- 10.0.0.1 ping statistics --- 00:10:35.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.900 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:10:35.900 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:35.900 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # return 0 00:10:35.900 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:35.900 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:35.900 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:35.900 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:35.900 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:35.900 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:35.900 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:35.900 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:35.900 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:35.900 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:35.900 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.158 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=3069285 00:10:36.158 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
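The launch line above comes from the array-prefix pattern in common.sh (@266 and @293): `NVMF_TARGET_NS_CMD` is prepended to `NVMF_APP`, so every later `"${NVMF_APP[@]}"` invocation is automatically wrapped in `ip netns exec`. A sketch of that pattern, with `echo` standing in for the real binary so it runs anywhere:

```shell
# Sketch of how common.sh namespace-wraps the target app: prepend the
# netns command array to the app array. "echo" is a stand-in here; the
# real log runs nvmf_tgt this way inside cvl_0_0_ns_spdk.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
NVMF_TARGET_NS_CMD=(echo ip netns exec "$NVMF_TARGET_NAMESPACE")
NVMF_APP=(nvmf_tgt -i 0 -m 0xF)
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
"${NVMF_APP[@]}"   # prints the namespace-wrapped command line
```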
00:10:36.158 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # waitforlisten 3069285 00:10:36.158 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 3069285 ']' 00:10:36.158 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.158 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:36.158 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.158 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:36.158 16:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.158 [2024-09-29 16:18:36.566282] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:10:36.158 [2024-09-29 16:18:36.566442] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:36.416 [2024-09-29 16:18:36.750710] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:36.674 [2024-09-29 16:18:37.026613] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:36.674 [2024-09-29 16:18:37.026710] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:36.674 [2024-09-29 16:18:37.026737] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:36.674 [2024-09-29 16:18:37.026761] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:36.674 [2024-09-29 16:18:37.026781] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:36.674 [2024-09-29 16:18:37.026884] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:36.674 [2024-09-29 16:18:37.026945] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:36.674 [2024-09-29 16:18:37.026993] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.674 [2024-09-29 16:18:37.027002] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:37.240 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:37.240 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:10:37.240 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:37.240 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:37.240 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:37.240 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:37.240 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:37.240 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.240 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:37.240 [2024-09-29 16:18:37.620169] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:37.240 
16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.240 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:37.240 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.240 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:37.240 Malloc0 00:10:37.240 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.240 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:37.240 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.240 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:37.240 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.240 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:37.240 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.240 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:37.240 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.240 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:37.240 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.240 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:37.240 [2024-09-29 16:18:37.727801] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:37.240 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.240 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:37.240 test case1: single bdev can't be used in multiple subsystems 00:10:37.240 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:37.240 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.240 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:37.240 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.240 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:37.241 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.241 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:37.241 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.241 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:37.241 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:37.241 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.241 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:37.241 [2024-09-29 16:18:37.751561] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:37.241 [2024-09-29 
16:18:37.751620] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:37.241 [2024-09-29 16:18:37.751648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.241 request: 00:10:37.241 { 00:10:37.241 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:37.241 "namespace": { 00:10:37.241 "bdev_name": "Malloc0", 00:10:37.241 "no_auto_visible": false 00:10:37.241 }, 00:10:37.241 "method": "nvmf_subsystem_add_ns", 00:10:37.241 "req_id": 1 00:10:37.241 } 00:10:37.241 Got JSON-RPC error response 00:10:37.241 response: 00:10:37.241 { 00:10:37.241 "code": -32602, 00:10:37.241 "message": "Invalid parameters" 00:10:37.241 } 00:10:37.241 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:37.241 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:37.241 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:37.241 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:37.241 Adding namespace failed - expected result. 
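Test case1 above passes *because* the RPC fails: nmic.sh (@28-@36) captures the exit status and treats failure as the expected outcome. A sketch of that status-capture pattern, with `false` standing in for the failing `rpc_cmd nvmf_subsystem_add_ns` call:

```shell
# Sketch of the expected-failure check in nmic.sh: record whether the
# RPC failed, then require that it did. "false" is a stand-in for the
# rpc_cmd invocation that the log shows being rejected.
nmic_status=0
false || nmic_status=1
if [ "$nmic_status" -eq 0 ]; then
    echo "Adding namespace passed - failure expected."
    exit 1
fi
echo ' Adding namespace failed - expected result.'
```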
00:10:37.241 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:37.241 test case2: host connect to nvmf target in multiple paths 00:10:37.241 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:37.241 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.241 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:37.241 [2024-09-29 16:18:37.759801] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:37.241 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.241 16:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:38.175 16:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:38.741 16:18:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:38.741 16:18:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:38.741 16:18:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:38.741 16:18:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:38.741 16:18:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 
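The two connects above (nmic.sh@41-@42) attach the same subsystem over two listeners, ports 4420 and 4421, which is what makes case2 a multipath test. A sketch of those invocations with the values from the log; `echo` stands in for actually running nvme-cli, so the sketch runs without a live target:

```shell
# Sketch of the multipath connect: same host NQN, same subsystem NQN,
# same target address, two different ports. "echo" is a stand-in for
# executing nvme-cli against real hardware.
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
connect_path() {
    echo nvme connect --hostnqn="$HOSTNQN" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s "$1"
}
connect_path 4420   # first path  (nmic.sh@41)
connect_path 4421   # second path (nmic.sh@42)
```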
00:10:40.641 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:40.641 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:40.641 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:40.641 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:40.641 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:40.641 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:40.641 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:40.641 [global] 00:10:40.641 thread=1 00:10:40.641 invalidate=1 00:10:40.641 rw=write 00:10:40.641 time_based=1 00:10:40.641 runtime=1 00:10:40.641 ioengine=libaio 00:10:40.641 direct=1 00:10:40.641 bs=4096 00:10:40.641 iodepth=1 00:10:40.641 norandommap=0 00:10:40.641 numjobs=1 00:10:40.641 00:10:40.641 verify_dump=1 00:10:40.641 verify_backlog=512 00:10:40.641 verify_state_save=0 00:10:40.641 do_verify=1 00:10:40.641 verify=crc32c-intel 00:10:40.641 [job0] 00:10:40.641 filename=/dev/nvme0n1 00:10:40.641 Could not set queue depth (nvme0n1) 00:10:40.900 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:40.900 fio-3.35 00:10:40.900 Starting 1 thread 00:10:42.274 00:10:42.274 job0: (groupid=0, jobs=1): err= 0: pid=3069931: Sun Sep 29 16:18:42 2024 00:10:42.274 read: IOPS=27, BW=108KiB/s (111kB/s)(112KiB/1033msec) 00:10:42.274 slat (nsec): min=7421, max=48626, avg=24345.11, stdev=11246.12 00:10:42.274 clat (usec): min=308, max=42321, avg=32442.97, stdev=17084.37 00:10:42.274 lat (usec): min=324, max=42332, 
avg=32467.31, stdev=17088.76 00:10:42.274 clat percentiles (usec): 00:10:42.274 | 1.00th=[ 310], 5.00th=[ 310], 10.00th=[ 314], 20.00th=[ 392], 00:10:42.274 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:42.274 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:10:42.274 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:42.274 | 99.99th=[42206] 00:10:42.274 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:10:42.274 slat (nsec): min=6200, max=40790, avg=12709.80, stdev=6109.50 00:10:42.274 clat (usec): min=172, max=475, avg=226.03, stdev=26.62 00:10:42.274 lat (usec): min=178, max=505, avg=238.74, stdev=27.80 00:10:42.274 clat percentiles (usec): 00:10:42.274 | 1.00th=[ 180], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 204], 00:10:42.274 | 30.00th=[ 212], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 233], 00:10:42.274 | 70.00th=[ 239], 80.00th=[ 245], 90.00th=[ 255], 95.00th=[ 269], 00:10:42.274 | 99.00th=[ 289], 99.50th=[ 302], 99.90th=[ 478], 99.95th=[ 478], 00:10:42.274 | 99.99th=[ 478] 00:10:42.274 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:42.274 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:42.274 lat (usec) : 250=80.56%, 500=15.37% 00:10:42.274 lat (msec) : 50=4.07% 00:10:42.274 cpu : usr=0.29%, sys=0.87%, ctx=540, majf=0, minf=1 00:10:42.274 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:42.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.274 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.274 issued rwts: total=28,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:42.274 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:42.274 00:10:42.274 Run status group 0 (all jobs): 00:10:42.274 READ: bw=108KiB/s (111kB/s), 108KiB/s-108KiB/s (111kB/s-111kB/s), io=112KiB (115kB), 
run=1033-1033msec 00:10:42.274 WRITE: bw=1983KiB/s (2030kB/s), 1983KiB/s-1983KiB/s (2030kB/s-2030kB/s), io=2048KiB (2097kB), run=1033-1033msec 00:10:42.274 00:10:42.274 Disk stats (read/write): 00:10:42.274 nvme0n1: ios=73/512, merge=0/0, ticks=778/109, in_queue=887, util=91.88% 00:10:42.274 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:42.274 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:42.274 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:42.274 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:42.274 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:42.274 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:42.274 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:42.274 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:42.274 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:42.274 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:42.274 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:42.274 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:42.274 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:42.274 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:42.274 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:42.274 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:10:42.274 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:42.274 rmmod nvme_tcp 00:10:42.274 rmmod nvme_fabrics 00:10:42.274 rmmod nvme_keyring 00:10:42.532 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:42.532 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:42.532 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:42.532 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 3069285 ']' 00:10:42.532 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 3069285 00:10:42.532 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 3069285 ']' 00:10:42.532 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 3069285 00:10:42.532 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:10:42.532 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:42.532 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3069285 00:10:42.532 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:42.532 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:42.532 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3069285' 00:10:42.532 killing process with pid 3069285 00:10:42.532 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 3069285 00:10:42.532 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 3069285 00:10:43.906 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:43.906 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:43.906 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:43.906 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:43.906 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-save 00:10:43.906 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:43.906 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-restore 00:10:43.906 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:43.906 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:43.906 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.906 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:43.906 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.437 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:46.437 00:10:46.437 real 0m12.304s 00:10:46.437 user 0m28.795s 00:10:46.437 sys 0m2.753s 00:10:46.437 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:46.437 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.437 ************************************ 00:10:46.437 END TEST nvmf_nmic 00:10:46.437 ************************************ 00:10:46.437 16:18:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh 
--transport=tcp 00:10:46.437 16:18:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:46.437 16:18:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:46.437 16:18:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:46.437 ************************************ 00:10:46.437 START TEST nvmf_fio_target 00:10:46.437 ************************************ 00:10:46.437 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:46.437 * Looking for test storage... 00:10:46.437 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:46.437 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:46.437 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:10:46.437 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:46.437 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:46.437 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:46.437 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:46.437 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:46.437 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:46.437 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:46.437 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:46.437 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
scripts/common.sh@337 -- # read -ra ver2 00:10:46.437 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:46.437 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:46.437 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:46.437 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:46.437 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:46.437 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:46.437 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:46.437 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:46.437 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:46.437 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:46.437 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:46.437 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:46.437 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:46.437 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:46.437 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:46.437 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:46.437 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:46.437 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:46.437 16:18:46 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:46.437 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:46.437 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:46.437 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:46.437 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:46.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.437 --rc genhtml_branch_coverage=1 00:10:46.437 --rc genhtml_function_coverage=1 00:10:46.438 --rc genhtml_legend=1 00:10:46.438 --rc geninfo_all_blocks=1 00:10:46.438 --rc geninfo_unexecuted_blocks=1 00:10:46.438 00:10:46.438 ' 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:46.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.438 --rc genhtml_branch_coverage=1 00:10:46.438 --rc genhtml_function_coverage=1 00:10:46.438 --rc genhtml_legend=1 00:10:46.438 --rc geninfo_all_blocks=1 00:10:46.438 --rc geninfo_unexecuted_blocks=1 00:10:46.438 00:10:46.438 ' 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:46.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.438 --rc genhtml_branch_coverage=1 00:10:46.438 --rc genhtml_function_coverage=1 00:10:46.438 --rc genhtml_legend=1 00:10:46.438 --rc geninfo_all_blocks=1 00:10:46.438 --rc geninfo_unexecuted_blocks=1 00:10:46.438 00:10:46.438 ' 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:46.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.438 --rc 
genhtml_branch_coverage=1 00:10:46.438 --rc genhtml_function_coverage=1 00:10:46.438 --rc genhtml_legend=1 00:10:46.438 --rc geninfo_all_blocks=1 00:10:46.438 --rc geninfo_unexecuted_blocks=1 00:10:46.438 00:10:46.438 ' 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:46.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:46.438 16:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:48.388 16:18:48 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:48.388 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:48.388 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:48.388 16:18:48 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:48.388 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:48.388 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:48.388 16:18:48 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # is_hw=yes 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:48.388 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:48.389 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:48.389 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:48.389 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:48.389 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:48.389 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:48.389 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:48.389 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:48.389 16:18:48 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:48.389 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:48.389 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:48.389 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:48.389 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:48.389 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:48.389 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:48.389 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:48.389 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:48.389 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:48.389 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:48.389 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:48.389 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.342 ms 00:10:48.389 00:10:48.389 --- 10.0.0.2 ping statistics --- 00:10:48.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.389 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:10:48.389 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:48.389 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:48.389 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:10:48.389 00:10:48.389 --- 10.0.0.1 ping statistics --- 00:10:48.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.389 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:10:48.389 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:48.389 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # return 0 00:10:48.389 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:48.389 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:48.389 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:48.389 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:48.389 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:48.389 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:48.389 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:48.389 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:48.389 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 
00:10:48.389 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:48.389 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.389 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=3072274 00:10:48.389 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:48.389 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 3072274 00:10:48.389 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 3072274 ']' 00:10:48.389 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.389 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:48.389 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.389 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:48.389 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.670 [2024-09-29 16:18:49.016587] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:10:48.670 [2024-09-29 16:18:49.016752] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:48.670 [2024-09-29 16:18:49.159137] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:48.927 [2024-09-29 16:18:49.434161] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:48.927 [2024-09-29 16:18:49.434245] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:48.927 [2024-09-29 16:18:49.434272] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:48.927 [2024-09-29 16:18:49.434296] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:48.927 [2024-09-29 16:18:49.434316] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:48.927 [2024-09-29 16:18:49.434421] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:48.927 [2024-09-29 16:18:49.434483] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:48.927 [2024-09-29 16:18:49.434533] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.927 [2024-09-29 16:18:49.434540] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:49.492 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:49.492 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:49.492 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:49.492 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:49.492 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.492 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:49.492 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:50.057 [2024-09-29 16:18:50.351921] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:50.057 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:50.314 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:50.314 16:18:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:50.572 16:18:51 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:50.572 16:18:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:51.138 16:18:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:51.138 16:18:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:51.396 16:18:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:51.396 16:18:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:51.655 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:51.913 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:51.913 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:52.172 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:52.172 16:18:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:52.737 16:18:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:52.737 16:18:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:10:52.995 16:18:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:53.252 16:18:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:53.252 16:18:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:53.510 16:18:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:53.510 16:18:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:53.767 16:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:54.024 [2024-09-29 16:18:54.373956] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:54.024 16:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:54.282 16:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:54.539 16:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:10:55.105 16:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:55.105 16:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:55.105 16:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:55.105 16:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:55.105 16:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:55.105 16:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:57.633 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:57.633 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:57.633 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:57.633 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:57.633 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:57.633 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:57.633 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:57.633 [global] 00:10:57.633 thread=1 00:10:57.633 invalidate=1 00:10:57.633 rw=write 00:10:57.633 time_based=1 00:10:57.633 runtime=1 00:10:57.633 ioengine=libaio 00:10:57.633 direct=1 00:10:57.633 bs=4096 00:10:57.633 iodepth=1 00:10:57.633 norandommap=0 00:10:57.633 numjobs=1 00:10:57.633 00:10:57.633 
verify_dump=1 00:10:57.633 verify_backlog=512 00:10:57.633 verify_state_save=0 00:10:57.633 do_verify=1 00:10:57.633 verify=crc32c-intel 00:10:57.633 [job0] 00:10:57.633 filename=/dev/nvme0n1 00:10:57.633 [job1] 00:10:57.633 filename=/dev/nvme0n2 00:10:57.633 [job2] 00:10:57.633 filename=/dev/nvme0n3 00:10:57.633 [job3] 00:10:57.633 filename=/dev/nvme0n4 00:10:57.633 Could not set queue depth (nvme0n1) 00:10:57.633 Could not set queue depth (nvme0n2) 00:10:57.633 Could not set queue depth (nvme0n3) 00:10:57.633 Could not set queue depth (nvme0n4) 00:10:57.633 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:57.633 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:57.633 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:57.633 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:57.633 fio-3.35 00:10:57.633 Starting 4 threads 00:10:58.568 00:10:58.568 job0: (groupid=0, jobs=1): err= 0: pid=3073491: Sun Sep 29 16:18:59 2024 00:10:58.568 read: IOPS=505, BW=2022KiB/s (2070kB/s)(2060KiB/1019msec) 00:10:58.568 slat (nsec): min=4350, max=35721, avg=10975.39, stdev=6174.49 00:10:58.568 clat (usec): min=248, max=41950, avg=1312.59, stdev=6149.17 00:10:58.568 lat (usec): min=262, max=41984, avg=1323.56, stdev=6151.01 00:10:58.568 clat percentiles (usec): 00:10:58.568 | 1.00th=[ 265], 5.00th=[ 277], 10.00th=[ 289], 20.00th=[ 306], 00:10:58.568 | 30.00th=[ 314], 40.00th=[ 326], 50.00th=[ 338], 60.00th=[ 367], 00:10:58.568 | 70.00th=[ 379], 80.00th=[ 383], 90.00th=[ 400], 95.00th=[ 437], 00:10:58.568 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:10:58.568 | 99.99th=[42206] 00:10:58.568 write: IOPS=1004, BW=4020KiB/s (4116kB/s)(4096KiB/1019msec); 0 zone resets 00:10:58.568 slat (nsec): min=7617, max=62272, 
avg=25711.09, stdev=10790.19 00:10:58.568 clat (usec): min=182, max=571, avg=296.32, stdev=90.19 00:10:58.568 lat (usec): min=193, max=609, avg=322.03, stdev=92.75 00:10:58.568 clat percentiles (usec): 00:10:58.568 | 1.00th=[ 190], 5.00th=[ 198], 10.00th=[ 206], 20.00th=[ 221], 00:10:58.568 | 30.00th=[ 227], 40.00th=[ 235], 50.00th=[ 245], 60.00th=[ 302], 00:10:58.568 | 70.00th=[ 371], 80.00th=[ 396], 90.00th=[ 420], 95.00th=[ 449], 00:10:58.568 | 99.00th=[ 515], 99.50th=[ 529], 99.90th=[ 570], 99.95th=[ 570], 00:10:58.568 | 99.99th=[ 570] 00:10:58.568 bw ( KiB/s): min= 4096, max= 4096, per=33.97%, avg=4096.00, stdev= 0.00, samples=2 00:10:58.568 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:10:58.568 lat (usec) : 250=34.44%, 500=63.42%, 750=1.10%, 1000=0.13% 00:10:58.568 lat (msec) : 2=0.06%, 20=0.06%, 50=0.78% 00:10:58.568 cpu : usr=1.57%, sys=3.05%, ctx=1539, majf=0, minf=2 00:10:58.569 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:58.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.569 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.569 issued rwts: total=515,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.569 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:58.569 job1: (groupid=0, jobs=1): err= 0: pid=3073492: Sun Sep 29 16:18:59 2024 00:10:58.569 read: IOPS=18, BW=75.3KiB/s (77.1kB/s)(76.0KiB/1009msec) 00:10:58.569 slat (nsec): min=12092, max=34277, avg=20143.58, stdev=8631.47 00:10:58.569 clat (usec): min=40648, max=41189, avg=40955.73, stdev=115.88 00:10:58.569 lat (usec): min=40667, max=41205, avg=40975.87, stdev=114.74 00:10:58.569 clat percentiles (usec): 00:10:58.569 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:10:58.569 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:58.569 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:58.569 | 
99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:58.569 | 99.99th=[41157] 00:10:58.569 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:10:58.569 slat (nsec): min=9829, max=79968, avg=31741.56, stdev=11101.86 00:10:58.569 clat (usec): min=211, max=659, avg=409.10, stdev=100.20 00:10:58.569 lat (usec): min=232, max=697, avg=440.84, stdev=101.85 00:10:58.569 clat percentiles (usec): 00:10:58.569 | 1.00th=[ 217], 5.00th=[ 229], 10.00th=[ 260], 20.00th=[ 306], 00:10:58.569 | 30.00th=[ 367], 40.00th=[ 396], 50.00th=[ 420], 60.00th=[ 445], 00:10:58.569 | 70.00th=[ 469], 80.00th=[ 502], 90.00th=[ 537], 95.00th=[ 562], 00:10:58.569 | 99.00th=[ 594], 99.50th=[ 619], 99.90th=[ 660], 99.95th=[ 660], 00:10:58.569 | 99.99th=[ 660] 00:10:58.569 bw ( KiB/s): min= 4096, max= 4096, per=33.97%, avg=4096.00, stdev= 0.00, samples=1 00:10:58.569 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:58.569 lat (usec) : 250=8.29%, 500=68.17%, 750=19.96% 00:10:58.569 lat (msec) : 50=3.58% 00:10:58.569 cpu : usr=0.99%, sys=2.18%, ctx=531, majf=0, minf=1 00:10:58.569 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:58.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.569 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.569 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.569 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:58.569 job2: (groupid=0, jobs=1): err= 0: pid=3073493: Sun Sep 29 16:18:59 2024 00:10:58.569 read: IOPS=615, BW=2462KiB/s (2521kB/s)(2504KiB/1017msec) 00:10:58.569 slat (nsec): min=4600, max=49193, avg=13410.08, stdev=7253.26 00:10:58.569 clat (usec): min=271, max=42906, avg=1131.90, stdev=5606.00 00:10:58.569 lat (usec): min=284, max=42928, avg=1145.31, stdev=5607.91 00:10:58.569 clat percentiles (usec): 00:10:58.569 | 1.00th=[ 289], 5.00th=[ 293], 10.00th=[ 302], 
20.00th=[ 314], 00:10:58.569 | 30.00th=[ 322], 40.00th=[ 330], 50.00th=[ 338], 60.00th=[ 347], 00:10:58.569 | 70.00th=[ 355], 80.00th=[ 375], 90.00th=[ 404], 95.00th=[ 437], 00:10:58.569 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42730], 99.95th=[42730], 00:10:58.569 | 99.99th=[42730] 00:10:58.569 write: IOPS=1006, BW=4028KiB/s (4124kB/s)(4096KiB/1017msec); 0 zone resets 00:10:58.569 slat (nsec): min=5438, max=73069, avg=16869.41, stdev=11980.14 00:10:58.569 clat (usec): min=185, max=459, avg=268.46, stdev=59.26 00:10:58.569 lat (usec): min=192, max=483, avg=285.33, stdev=68.31 00:10:58.569 clat percentiles (usec): 00:10:58.569 | 1.00th=[ 194], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 217], 00:10:58.569 | 30.00th=[ 223], 40.00th=[ 231], 50.00th=[ 249], 60.00th=[ 277], 00:10:58.569 | 70.00th=[ 297], 80.00th=[ 318], 90.00th=[ 371], 95.00th=[ 388], 00:10:58.569 | 99.00th=[ 416], 99.50th=[ 429], 99.90th=[ 457], 99.95th=[ 461], 00:10:58.569 | 99.99th=[ 461] 00:10:58.569 bw ( KiB/s): min= 4096, max= 4096, per=33.97%, avg=4096.00, stdev= 0.00, samples=2 00:10:58.569 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:10:58.569 lat (usec) : 250=31.03%, 500=68.06%, 750=0.06% 00:10:58.569 lat (msec) : 2=0.06%, 4=0.06%, 50=0.73% 00:10:58.569 cpu : usr=2.17%, sys=2.66%, ctx=1650, majf=0, minf=1 00:10:58.569 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:58.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.569 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.569 issued rwts: total=626,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.569 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:58.569 job3: (groupid=0, jobs=1): err= 0: pid=3073494: Sun Sep 29 16:18:59 2024 00:10:58.569 read: IOPS=166, BW=667KiB/s (683kB/s)(672KiB/1008msec) 00:10:58.569 slat (nsec): min=7195, max=53194, avg=25991.64, stdev=10925.96 00:10:58.569 clat (usec): min=276, 
max=42039, avg=4757.39, stdev=12695.20 00:10:58.569 lat (usec): min=289, max=42052, avg=4783.38, stdev=12693.36 00:10:58.569 clat percentiles (usec): 00:10:58.569 | 1.00th=[ 277], 5.00th=[ 281], 10.00th=[ 289], 20.00th=[ 318], 00:10:58.569 | 30.00th=[ 351], 40.00th=[ 371], 50.00th=[ 383], 60.00th=[ 392], 00:10:58.569 | 70.00th=[ 408], 80.00th=[ 449], 90.00th=[41157], 95.00th=[41157], 00:10:58.569 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:58.569 | 99.99th=[42206] 00:10:58.569 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:10:58.569 slat (nsec): min=8518, max=62125, avg=26858.73, stdev=11037.10 00:10:58.569 clat (usec): min=214, max=583, avg=357.69, stdev=76.61 00:10:58.569 lat (usec): min=237, max=624, avg=384.55, stdev=74.76 00:10:58.569 clat percentiles (usec): 00:10:58.569 | 1.00th=[ 219], 5.00th=[ 233], 10.00th=[ 245], 20.00th=[ 273], 00:10:58.569 | 30.00th=[ 318], 40.00th=[ 338], 50.00th=[ 367], 60.00th=[ 388], 00:10:58.569 | 70.00th=[ 408], 80.00th=[ 429], 90.00th=[ 457], 95.00th=[ 474], 00:10:58.569 | 99.00th=[ 506], 99.50th=[ 523], 99.90th=[ 586], 99.95th=[ 586], 00:10:58.569 | 99.99th=[ 586] 00:10:58.569 bw ( KiB/s): min= 4096, max= 4096, per=33.97%, avg=4096.00, stdev= 0.00, samples=1 00:10:58.569 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:58.569 lat (usec) : 250=8.68%, 500=86.47%, 750=2.21% 00:10:58.569 lat (msec) : 50=2.65% 00:10:58.569 cpu : usr=1.49%, sys=1.29%, ctx=682, majf=0, minf=1 00:10:58.569 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:58.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.569 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.569 issued rwts: total=168,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.569 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:58.569 00:10:58.569 Run status group 0 (all jobs): 00:10:58.569 READ: 
bw=5213KiB/s (5338kB/s), 75.3KiB/s-2462KiB/s (77.1kB/s-2521kB/s), io=5312KiB (5439kB), run=1008-1019msec 00:10:58.569 WRITE: bw=11.8MiB/s (12.3MB/s), 2030KiB/s-4028KiB/s (2078kB/s-4124kB/s), io=12.0MiB (12.6MB), run=1008-1019msec 00:10:58.569 00:10:58.569 Disk stats (read/write): 00:10:58.569 nvme0n1: ios=562/848, merge=0/0, ticks=708/247, in_queue=955, util=99.40% 00:10:58.569 nvme0n2: ios=36/512, merge=0/0, ticks=637/192, in_queue=829, util=87.08% 00:10:58.569 nvme0n3: ios=617/1024, merge=0/0, ticks=541/261, in_queue=802, util=88.92% 00:10:58.569 nvme0n4: ios=214/512, merge=0/0, ticks=765/187, in_queue=952, util=99.79% 00:10:58.569 16:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:58.569 [global] 00:10:58.569 thread=1 00:10:58.569 invalidate=1 00:10:58.569 rw=randwrite 00:10:58.569 time_based=1 00:10:58.569 runtime=1 00:10:58.569 ioengine=libaio 00:10:58.569 direct=1 00:10:58.569 bs=4096 00:10:58.569 iodepth=1 00:10:58.569 norandommap=0 00:10:58.569 numjobs=1 00:10:58.569 00:10:58.569 verify_dump=1 00:10:58.569 verify_backlog=512 00:10:58.569 verify_state_save=0 00:10:58.569 do_verify=1 00:10:58.569 verify=crc32c-intel 00:10:58.827 [job0] 00:10:58.827 filename=/dev/nvme0n1 00:10:58.827 [job1] 00:10:58.827 filename=/dev/nvme0n2 00:10:58.827 [job2] 00:10:58.827 filename=/dev/nvme0n3 00:10:58.827 [job3] 00:10:58.827 filename=/dev/nvme0n4 00:10:58.827 Could not set queue depth (nvme0n1) 00:10:58.827 Could not set queue depth (nvme0n2) 00:10:58.827 Could not set queue depth (nvme0n3) 00:10:58.827 Could not set queue depth (nvme0n4) 00:10:58.827 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.827 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.827 job2: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.827 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.827 fio-3.35 00:10:58.827 Starting 4 threads 00:11:00.202 00:11:00.202 job0: (groupid=0, jobs=1): err= 0: pid=3073720: Sun Sep 29 16:19:00 2024 00:11:00.202 read: IOPS=19, BW=78.9KiB/s (80.8kB/s)(80.0KiB/1014msec) 00:11:00.202 slat (nsec): min=13848, max=35186, avg=24454.35, stdev=9363.73 00:11:00.202 clat (usec): min=40889, max=41970, avg=41183.93, stdev=405.39 00:11:00.202 lat (usec): min=40923, max=42005, avg=41208.38, stdev=405.23 00:11:00.202 clat percentiles (usec): 00:11:00.202 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:11:00.202 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:00.202 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:11:00.202 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:00.202 | 99.99th=[42206] 00:11:00.202 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:11:00.202 slat (nsec): min=7477, max=64381, avg=16059.78, stdev=8612.08 00:11:00.202 clat (usec): min=207, max=1287, avg=349.87, stdev=99.36 00:11:00.202 lat (usec): min=226, max=1300, avg=365.93, stdev=97.45 00:11:00.202 clat percentiles (usec): 00:11:00.202 | 1.00th=[ 221], 5.00th=[ 235], 10.00th=[ 249], 20.00th=[ 273], 00:11:00.202 | 30.00th=[ 293], 40.00th=[ 310], 50.00th=[ 330], 60.00th=[ 359], 00:11:00.202 | 70.00th=[ 396], 80.00th=[ 416], 90.00th=[ 457], 95.00th=[ 519], 00:11:00.202 | 99.00th=[ 578], 99.50th=[ 668], 99.90th=[ 1287], 99.95th=[ 1287], 00:11:00.202 | 99.99th=[ 1287] 00:11:00.202 bw ( KiB/s): min= 4096, max= 4096, per=33.80%, avg=4096.00, stdev= 0.00, samples=1 00:11:00.202 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:00.202 lat (usec) : 250=9.96%, 500=80.26%, 750=5.64% 00:11:00.202 lat (msec) : 2=0.38%, 
50=3.76% 00:11:00.202 cpu : usr=0.59%, sys=0.69%, ctx=535, majf=0, minf=1 00:11:00.202 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:00.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.202 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.202 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.202 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:00.202 job1: (groupid=0, jobs=1): err= 0: pid=3073721: Sun Sep 29 16:19:00 2024 00:11:00.202 read: IOPS=19, BW=79.1KiB/s (80.9kB/s)(80.0KiB/1012msec) 00:11:00.202 slat (nsec): min=13591, max=48208, avg=26157.35, stdev=10492.09 00:11:00.202 clat (usec): min=40815, max=41021, avg=40962.31, stdev=46.87 00:11:00.202 lat (usec): min=40849, max=41048, avg=40988.47, stdev=43.10 00:11:00.202 clat percentiles (usec): 00:11:00.202 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:11:00.202 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:00.202 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:00.202 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:00.202 | 99.99th=[41157] 00:11:00.202 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:11:00.202 slat (nsec): min=6438, max=46633, avg=16476.91, stdev=7865.35 00:11:00.202 clat (usec): min=195, max=1329, avg=353.87, stdev=95.50 00:11:00.202 lat (usec): min=211, max=1349, avg=370.35, stdev=94.61 00:11:00.202 clat percentiles (usec): 00:11:00.202 | 1.00th=[ 210], 5.00th=[ 233], 10.00th=[ 247], 20.00th=[ 273], 00:11:00.202 | 30.00th=[ 306], 40.00th=[ 322], 50.00th=[ 347], 60.00th=[ 363], 00:11:00.202 | 70.00th=[ 388], 80.00th=[ 429], 90.00th=[ 469], 95.00th=[ 506], 00:11:00.202 | 99.00th=[ 594], 99.50th=[ 652], 99.90th=[ 1336], 99.95th=[ 1336], 00:11:00.202 | 99.99th=[ 1336] 00:11:00.202 bw ( KiB/s): min= 4096, max= 4096, per=33.80%, 
avg=4096.00, stdev= 0.00, samples=1 00:11:00.202 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:00.202 lat (usec) : 250=10.53%, 500=79.89%, 750=5.45%, 1000=0.19% 00:11:00.202 lat (msec) : 2=0.19%, 50=3.76% 00:11:00.202 cpu : usr=0.79%, sys=0.79%, ctx=534, majf=0, minf=1 00:11:00.202 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:00.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.203 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.203 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:00.203 job2: (groupid=0, jobs=1): err= 0: pid=3073722: Sun Sep 29 16:19:00 2024 00:11:00.203 read: IOPS=1387, BW=5550KiB/s (5684kB/s)(5556KiB/1001msec) 00:11:00.203 slat (nsec): min=4936, max=71295, avg=19231.13, stdev=12263.92 00:11:00.203 clat (usec): min=285, max=41260, avg=394.41, stdev=1098.77 00:11:00.203 lat (usec): min=291, max=41296, avg=413.64, stdev=1099.69 00:11:00.203 clat percentiles (usec): 00:11:00.203 | 1.00th=[ 293], 5.00th=[ 302], 10.00th=[ 306], 20.00th=[ 314], 00:11:00.203 | 30.00th=[ 322], 40.00th=[ 338], 50.00th=[ 351], 60.00th=[ 375], 00:11:00.203 | 70.00th=[ 392], 80.00th=[ 412], 90.00th=[ 445], 95.00th=[ 461], 00:11:00.203 | 99.00th=[ 537], 99.50th=[ 553], 99.90th=[ 906], 99.95th=[41157], 00:11:00.203 | 99.99th=[41157] 00:11:00.203 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:11:00.203 slat (nsec): min=6963, max=79595, avg=15763.83, stdev=9283.13 00:11:00.203 clat (usec): min=184, max=1476, avg=252.57, stdev=92.84 00:11:00.203 lat (usec): min=193, max=1490, avg=268.34, stdev=95.02 00:11:00.203 clat percentiles (usec): 00:11:00.203 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 204], 00:11:00.203 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 219], 60.00th=[ 229], 00:11:00.203 | 70.00th=[ 249], 80.00th=[ 281], 
90.00th=[ 355], 95.00th=[ 445], 00:11:00.203 | 99.00th=[ 529], 99.50th=[ 701], 99.90th=[ 1221], 99.95th=[ 1483], 00:11:00.203 | 99.99th=[ 1483] 00:11:00.203 bw ( KiB/s): min= 8192, max= 8192, per=67.60%, avg=8192.00, stdev= 0.00, samples=1 00:11:00.203 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:00.203 lat (usec) : 250=37.13%, 500=61.33%, 750=1.23%, 1000=0.14% 00:11:00.203 lat (msec) : 2=0.14%, 50=0.03% 00:11:00.203 cpu : usr=2.20%, sys=5.70%, ctx=2926, majf=0, minf=1 00:11:00.203 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:00.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.203 issued rwts: total=1389,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.203 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:00.203 job3: (groupid=0, jobs=1): err= 0: pid=3073723: Sun Sep 29 16:19:00 2024 00:11:00.203 read: IOPS=184, BW=739KiB/s (757kB/s)(740KiB/1001msec) 00:11:00.203 slat (nsec): min=9183, max=42587, avg=24350.44, stdev=7391.14 00:11:00.203 clat (usec): min=345, max=41046, avg=4610.44, stdev=12333.43 00:11:00.203 lat (usec): min=373, max=41082, avg=4634.79, stdev=12333.32 00:11:00.203 clat percentiles (usec): 00:11:00.203 | 1.00th=[ 351], 5.00th=[ 363], 10.00th=[ 379], 20.00th=[ 408], 00:11:00.203 | 30.00th=[ 429], 40.00th=[ 449], 50.00th=[ 461], 60.00th=[ 469], 00:11:00.203 | 70.00th=[ 482], 80.00th=[ 494], 90.00th=[40633], 95.00th=[41157], 00:11:00.203 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:00.203 | 99.99th=[41157] 00:11:00.203 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:11:00.203 slat (nsec): min=6600, max=49699, avg=12429.41, stdev=5648.34 00:11:00.203 clat (usec): min=198, max=681, avg=261.30, stdev=53.47 00:11:00.203 lat (usec): min=207, max=694, avg=273.73, stdev=54.73 00:11:00.203 clat percentiles 
(usec): 00:11:00.203 | 1.00th=[ 206], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 225], 00:11:00.203 | 30.00th=[ 231], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 249], 00:11:00.203 | 70.00th=[ 265], 80.00th=[ 310], 90.00th=[ 338], 95.00th=[ 359], 00:11:00.203 | 99.00th=[ 412], 99.50th=[ 441], 99.90th=[ 685], 99.95th=[ 685], 00:11:00.203 | 99.99th=[ 685] 00:11:00.203 bw ( KiB/s): min= 4096, max= 4096, per=33.80%, avg=4096.00, stdev= 0.00, samples=1 00:11:00.203 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:00.203 lat (usec) : 250=45.62%, 500=49.50%, 750=2.01% 00:11:00.203 lat (msec) : 2=0.14%, 50=2.73% 00:11:00.203 cpu : usr=1.20%, sys=0.70%, ctx=698, majf=0, minf=1 00:11:00.203 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:00.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.203 issued rwts: total=185,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.203 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:00.203 00:11:00.203 Run status group 0 (all jobs): 00:11:00.203 READ: bw=6367KiB/s (6520kB/s), 78.9KiB/s-5550KiB/s (80.8kB/s-5684kB/s), io=6456KiB (6611kB), run=1001-1014msec 00:11:00.203 WRITE: bw=11.8MiB/s (12.4MB/s), 2020KiB/s-6138KiB/s (2068kB/s-6285kB/s), io=12.0MiB (12.6MB), run=1001-1014msec 00:11:00.203 00:11:00.203 Disk stats (read/write): 00:11:00.203 nvme0n1: ios=66/512, merge=0/0, ticks=762/181, in_queue=943, util=93.59% 00:11:00.203 nvme0n2: ios=40/512, merge=0/0, ticks=1599/177, in_queue=1776, util=100.00% 00:11:00.203 nvme0n3: ios=1193/1536, merge=0/0, ticks=591/368, in_queue=959, util=99.69% 00:11:00.203 nvme0n4: ios=74/512, merge=0/0, ticks=1016/134, in_queue=1150, util=97.35% 00:11:00.203 16:19:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write 
-r 1 -v 00:11:00.203 [global] 00:11:00.203 thread=1 00:11:00.203 invalidate=1 00:11:00.203 rw=write 00:11:00.203 time_based=1 00:11:00.203 runtime=1 00:11:00.203 ioengine=libaio 00:11:00.203 direct=1 00:11:00.203 bs=4096 00:11:00.203 iodepth=128 00:11:00.203 norandommap=0 00:11:00.203 numjobs=1 00:11:00.203 00:11:00.203 verify_dump=1 00:11:00.203 verify_backlog=512 00:11:00.203 verify_state_save=0 00:11:00.203 do_verify=1 00:11:00.203 verify=crc32c-intel 00:11:00.203 [job0] 00:11:00.203 filename=/dev/nvme0n1 00:11:00.203 [job1] 00:11:00.203 filename=/dev/nvme0n2 00:11:00.203 [job2] 00:11:00.203 filename=/dev/nvme0n3 00:11:00.203 [job3] 00:11:00.203 filename=/dev/nvme0n4 00:11:00.203 Could not set queue depth (nvme0n1) 00:11:00.203 Could not set queue depth (nvme0n2) 00:11:00.203 Could not set queue depth (nvme0n3) 00:11:00.203 Could not set queue depth (nvme0n4) 00:11:00.462 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:00.462 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:00.462 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:00.462 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:00.462 fio-3.35 00:11:00.462 Starting 4 threads 00:11:01.838 00:11:01.838 job0: (groupid=0, jobs=1): err= 0: pid=3074075: Sun Sep 29 16:19:02 2024 00:11:01.838 read: IOPS=2605, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1007msec) 00:11:01.838 slat (usec): min=2, max=20298, avg=167.87, stdev=1224.65 00:11:01.838 clat (usec): min=1950, max=69649, avg=21910.08, stdev=9326.15 00:11:01.838 lat (usec): min=8173, max=78068, avg=22077.95, stdev=9433.41 00:11:01.838 clat percentiles (usec): 00:11:01.838 | 1.00th=[ 8455], 5.00th=[12256], 10.00th=[12649], 20.00th=[13566], 00:11:01.838 | 30.00th=[14877], 40.00th=[15664], 50.00th=[21103], 60.00th=[23987], 
00:11:01.838 | 70.00th=[25297], 80.00th=[29754], 90.00th=[35914], 95.00th=[38011], 00:11:01.838 | 99.00th=[54789], 99.50th=[61604], 99.90th=[63177], 99.95th=[63177], 00:11:01.838 | 99.99th=[69731] 00:11:01.838 write: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec); 0 zone resets 00:11:01.838 slat (usec): min=4, max=12772, avg=176.39, stdev=1053.78 00:11:01.838 clat (usec): min=8725, max=79640, avg=22616.07, stdev=14251.01 00:11:01.838 lat (usec): min=8745, max=80473, avg=22792.46, stdev=14369.31 00:11:01.838 clat percentiles (usec): 00:11:01.838 | 1.00th=[11207], 5.00th=[13173], 10.00th=[13566], 20.00th=[13829], 00:11:01.838 | 30.00th=[14091], 40.00th=[15139], 50.00th=[17957], 60.00th=[19792], 00:11:01.838 | 70.00th=[21627], 80.00th=[22938], 90.00th=[53216], 95.00th=[58459], 00:11:01.838 | 99.00th=[71828], 99.50th=[76022], 99.90th=[79168], 99.95th=[79168], 00:11:01.838 | 99.99th=[79168] 00:11:01.838 bw ( KiB/s): min=11016, max=13048, per=23.29%, avg=12032.00, stdev=1436.84, samples=2 00:11:01.838 iops : min= 2754, max= 3262, avg=3008.00, stdev=359.21, samples=2 00:11:01.838 lat (msec) : 2=0.02%, 10=1.02%, 20=54.62%, 50=38.01%, 100=6.34% 00:11:01.838 cpu : usr=2.88%, sys=4.17%, ctx=174, majf=0, minf=1 00:11:01.838 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:11:01.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.838 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:01.838 issued rwts: total=2624,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.838 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:01.838 job1: (groupid=0, jobs=1): err= 0: pid=3074076: Sun Sep 29 16:19:02 2024 00:11:01.838 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:11:01.838 slat (usec): min=2, max=17446, avg=109.96, stdev=794.04 00:11:01.838 clat (usec): min=6397, max=55667, avg=14959.84, stdev=5255.85 00:11:01.838 lat (usec): min=6402, max=55672, avg=15069.79, 
stdev=5332.23 00:11:01.838 clat percentiles (usec): 00:11:01.838 | 1.00th=[ 9110], 5.00th=[10421], 10.00th=[11207], 20.00th=[12518], 00:11:01.838 | 30.00th=[12911], 40.00th=[13304], 50.00th=[13566], 60.00th=[13960], 00:11:01.838 | 70.00th=[15401], 80.00th=[16450], 90.00th=[19268], 95.00th=[21627], 00:11:01.838 | 99.00th=[45876], 99.50th=[47973], 99.90th=[54789], 99.95th=[54789], 00:11:01.838 | 99.99th=[55837] 00:11:01.838 write: IOPS=4292, BW=16.8MiB/s (17.6MB/s)(16.8MiB/1002msec); 0 zone resets 00:11:01.838 slat (usec): min=3, max=13000, avg=113.95, stdev=752.54 00:11:01.838 clat (usec): min=865, max=59925, avg=15306.08, stdev=9118.68 00:11:01.838 lat (usec): min=879, max=59931, avg=15420.04, stdev=9193.97 00:11:01.838 clat percentiles (usec): 00:11:01.838 | 1.00th=[ 2507], 5.00th=[ 7308], 10.00th=[11207], 20.00th=[11994], 00:11:01.838 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13173], 60.00th=[13435], 00:11:01.838 | 70.00th=[14222], 80.00th=[15533], 90.00th=[19268], 95.00th=[40633], 00:11:01.838 | 99.00th=[55313], 99.50th=[56361], 99.90th=[60031], 99.95th=[60031], 00:11:01.838 | 99.99th=[60031] 00:11:01.838 bw ( KiB/s): min=16016, max=17376, per=32.32%, avg=16696.00, stdev=961.67, samples=2 00:11:01.838 iops : min= 4004, max= 4344, avg=4174.00, stdev=240.42, samples=2 00:11:01.838 lat (usec) : 1000=0.06% 00:11:01.838 lat (msec) : 2=0.14%, 4=0.60%, 10=5.48%, 20=86.85%, 50=5.28% 00:11:01.838 lat (msec) : 100=1.60% 00:11:01.838 cpu : usr=4.20%, sys=8.19%, ctx=279, majf=0, minf=1 00:11:01.838 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:01.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.838 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:01.838 issued rwts: total=4096,4301,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.838 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:01.838 job2: (groupid=0, jobs=1): err= 0: pid=3074078: Sun Sep 29 16:19:02 2024 
00:11:01.838 read: IOPS=2918, BW=11.4MiB/s (12.0MB/s)(11.5MiB/1007msec) 00:11:01.838 slat (usec): min=2, max=21316, avg=187.35, stdev=1287.32 00:11:01.838 clat (usec): min=381, max=52638, avg=23332.96, stdev=9866.82 00:11:01.838 lat (usec): min=5989, max=53013, avg=23520.31, stdev=9964.40 00:11:01.838 clat percentiles (usec): 00:11:01.838 | 1.00th=[ 8029], 5.00th=[12256], 10.00th=[13173], 20.00th=[15139], 00:11:01.838 | 30.00th=[15270], 40.00th=[16188], 50.00th=[18744], 60.00th=[24249], 00:11:01.838 | 70.00th=[31065], 80.00th=[33817], 90.00th=[38536], 95.00th=[39584], 00:11:01.838 | 99.00th=[45351], 99.50th=[47449], 99.90th=[49546], 99.95th=[50070], 00:11:01.838 | 99.99th=[52691] 00:11:01.838 write: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec); 0 zone resets 00:11:01.838 slat (usec): min=3, max=13292, avg=133.31, stdev=977.21 00:11:01.838 clat (usec): min=2068, max=46308, avg=19237.91, stdev=8249.11 00:11:01.838 lat (usec): min=2215, max=46332, avg=19371.22, stdev=8340.31 00:11:01.838 clat percentiles (usec): 00:11:01.838 | 1.00th=[ 4178], 5.00th=[ 9372], 10.00th=[12256], 20.00th=[14353], 00:11:01.838 | 30.00th=[15008], 40.00th=[15664], 50.00th=[16188], 60.00th=[16909], 00:11:01.838 | 70.00th=[18744], 80.00th=[26870], 90.00th=[32900], 95.00th=[34866], 00:11:01.838 | 99.00th=[44827], 99.50th=[45351], 99.90th=[45351], 99.95th=[45351], 00:11:01.838 | 99.99th=[46400] 00:11:01.838 bw ( KiB/s): min= 8192, max=16384, per=23.79%, avg=12288.00, stdev=5792.62, samples=2 00:11:01.838 iops : min= 2048, max= 4096, avg=3072.00, stdev=1448.15, samples=2 00:11:01.838 lat (usec) : 500=0.02% 00:11:01.838 lat (msec) : 4=0.38%, 10=3.21%, 20=57.74%, 50=38.61%, 100=0.03% 00:11:01.838 cpu : usr=2.39%, sys=4.57%, ctx=247, majf=0, minf=1 00:11:01.838 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:11:01.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.838 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.1% 00:11:01.838 issued rwts: total=2939,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.838 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:01.838 job3: (groupid=0, jobs=1): err= 0: pid=3074079: Sun Sep 29 16:19:02 2024 00:11:01.838 read: IOPS=2484, BW=9936KiB/s (10.2MB/s)(9956KiB/1002msec) 00:11:01.838 slat (usec): min=2, max=18043, avg=195.95, stdev=1141.59 00:11:01.838 clat (usec): min=1290, max=118196, avg=25998.04, stdev=18004.72 00:11:01.838 lat (usec): min=1401, max=118200, avg=26194.00, stdev=18117.48 00:11:01.838 clat percentiles (msec): 00:11:01.838 | 1.00th=[ 6], 5.00th=[ 13], 10.00th=[ 14], 20.00th=[ 15], 00:11:01.838 | 30.00th=[ 15], 40.00th=[ 16], 50.00th=[ 17], 60.00th=[ 24], 00:11:01.838 | 70.00th=[ 33], 80.00th=[ 35], 90.00th=[ 46], 95.00th=[ 56], 00:11:01.838 | 99.00th=[ 106], 99.50th=[ 113], 99.90th=[ 118], 99.95th=[ 118], 00:11:01.838 | 99.99th=[ 118] 00:11:01.838 write: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec); 0 zone resets 00:11:01.838 slat (usec): min=3, max=17587, avg=191.80, stdev=1164.26 00:11:01.838 clat (msec): min=10, max=111, avg=23.91, stdev=14.46 00:11:01.838 lat (msec): min=11, max=111, avg=24.10, stdev=14.60 00:11:01.839 clat percentiles (msec): 00:11:01.839 | 1.00th=[ 13], 5.00th=[ 15], 10.00th=[ 15], 20.00th=[ 16], 00:11:01.839 | 30.00th=[ 16], 40.00th=[ 18], 50.00th=[ 21], 60.00th=[ 23], 00:11:01.839 | 70.00th=[ 25], 80.00th=[ 30], 90.00th=[ 34], 95.00th=[ 45], 00:11:01.839 | 99.00th=[ 101], 99.50th=[ 110], 99.90th=[ 111], 99.95th=[ 111], 00:11:01.839 | 99.99th=[ 111] 00:11:01.839 bw ( KiB/s): min= 8192, max=12288, per=19.82%, avg=10240.00, stdev=2896.31, samples=2 00:11:01.839 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:11:01.839 lat (msec) : 2=0.18%, 10=1.27%, 20=51.02%, 50=41.55%, 100=4.75% 00:11:01.839 lat (msec) : 250=1.23% 00:11:01.839 cpu : usr=2.60%, sys=4.30%, ctx=248, majf=0, minf=1 00:11:01.839 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, 
>=64=98.8% 00:11:01.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.839 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:01.839 issued rwts: total=2489,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.839 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:01.839 00:11:01.839 Run status group 0 (all jobs): 00:11:01.839 READ: bw=47.1MiB/s (49.4MB/s), 9936KiB/s-16.0MiB/s (10.2MB/s-16.7MB/s), io=47.5MiB (49.8MB), run=1002-1007msec 00:11:01.839 WRITE: bw=50.4MiB/s (52.9MB/s), 9.98MiB/s-16.8MiB/s (10.5MB/s-17.6MB/s), io=50.8MiB (53.3MB), run=1002-1007msec 00:11:01.839 00:11:01.839 Disk stats (read/write): 00:11:01.839 nvme0n1: ios=2087/2467, merge=0/0, ticks=22063/28744, in_queue=50807, util=98.50% 00:11:01.839 nvme0n2: ios=3108/3454, merge=0/0, ticks=37138/43611, in_queue=80749, util=98.56% 00:11:01.839 nvme0n3: ios=2605/2774, merge=0/0, ticks=37800/37231, in_queue=75031, util=100.00% 00:11:01.839 nvme0n4: ios=1536/1892, merge=0/0, ticks=15982/17103, in_queue=33085, util=89.10% 00:11:01.839 16:19:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:01.839 [global] 00:11:01.839 thread=1 00:11:01.839 invalidate=1 00:11:01.839 rw=randwrite 00:11:01.839 time_based=1 00:11:01.839 runtime=1 00:11:01.839 ioengine=libaio 00:11:01.839 direct=1 00:11:01.839 bs=4096 00:11:01.839 iodepth=128 00:11:01.839 norandommap=0 00:11:01.839 numjobs=1 00:11:01.839 00:11:01.839 verify_dump=1 00:11:01.839 verify_backlog=512 00:11:01.839 verify_state_save=0 00:11:01.839 do_verify=1 00:11:01.839 verify=crc32c-intel 00:11:01.839 [job0] 00:11:01.839 filename=/dev/nvme0n1 00:11:01.839 [job1] 00:11:01.839 filename=/dev/nvme0n2 00:11:01.839 [job2] 00:11:01.839 filename=/dev/nvme0n3 00:11:01.839 [job3] 00:11:01.839 filename=/dev/nvme0n4 00:11:01.839 Could not set queue depth (nvme0n1) 
00:11:01.839 Could not set queue depth (nvme0n2) 00:11:01.839 Could not set queue depth (nvme0n3) 00:11:01.839 Could not set queue depth (nvme0n4) 00:11:01.839 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:01.839 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:01.839 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:01.839 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:01.839 fio-3.35 00:11:01.839 Starting 4 threads 00:11:03.215 00:11:03.215 job0: (groupid=0, jobs=1): err= 0: pid=3074305: Sun Sep 29 16:19:03 2024 00:11:03.215 read: IOPS=3227, BW=12.6MiB/s (13.2MB/s)(12.7MiB/1010msec) 00:11:03.215 slat (nsec): min=1922, max=10465k, avg=156068.76, stdev=976423.97 00:11:03.215 clat (usec): min=7810, max=42585, avg=20041.09, stdev=5705.32 00:11:03.215 lat (usec): min=7818, max=42600, avg=20197.16, stdev=5798.48 00:11:03.215 clat percentiles (usec): 00:11:03.215 | 1.00th=[ 9110], 5.00th=[11207], 10.00th=[14091], 20.00th=[15795], 00:11:03.215 | 30.00th=[16188], 40.00th=[17433], 50.00th=[19006], 60.00th=[20317], 00:11:03.215 | 70.00th=[23200], 80.00th=[24249], 90.00th=[28443], 95.00th=[31851], 00:11:03.215 | 99.00th=[32375], 99.50th=[32900], 99.90th=[42206], 99.95th=[42206], 00:11:03.215 | 99.99th=[42730] 00:11:03.215 write: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec); 0 zone resets 00:11:03.215 slat (usec): min=2, max=12137, avg=130.54, stdev=799.81 00:11:03.215 clat (usec): min=6679, max=34807, avg=17399.66, stdev=4932.86 00:11:03.215 lat (usec): min=6699, max=34846, avg=17530.20, stdev=5001.53 00:11:03.215 clat percentiles (usec): 00:11:03.215 | 1.00th=[ 8094], 5.00th=[ 9765], 10.00th=[11076], 20.00th=[13829], 00:11:03.215 | 30.00th=[14746], 40.00th=[15533], 50.00th=[16319], 60.00th=[17695], 
00:11:03.215 | 70.00th=[20055], 80.00th=[21890], 90.00th=[24511], 95.00th=[26608], 00:11:03.215 | 99.00th=[28181], 99.50th=[29230], 99.90th=[33817], 99.95th=[34866], 00:11:03.215 | 99.99th=[34866] 00:11:03.215 bw ( KiB/s): min=12640, max=16032, per=25.89%, avg=14336.00, stdev=2398.51, samples=2 00:11:03.215 iops : min= 3160, max= 4008, avg=3584.00, stdev=599.63, samples=2 00:11:03.215 lat (msec) : 10=4.81%, 20=59.37%, 50=35.83% 00:11:03.215 cpu : usr=2.58%, sys=4.46%, ctx=275, majf=0, minf=1 00:11:03.215 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:03.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.215 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:03.215 issued rwts: total=3260,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.215 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:03.215 job1: (groupid=0, jobs=1): err= 0: pid=3074306: Sun Sep 29 16:19:03 2024 00:11:03.215 read: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec) 00:11:03.215 slat (usec): min=2, max=12493, avg=119.08, stdev=729.31 00:11:03.215 clat (usec): min=7561, max=35006, avg=15633.27, stdev=3968.40 00:11:03.215 lat (usec): min=7570, max=37990, avg=15752.35, stdev=4010.30 00:11:03.215 clat percentiles (usec): 00:11:03.215 | 1.00th=[ 8160], 5.00th=[10945], 10.00th=[11863], 20.00th=[13304], 00:11:03.215 | 30.00th=[13566], 40.00th=[14222], 50.00th=[14615], 60.00th=[15139], 00:11:03.215 | 70.00th=[16581], 80.00th=[17433], 90.00th=[20579], 95.00th=[24249], 00:11:03.215 | 99.00th=[30278], 99.50th=[34866], 99.90th=[34866], 99.95th=[34866], 00:11:03.215 | 99.99th=[34866] 00:11:03.215 write: IOPS=4246, BW=16.6MiB/s (17.4MB/s)(16.8MiB/1011msec); 0 zone resets 00:11:03.215 slat (usec): min=2, max=8754, avg=112.61, stdev=692.45 00:11:03.215 clat (usec): min=6640, max=40298, avg=14877.73, stdev=3288.58 00:11:03.215 lat (usec): min=6651, max=40309, avg=14990.34, stdev=3334.63 00:11:03.215 
clat percentiles (usec): 00:11:03.215 | 1.00th=[ 7832], 5.00th=[10683], 10.00th=[11994], 20.00th=[13698], 00:11:03.215 | 30.00th=[14222], 40.00th=[14484], 50.00th=[14615], 60.00th=[14877], 00:11:03.215 | 70.00th=[15008], 80.00th=[15664], 90.00th=[16909], 95.00th=[19006], 00:11:03.215 | 99.00th=[28967], 99.50th=[35914], 99.90th=[40109], 99.95th=[40109], 00:11:03.215 | 99.99th=[40109] 00:11:03.215 bw ( KiB/s): min=15856, max=17464, per=30.09%, avg=16660.00, stdev=1137.03, samples=2 00:11:03.215 iops : min= 3964, max= 4366, avg=4165.00, stdev=284.26, samples=2 00:11:03.215 lat (msec) : 10=2.66%, 20=90.14%, 50=7.20% 00:11:03.215 cpu : usr=4.06%, sys=4.55%, ctx=362, majf=0, minf=1 00:11:03.215 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:03.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.215 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:03.215 issued rwts: total=4096,4293,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.215 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:03.215 job2: (groupid=0, jobs=1): err= 0: pid=3074307: Sun Sep 29 16:19:03 2024 00:11:03.215 read: IOPS=3392, BW=13.3MiB/s (13.9MB/s)(13.3MiB/1003msec) 00:11:03.215 slat (usec): min=2, max=14247, avg=147.23, stdev=862.10 00:11:03.215 clat (usec): min=651, max=49296, avg=18469.68, stdev=6876.33 00:11:03.215 lat (usec): min=6233, max=49310, avg=18616.91, stdev=6933.40 00:11:03.215 clat percentiles (usec): 00:11:03.215 | 1.00th=[ 6652], 5.00th=[12387], 10.00th=[13304], 20.00th=[15270], 00:11:03.215 | 30.00th=[15795], 40.00th=[16581], 50.00th=[16909], 60.00th=[17171], 00:11:03.215 | 70.00th=[17957], 80.00th=[18744], 90.00th=[24249], 95.00th=[39584], 00:11:03.215 | 99.00th=[44303], 99.50th=[47449], 99.90th=[47449], 99.95th=[49021], 00:11:03.215 | 99.99th=[49546] 00:11:03.215 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:11:03.215 slat (usec): min=2, max=9565, 
avg=132.47, stdev=701.42 00:11:03.215 clat (usec): min=8731, max=38626, avg=17741.04, stdev=3747.14 00:11:03.215 lat (usec): min=9786, max=38685, avg=17873.50, stdev=3767.43 00:11:03.215 clat percentiles (usec): 00:11:03.215 | 1.00th=[11994], 5.00th=[13173], 10.00th=[14615], 20.00th=[15270], 00:11:03.215 | 30.00th=[15795], 40.00th=[16188], 50.00th=[16909], 60.00th=[17171], 00:11:03.215 | 70.00th=[18482], 80.00th=[19792], 90.00th=[22152], 95.00th=[24511], 00:11:03.215 | 99.00th=[34866], 99.50th=[34866], 99.90th=[34866], 99.95th=[34866], 00:11:03.215 | 99.99th=[38536] 00:11:03.215 bw ( KiB/s): min=12816, max=15856, per=25.89%, avg=14336.00, stdev=2149.60, samples=2 00:11:03.215 iops : min= 3204, max= 3964, avg=3584.00, stdev=537.40, samples=2 00:11:03.215 lat (usec) : 750=0.01% 00:11:03.215 lat (msec) : 10=0.67%, 20=82.12%, 50=17.19% 00:11:03.215 cpu : usr=2.79%, sys=4.89%, ctx=345, majf=0, minf=1 00:11:03.215 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:03.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.215 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:03.215 issued rwts: total=3403,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.215 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:03.215 job3: (groupid=0, jobs=1): err= 0: pid=3074308: Sun Sep 29 16:19:03 2024 00:11:03.215 read: IOPS=2116, BW=8466KiB/s (8669kB/s)(8576KiB/1013msec) 00:11:03.215 slat (usec): min=2, max=16458, avg=196.72, stdev=1191.35 00:11:03.215 clat (usec): min=2300, max=62230, avg=21804.47, stdev=6495.15 00:11:03.215 lat (usec): min=8364, max=62264, avg=22001.19, stdev=6576.86 00:11:03.215 clat percentiles (usec): 00:11:03.215 | 1.00th=[ 8455], 5.00th=[12911], 10.00th=[15664], 20.00th=[18744], 00:11:03.215 | 30.00th=[19530], 40.00th=[20317], 50.00th=[21365], 60.00th=[21627], 00:11:03.215 | 70.00th=[21890], 80.00th=[23200], 90.00th=[30016], 95.00th=[33817], 00:11:03.215 | 
99.00th=[49021], 99.50th=[54789], 99.90th=[62129], 99.95th=[62129], 00:11:03.215 | 99.99th=[62129] 00:11:03.215 write: IOPS=2527, BW=9.87MiB/s (10.4MB/s)(10.0MiB/1013msec); 0 zone resets 00:11:03.215 slat (usec): min=3, max=11810, avg=220.11, stdev=931.73 00:11:03.215 clat (usec): min=9631, max=71430, avg=31479.20, stdev=15215.36 00:11:03.215 lat (usec): min=9666, max=71462, avg=31699.32, stdev=15287.54 00:11:03.215 clat percentiles (usec): 00:11:03.215 | 1.00th=[ 9896], 5.00th=[15926], 10.00th=[16909], 20.00th=[18482], 00:11:03.215 | 30.00th=[20841], 40.00th=[21365], 50.00th=[23987], 60.00th=[30802], 00:11:03.215 | 70.00th=[40633], 80.00th=[44827], 90.00th=[56361], 95.00th=[61080], 00:11:03.215 | 99.00th=[68682], 99.50th=[70779], 99.90th=[71828], 99.95th=[71828], 00:11:03.215 | 99.99th=[71828] 00:11:03.215 bw ( KiB/s): min= 8713, max=11528, per=18.28%, avg=10120.50, stdev=1990.51, samples=2 00:11:03.215 iops : min= 2178, max= 2882, avg=2530.00, stdev=497.80, samples=2 00:11:03.215 lat (msec) : 4=0.02%, 10=1.81%, 20=28.06%, 50=62.41%, 100=7.70% 00:11:03.215 cpu : usr=2.37%, sys=4.35%, ctx=317, majf=0, minf=1 00:11:03.215 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:11:03.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.215 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:03.215 issued rwts: total=2144,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.215 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:03.215 00:11:03.215 Run status group 0 (all jobs): 00:11:03.215 READ: bw=49.8MiB/s (52.2MB/s), 8466KiB/s-15.8MiB/s (8669kB/s-16.6MB/s), io=50.4MiB (52.8MB), run=1003-1013msec 00:11:03.215 WRITE: bw=54.1MiB/s (56.7MB/s), 9.87MiB/s-16.6MiB/s (10.4MB/s-17.4MB/s), io=54.8MiB (57.4MB), run=1003-1013msec 00:11:03.215 00:11:03.215 Disk stats (read/write): 00:11:03.215 nvme0n1: ios=2655/3072, merge=0/0, ticks=19360/17598, in_queue=36958, util=85.67% 00:11:03.215 
nvme0n2: ios=3634/3607, merge=0/0, ticks=24736/23908, in_queue=48644, util=90.74% 00:11:03.215 nvme0n3: ios=2786/3072, merge=0/0, ticks=16431/15831, in_queue=32262, util=95.29% 00:11:03.215 nvme0n4: ios=2048/2048, merge=0/0, ticks=19877/29041, in_queue=48918, util=93.88% 00:11:03.215 16:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:03.215 16:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3074444 00:11:03.215 16:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:03.215 16:19:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:03.215 [global] 00:11:03.215 thread=1 00:11:03.215 invalidate=1 00:11:03.215 rw=read 00:11:03.215 time_based=1 00:11:03.215 runtime=10 00:11:03.215 ioengine=libaio 00:11:03.215 direct=1 00:11:03.215 bs=4096 00:11:03.215 iodepth=1 00:11:03.215 norandommap=1 00:11:03.215 numjobs=1 00:11:03.215 00:11:03.215 [job0] 00:11:03.215 filename=/dev/nvme0n1 00:11:03.215 [job1] 00:11:03.215 filename=/dev/nvme0n2 00:11:03.215 [job2] 00:11:03.215 filename=/dev/nvme0n3 00:11:03.216 [job3] 00:11:03.216 filename=/dev/nvme0n4 00:11:03.216 Could not set queue depth (nvme0n1) 00:11:03.216 Could not set queue depth (nvme0n2) 00:11:03.216 Could not set queue depth (nvme0n3) 00:11:03.216 Could not set queue depth (nvme0n4) 00:11:03.473 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:03.473 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:03.473 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:03.473 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:03.473 fio-3.35 00:11:03.473 Starting 4 threads 
00:11:06.754 16:19:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:06.754 16:19:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:06.754 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=29122560, buflen=4096 00:11:06.754 fio: pid=3074545, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:06.754 16:19:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:06.754 16:19:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:06.754 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=352256, buflen=4096 00:11:06.754 fio: pid=3074544, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:07.012 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=39301120, buflen=4096 00:11:07.012 fio: pid=3074542, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:07.012 16:19:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:07.012 16:19:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:07.269 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=1515520, buflen=4096 00:11:07.269 fio: pid=3074543, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:11:07.527 00:11:07.527 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not 
supported): pid=3074542: Sun Sep 29 16:19:07 2024 00:11:07.527 read: IOPS=2789, BW=10.9MiB/s (11.4MB/s)(37.5MiB/3440msec) 00:11:07.527 slat (usec): min=4, max=29027, avg=19.44, stdev=363.62 00:11:07.527 clat (usec): min=230, max=41003, avg=333.17, stdev=442.74 00:11:07.527 lat (usec): min=236, max=41016, avg=351.39, stdev=561.48 00:11:07.527 clat percentiles (usec): 00:11:07.527 | 1.00th=[ 243], 5.00th=[ 253], 10.00th=[ 265], 20.00th=[ 302], 00:11:07.527 | 30.00th=[ 314], 40.00th=[ 322], 50.00th=[ 330], 60.00th=[ 334], 00:11:07.527 | 70.00th=[ 343], 80.00th=[ 351], 90.00th=[ 371], 95.00th=[ 388], 00:11:07.527 | 99.00th=[ 498], 99.50th=[ 529], 99.90th=[ 644], 99.95th=[ 1303], 00:11:07.527 | 99.99th=[41157] 00:11:07.527 bw ( KiB/s): min= 9936, max=11568, per=60.46%, avg=10964.00, stdev=568.70, samples=6 00:11:07.527 iops : min= 2484, max= 2892, avg=2741.00, stdev=142.18, samples=6 00:11:07.527 lat (usec) : 250=3.55%, 500=95.49%, 750=0.90% 00:11:07.527 lat (msec) : 2=0.01%, 10=0.02%, 20=0.01%, 50=0.01% 00:11:07.527 cpu : usr=2.44%, sys=5.26%, ctx=9601, majf=0, minf=1 00:11:07.527 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.527 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.527 issued rwts: total=9596,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.527 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.527 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=3074543: Sun Sep 29 16:19:07 2024 00:11:07.527 read: IOPS=98, BW=391KiB/s (400kB/s)(1480KiB/3785msec) 00:11:07.527 slat (usec): min=6, max=16840, avg=104.30, stdev=1045.11 00:11:07.527 clat (usec): min=249, max=41979, avg=10120.27, stdev=17399.73 00:11:07.527 lat (usec): min=258, max=57918, avg=10203.50, stdev=17548.69 00:11:07.527 clat percentiles (usec): 00:11:07.527 | 1.00th=[ 265], 5.00th=[ 
277], 10.00th=[ 285], 20.00th=[ 293], 00:11:07.527 | 30.00th=[ 297], 40.00th=[ 302], 50.00th=[ 322], 60.00th=[ 371], 00:11:07.527 | 70.00th=[ 449], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:07.527 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:07.527 | 99.99th=[42206] 00:11:07.527 bw ( KiB/s): min= 94, max= 2312, per=2.28%, avg=413.43, stdev=837.20, samples=7 00:11:07.527 iops : min= 23, max= 578, avg=103.29, stdev=209.33, samples=7 00:11:07.527 lat (usec) : 250=0.27%, 500=73.32%, 750=0.81%, 1000=0.54% 00:11:07.527 lat (msec) : 2=0.81%, 50=23.99% 00:11:07.527 cpu : usr=0.03%, sys=0.50%, ctx=374, majf=0, minf=2 00:11:07.527 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.527 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.527 issued rwts: total=371,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.527 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.527 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3074544: Sun Sep 29 16:19:07 2024 00:11:07.527 read: IOPS=27, BW=108KiB/s (111kB/s)(344KiB/3173msec) 00:11:07.527 slat (usec): min=13, max=1892, avg=46.66, stdev=200.42 00:11:07.527 clat (usec): min=429, max=41123, avg=36722.17, stdev=12449.84 00:11:07.527 lat (usec): min=461, max=43015, avg=36769.17, stdev=12461.51 00:11:07.527 clat percentiles (usec): 00:11:07.527 | 1.00th=[ 429], 5.00th=[ 474], 10.00th=[ 857], 20.00th=[41157], 00:11:07.527 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:07.527 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:07.527 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:07.527 | 99.99th=[41157] 00:11:07.527 bw ( KiB/s): min= 96, max= 128, per=0.60%, avg=109.33, stdev=13.06, samples=6 00:11:07.527 
iops : min= 24, max= 32, avg=27.33, stdev= 3.27, samples=6 00:11:07.527 lat (usec) : 500=6.90%, 750=2.30%, 1000=1.15% 00:11:07.527 lat (msec) : 50=88.51% 00:11:07.527 cpu : usr=0.16%, sys=0.00%, ctx=89, majf=0, minf=1 00:11:07.527 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.527 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.527 issued rwts: total=87,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.527 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.527 16:19:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:07.527 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3074545: Sun Sep 29 16:19:07 2024 00:11:07.527 read: IOPS=2458, BW=9834KiB/s (10.1MB/s)(27.8MiB/2892msec) 00:11:07.527 slat (nsec): min=5875, max=61315, avg=13888.24, stdev=5448.84 00:11:07.527 clat (usec): min=251, max=41993, avg=386.07, stdev=1086.57 00:11:07.527 lat (usec): min=258, max=42009, avg=399.96, stdev=1086.70 00:11:07.527 clat percentiles (usec): 00:11:07.527 | 1.00th=[ 277], 5.00th=[ 310], 10.00th=[ 330], 20.00th=[ 338], 00:11:07.527 | 30.00th=[ 347], 40.00th=[ 351], 50.00th=[ 359], 60.00th=[ 363], 00:11:07.527 | 70.00th=[ 367], 80.00th=[ 375], 90.00th=[ 383], 95.00th=[ 392], 00:11:07.527 | 99.00th=[ 502], 99.50th=[ 537], 99.90th=[ 676], 99.95th=[41157], 00:11:07.527 | 99.99th=[42206] 00:11:07.527 bw ( KiB/s): min= 8512, max=11024, per=56.68%, avg=10280.00, stdev=1046.44, samples=5 00:11:07.527 iops : min= 2128, max= 2756, avg=2570.00, stdev=261.61, samples=5 00:11:07.527 lat (usec) : 500=98.93%, 750=0.98% 00:11:07.527 lat (msec) : 50=0.07% 00:11:07.527 cpu : usr=1.80%, sys=5.64%, ctx=7112, majf=0, minf=2 00:11:07.527 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%,
16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.527 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.527 issued rwts: total=7111,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.527 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.527 00:11:07.527 Run status group 0 (all jobs): 00:11:07.527 READ: bw=17.7MiB/s (18.6MB/s), 108KiB/s-10.9MiB/s (111kB/s-11.4MB/s), io=67.0MiB (70.3MB), run=2892-3785msec 00:11:07.527 00:11:07.527 Disk stats (read/write): 00:11:07.527 nvme0n1: ios=9457/0, merge=0/0, ticks=3298/0, in_queue=3298, util=98.14% 00:11:07.527 nvme0n2: ios=364/0, merge=0/0, ticks=3539/0, in_queue=3539, util=95.82% 00:11:07.528 nvme0n3: ios=134/0, merge=0/0, ticks=3327/0, in_queue=3327, util=99.34% 00:11:07.528 nvme0n4: ios=7160/0, merge=0/0, ticks=4000/0, in_queue=4000, util=99.29% 00:11:07.528 16:19:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:07.786 16:19:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:07.786 16:19:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:08.044 16:19:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:08.044 16:19:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:08.302 16:19:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:08.302 16:19:08 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:08.562 16:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:08.562 16:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:09.128 16:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:09.128 16:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3074444 00:11:09.128 16:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:09.128 16:19:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:10.062 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.062 16:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:10.062 16:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:11:10.062 16:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:10.062 16:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:10.062 16:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:10.062 16:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:10.062 16:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:11:10.062 16:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 
']' 00:11:10.062 16:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:10.062 nvmf hotplug test: fio failed as expected 00:11:10.062 16:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:10.062 16:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:10.062 16:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:10.062 16:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:10.062 16:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:10.062 16:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:10.062 16:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:10.062 16:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:10.062 16:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:10.062 16:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:10.062 16:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:10.062 16:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:10.062 rmmod nvme_tcp 00:11:10.321 rmmod nvme_fabrics 00:11:10.321 rmmod nvme_keyring 00:11:10.321 16:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:10.321 16:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:10.321 16:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@129 -- # return 0 00:11:10.321 16:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 3072274 ']' 00:11:10.321 16:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 3072274 00:11:10.321 16:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 3072274 ']' 00:11:10.321 16:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 3072274 00:11:10.321 16:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:11:10.321 16:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:10.321 16:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3072274 00:11:10.321 16:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:10.321 16:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:10.321 16:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3072274' 00:11:10.321 killing process with pid 3072274 00:11:10.321 16:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 3072274 00:11:10.321 16:19:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 3072274 00:11:11.696 16:19:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:11.696 16:19:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:11.696 16:19:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:11.696 16:19:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:11.696 16:19:11 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-save 00:11:11.696 16:19:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:11.696 16:19:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-restore 00:11:11.696 16:19:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:11.696 16:19:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:11.696 16:19:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:11.696 16:19:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:11.696 16:19:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:13.600 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:13.600 00:11:13.600 real 0m27.593s 00:11:13.600 user 1m35.691s 00:11:13.600 sys 0m7.126s 00:11:13.600 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:13.600 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.600 ************************************ 00:11:13.600 END TEST nvmf_fio_target 00:11:13.600 ************************************ 00:11:13.600 16:19:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:13.600 16:19:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:13.600 16:19:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:13.600 16:19:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 
-- # set +x 00:11:13.600 ************************************ 00:11:13.600 START TEST nvmf_bdevio 00:11:13.600 ************************************ 00:11:13.600 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:13.600 * Looking for test storage... 00:11:13.600 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:13.600 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:13.600 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:11:13.600 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:13.859 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:13.859 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:13.859 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:13.859 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:13.859 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:13.859 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:13.859 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:13.859 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:13.859 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:13.859 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:13.859 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:13.859 16:19:14 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:13.859 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:13.859 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:13.859 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:13.859 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:13.859 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:13.859 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:13.859 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:13.859 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:13.859 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:13.859 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:13.859 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:13.859 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:13.859 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:13.859 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:13.859 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:13.859 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:13.859 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:13.859 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:13.859 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:13.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.859 --rc genhtml_branch_coverage=1 00:11:13.859 --rc genhtml_function_coverage=1 00:11:13.859 --rc genhtml_legend=1 00:11:13.859 --rc geninfo_all_blocks=1 00:11:13.859 --rc geninfo_unexecuted_blocks=1 00:11:13.859 00:11:13.859 ' 00:11:13.859 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:13.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.859 --rc genhtml_branch_coverage=1 00:11:13.859 --rc genhtml_function_coverage=1 00:11:13.860 --rc genhtml_legend=1 00:11:13.860 --rc geninfo_all_blocks=1 00:11:13.860 --rc geninfo_unexecuted_blocks=1 00:11:13.860 00:11:13.860 ' 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:13.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.860 --rc genhtml_branch_coverage=1 00:11:13.860 --rc genhtml_function_coverage=1 00:11:13.860 --rc genhtml_legend=1 00:11:13.860 --rc geninfo_all_blocks=1 00:11:13.860 --rc geninfo_unexecuted_blocks=1 00:11:13.860 00:11:13.860 ' 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:13.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.860 --rc genhtml_branch_coverage=1 00:11:13.860 --rc genhtml_function_coverage=1 00:11:13.860 --rc genhtml_legend=1 00:11:13.860 --rc geninfo_all_blocks=1 00:11:13.860 --rc geninfo_unexecuted_blocks=1 00:11:13.860 00:11:13.860 ' 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # 
uname -s 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:13.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:13.860 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:16.428 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:16.428 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:16.428 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:16.428 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:16.429 16:19:16 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:16.429 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:16.429 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:11:16.429 16:19:16 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:16.429 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@424 -- # 
echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:16.429 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # is_hw=yes 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:16.429 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:16.429 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.384 ms 00:11:16.429 00:11:16.429 --- 10.0.0.2 ping statistics --- 00:11:16.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.429 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:16.429 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:16.429 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:11:16.429 00:11:16.429 --- 10.0.0.1 ping statistics --- 00:11:16.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.429 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # return 0 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:16.429 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:16.430 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:16.430 16:19:16 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:16.430 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:16.430 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # nvmfpid=3077445 00:11:16.430 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 3077445 00:11:16.430 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 3077445 ']' 00:11:16.430 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.430 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:16.430 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:16.430 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.430 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:16.430 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:16.430 [2024-09-29 16:19:16.649879] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:11:16.430 [2024-09-29 16:19:16.650048] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:16.430 [2024-09-29 16:19:16.810168] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:16.688 [2024-09-29 16:19:17.080518] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:16.688 [2024-09-29 16:19:17.080601] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:16.688 [2024-09-29 16:19:17.080626] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:16.688 [2024-09-29 16:19:17.080651] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:16.688 [2024-09-29 16:19:17.080683] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:16.688 [2024-09-29 16:19:17.080834] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:11:16.688 [2024-09-29 16:19:17.080898] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:11:16.688 [2024-09-29 16:19:17.080950] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:16.688 [2024-09-29 16:19:17.080958] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:11:17.254 16:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:17.254 16:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:11:17.254 16:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:17.254 16:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:17.254 16:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:17.254 16:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:17.254 16:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:17.254 16:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.254 16:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:17.254 [2024-09-29 16:19:17.660073] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:17.254 16:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.254 16:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:17.254 16:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.254 16:19:17 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:17.254 Malloc0 00:11:17.254 16:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.254 16:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:17.254 16:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.254 16:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:17.254 16:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.254 16:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:17.254 16:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.254 16:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:17.254 16:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.254 16:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:17.254 16:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.254 16:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:17.254 [2024-09-29 16:19:17.766859] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:17.254 16:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.254 16:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:11:17.254 16:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:17.254 16:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # config=() 00:11:17.254 16:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config 00:11:17.254 16:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:11:17.254 16:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:11:17.254 { 00:11:17.254 "params": { 00:11:17.254 "name": "Nvme$subsystem", 00:11:17.254 "trtype": "$TEST_TRANSPORT", 00:11:17.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:17.254 "adrfam": "ipv4", 00:11:17.254 "trsvcid": "$NVMF_PORT", 00:11:17.254 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:17.254 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:17.254 "hdgst": ${hdgst:-false}, 00:11:17.254 "ddgst": ${ddgst:-false} 00:11:17.254 }, 00:11:17.254 "method": "bdev_nvme_attach_controller" 00:11:17.254 } 00:11:17.254 EOF 00:11:17.254 )") 00:11:17.254 16:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # cat 00:11:17.254 16:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # jq . 
00:11:17.254 16:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=, 00:11:17.254 16:19:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:11:17.254 "params": { 00:11:17.254 "name": "Nvme1", 00:11:17.254 "trtype": "tcp", 00:11:17.254 "traddr": "10.0.0.2", 00:11:17.254 "adrfam": "ipv4", 00:11:17.254 "trsvcid": "4420", 00:11:17.254 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:17.254 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:17.254 "hdgst": false, 00:11:17.254 "ddgst": false 00:11:17.254 }, 00:11:17.255 "method": "bdev_nvme_attach_controller" 00:11:17.255 }' 00:11:17.513 [2024-09-29 16:19:17.853383] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:11:17.513 [2024-09-29 16:19:17.853516] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3077602 ] 00:11:17.513 [2024-09-29 16:19:17.985337] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:17.770 [2024-09-29 16:19:18.229154] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:17.770 [2024-09-29 16:19:18.229205] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:17.770 [2024-09-29 16:19:18.229199] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.336 I/O targets: 00:11:18.336 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:18.336 00:11:18.336 00:11:18.336 CUnit - A unit testing framework for C - Version 2.1-3 00:11:18.336 http://cunit.sourceforge.net/ 00:11:18.336 00:11:18.336 00:11:18.336 Suite: bdevio tests on: Nvme1n1 00:11:18.336 Test: blockdev write read block ...passed 00:11:18.336 Test: blockdev write zeroes read block ...passed 00:11:18.336 Test: blockdev write zeroes read no split ...passed 00:11:18.336 Test: blockdev write zeroes read split 
...passed 00:11:18.336 Test: blockdev write zeroes read split partial ...passed 00:11:18.336 Test: blockdev reset ...[2024-09-29 16:19:18.816467] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:18.336 [2024-09-29 16:19:18.816647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:11:18.336 [2024-09-29 16:19:18.870948] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:18.336 passed 00:11:18.598 Test: blockdev write read 8 blocks ...passed 00:11:18.598 Test: blockdev write read size > 128k ...passed 00:11:18.598 Test: blockdev write read invalid size ...passed 00:11:18.598 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:18.598 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:18.598 Test: blockdev write read max offset ...passed 00:11:18.598 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:18.598 Test: blockdev writev readv 8 blocks ...passed 00:11:18.598 Test: blockdev writev readv 30 x 1block ...passed 00:11:18.598 Test: blockdev writev readv block ...passed 00:11:18.598 Test: blockdev writev readv size > 128k ...passed 00:11:18.598 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:18.598 Test: blockdev comparev and writev ...[2024-09-29 16:19:19.087065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:18.598 [2024-09-29 16:19:19.087144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:18.598 [2024-09-29 16:19:19.087184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:18.598 [2024-09-29 16:19:19.087211] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:18.598 [2024-09-29 16:19:19.087703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:18.598 [2024-09-29 16:19:19.087746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:18.598 [2024-09-29 16:19:19.087786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:18.598 [2024-09-29 16:19:19.087813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:18.598 [2024-09-29 16:19:19.088262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:18.598 [2024-09-29 16:19:19.088295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:18.598 [2024-09-29 16:19:19.088333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:18.598 [2024-09-29 16:19:19.088360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:18.598 [2024-09-29 16:19:19.088853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:18.598 [2024-09-29 16:19:19.088886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:18.598 [2024-09-29 16:19:19.088920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 
00:11:18.598 [2024-09-29 16:19:19.088955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:18.598 passed 00:11:18.856 Test: blockdev nvme passthru rw ...passed 00:11:18.856 Test: blockdev nvme passthru vendor specific ...[2024-09-29 16:19:19.171137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:18.856 [2024-09-29 16:19:19.171209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:18.856 [2024-09-29 16:19:19.171448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:18.856 [2024-09-29 16:19:19.171480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:18.856 [2024-09-29 16:19:19.171697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:18.857 [2024-09-29 16:19:19.171737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:18.857 [2024-09-29 16:19:19.171979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:18.857 [2024-09-29 16:19:19.172011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:18.857 passed 00:11:18.857 Test: blockdev nvme admin passthru ...passed 00:11:18.857 Test: blockdev copy ...passed 00:11:18.857 00:11:18.857 Run Summary: Type Total Ran Passed Failed Inactive 00:11:18.857 suites 1 1 n/a 0 0 00:11:18.857 tests 23 23 23 0 0 00:11:18.857 asserts 152 152 152 0 n/a 00:11:18.857 00:11:18.857 Elapsed time = 1.189 seconds 00:11:19.790 16:19:20 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:19.790 16:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.790 16:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:19.790 16:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.790 16:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:19.790 16:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:19.790 16:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:19.790 16:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:19.790 16:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:19.790 16:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:19.790 16:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:19.790 16:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:19.790 rmmod nvme_tcp 00:11:19.790 rmmod nvme_fabrics 00:11:19.790 rmmod nvme_keyring 00:11:19.790 16:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:19.790 16:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:19.790 16:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:11:19.790 16:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@513 -- # '[' -n 3077445 ']' 00:11:19.790 16:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 3077445 00:11:19.790 16:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 3077445 ']' 
00:11:19.790 16:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 3077445 00:11:19.790 16:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:11:19.790 16:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:19.790 16:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3077445 00:11:19.790 16:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:11:19.790 16:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:11:19.790 16:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3077445' 00:11:19.790 killing process with pid 3077445 00:11:19.790 16:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 3077445 00:11:19.790 16:19:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 3077445 00:11:21.164 16:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:21.164 16:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:21.164 16:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:21.164 16:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:21.164 16:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-save 00:11:21.164 16:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:21.164 16:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-restore 00:11:21.164 16:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:21.164 
16:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:21.164 16:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.164 16:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:21.164 16:19:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.697 16:19:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:23.697 00:11:23.697 real 0m9.663s 00:11:23.697 user 0m22.577s 00:11:23.697 sys 0m2.547s 00:11:23.697 16:19:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:23.697 16:19:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:23.697 ************************************ 00:11:23.697 END TEST nvmf_bdevio 00:11:23.697 ************************************ 00:11:23.697 16:19:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:23.697 00:11:23.697 real 4m36.213s 00:11:23.697 user 12m2.770s 00:11:23.697 sys 1m11.044s 00:11:23.697 16:19:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:23.697 16:19:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:23.697 ************************************ 00:11:23.697 END TEST nvmf_target_core 00:11:23.697 ************************************ 00:11:23.697 16:19:23 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:23.697 16:19:23 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:23.697 16:19:23 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:23.697 16:19:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:23.697 
************************************ 00:11:23.697 START TEST nvmf_target_extra 00:11:23.697 ************************************ 00:11:23.697 16:19:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:23.697 * Looking for test storage... 00:11:23.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:23.697 16:19:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:23.697 16:19:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:11:23.697 16:19:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:23.697 16:19:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:23.697 16:19:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:23.697 16:19:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:23.697 16:19:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:23.697 16:19:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:23.697 16:19:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:23.697 16:19:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:23.697 16:19:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:23.697 16:19:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:23.697 16:19:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:23.697 16:19:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:23.697 16:19:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:23.697 16:19:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:23.697 
16:19:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:23.697 16:19:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:23.697 16:19:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:23.697 16:19:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:23.697 16:19:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:23.697 16:19:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:23.697 16:19:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:23.697 16:19:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:23.697 16:19:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:23.697 16:19:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:23.697 16:19:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:23.697 16:19:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:23.697 16:19:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:23.697 16:19:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:23.697 16:19:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:23.697 16:19:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:23.697 16:19:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:23.697 16:19:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:23.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.697 --rc genhtml_branch_coverage=1 00:11:23.697 --rc genhtml_function_coverage=1 00:11:23.697 --rc genhtml_legend=1 00:11:23.697 --rc geninfo_all_blocks=1 00:11:23.697 
--rc geninfo_unexecuted_blocks=1 00:11:23.697 00:11:23.697 ' 00:11:23.697 16:19:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:23.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.697 --rc genhtml_branch_coverage=1 00:11:23.697 --rc genhtml_function_coverage=1 00:11:23.697 --rc genhtml_legend=1 00:11:23.697 --rc geninfo_all_blocks=1 00:11:23.697 --rc geninfo_unexecuted_blocks=1 00:11:23.697 00:11:23.697 ' 00:11:23.697 16:19:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:23.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.697 --rc genhtml_branch_coverage=1 00:11:23.697 --rc genhtml_function_coverage=1 00:11:23.697 --rc genhtml_legend=1 00:11:23.697 --rc geninfo_all_blocks=1 00:11:23.697 --rc geninfo_unexecuted_blocks=1 00:11:23.697 00:11:23.697 ' 00:11:23.698 16:19:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:23.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.698 --rc genhtml_branch_coverage=1 00:11:23.698 --rc genhtml_function_coverage=1 00:11:23.698 --rc genhtml_legend=1 00:11:23.698 --rc geninfo_all_blocks=1 00:11:23.698 --rc geninfo_unexecuted_blocks=1 00:11:23.698 00:11:23.698 ' 00:11:23.698 16:19:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:23.698 16:19:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:23.698 16:19:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:23.698 16:19:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:23.698 16:19:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:23.698 16:19:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:23.698 16:19:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:11:23.698 16:19:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:23.698 16:19:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:23.698 16:19:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:23.698 16:19:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:23.698 16:19:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:23.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:23.698 ************************************ 00:11:23.698 START TEST nvmf_example 00:11:23.698 ************************************ 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:23.698 * Looking for test storage... 00:11:23.698 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lcov --version 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:23.698 
16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:23.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.698 --rc genhtml_branch_coverage=1 00:11:23.698 --rc genhtml_function_coverage=1 00:11:23.698 --rc genhtml_legend=1 00:11:23.698 --rc geninfo_all_blocks=1 00:11:23.698 --rc geninfo_unexecuted_blocks=1 00:11:23.698 00:11:23.698 ' 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:23.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.698 --rc genhtml_branch_coverage=1 00:11:23.698 --rc genhtml_function_coverage=1 00:11:23.698 --rc genhtml_legend=1 00:11:23.698 --rc geninfo_all_blocks=1 00:11:23.698 --rc geninfo_unexecuted_blocks=1 00:11:23.698 00:11:23.698 ' 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:23.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.698 --rc genhtml_branch_coverage=1 00:11:23.698 --rc genhtml_function_coverage=1 00:11:23.698 --rc genhtml_legend=1 00:11:23.698 --rc geninfo_all_blocks=1 00:11:23.698 --rc geninfo_unexecuted_blocks=1 00:11:23.698 00:11:23.698 ' 00:11:23.698 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:23.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.699 --rc 
genhtml_branch_coverage=1 00:11:23.699 --rc genhtml_function_coverage=1 00:11:23.699 --rc genhtml_legend=1 00:11:23.699 --rc geninfo_all_blocks=1 00:11:23.699 --rc geninfo_unexecuted_blocks=1 00:11:23.699 00:11:23.699 ' 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:23.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:23.699 16:19:24 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:23.699 
16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:23.699 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:25.600 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:25.600 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:25.600 
16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:25.600 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@414 -- # [[ up == up ]] 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:25.600 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # is_hw=yes 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:25.600 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:25.858 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:25.858 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:25.858 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:25.858 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:25.858 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:25.858 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:25.859 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:25.859 16:19:26 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:25.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:25.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:11:25.859 00:11:25.859 --- 10.0.0.2 ping statistics --- 00:11:25.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.859 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:11:25.859 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:25.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:25.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:11:25.859 00:11:25.859 --- 10.0.0.1 ping statistics --- 00:11:25.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.859 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:11:25.859 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:25.859 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # return 0 00:11:25.859 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:25.859 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:25.859 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:25.859 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:25.859 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:25.859 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:25.859 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:25.859 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # 
nvmfexamplestart '-m 0xF' 00:11:25.859 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:25.859 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:25.859 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:25.859 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:25.859 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:25.859 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3080131 00:11:25.859 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:25.859 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:25.859 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3080131 00:11:25.859 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 3080131 ']' 00:11:25.859 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.859 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:25.859 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:25.859 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:25.859 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:27.232 16:19:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:27.232 16:19:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:11:27.232 16:19:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:27.232 16:19:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:27.232 16:19:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:27.232 16:19:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:27.232 16:19:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.232 16:19:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:27.232 16:19:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.232 16:19:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:27.232 16:19:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.232 16:19:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:27.232 16:19:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.232 16:19:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:27.232 16:19:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:27.232 16:19:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.232 16:19:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:27.232 16:19:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.232 16:19:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:27.232 16:19:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:27.232 16:19:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.232 16:19:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:27.232 16:19:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.232 16:19:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:27.232 16:19:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.232 16:19:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:27.232 16:19:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.232 16:19:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:27.232 16:19:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:39.427 Initializing NVMe Controllers 00:11:39.427 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:39.427 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:39.427 Initialization complete. Launching workers. 00:11:39.427 ======================================================== 00:11:39.427 Latency(us) 00:11:39.427 Device Information : IOPS MiB/s Average min max 00:11:39.427 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11663.80 45.56 5488.87 1291.62 15670.87 00:11:39.427 ======================================================== 00:11:39.427 Total : 11663.80 45.56 5488.87 1291.62 15670.87 00:11:39.427 00:11:39.427 16:19:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:39.427 16:19:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:39.427 16:19:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:39.427 16:19:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:39.427 16:19:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:39.427 16:19:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:39.427 16:19:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:39.427 16:19:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:39.428 rmmod nvme_tcp 00:11:39.428 rmmod nvme_fabrics 00:11:39.428 rmmod nvme_keyring 00:11:39.428 16:19:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:39.428 16:19:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:39.428 16:19:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 
-- # return 0 00:11:39.428 16:19:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@513 -- # '[' -n 3080131 ']' 00:11:39.428 16:19:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # killprocess 3080131 00:11:39.428 16:19:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 3080131 ']' 00:11:39.428 16:19:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 3080131 00:11:39.428 16:19:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:11:39.428 16:19:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:39.428 16:19:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3080131 00:11:39.428 16:19:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:11:39.428 16:19:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:11:39.428 16:19:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3080131' 00:11:39.428 killing process with pid 3080131 00:11:39.428 16:19:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 3080131 00:11:39.428 16:19:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 3080131 00:11:39.428 nvmf threads initialize successfully 00:11:39.428 bdev subsystem init successfully 00:11:39.428 created a nvmf target service 00:11:39.428 create targets's poll groups done 00:11:39.428 all subsystems of target started 00:11:39.428 nvmf target is running 00:11:39.428 all subsystems of target stopped 00:11:39.428 destroy targets's poll groups done 00:11:39.428 destroyed the nvmf target service 00:11:39.428 bdev subsystem finish successfully 00:11:39.428 nvmf threads destroy successfully 00:11:39.428 
16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:39.428 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:39.428 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:39.428 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:39.428 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # iptables-save 00:11:39.428 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:39.428 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # iptables-restore 00:11:39.428 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:39.428 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:39.428 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.428 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:39.428 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.804 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:40.804 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:40.804 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:40.804 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:40.804 00:11:40.804 real 0m17.223s 00:11:40.804 user 0m48.413s 00:11:40.804 sys 0m3.247s 00:11:40.804 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 
-- # xtrace_disable 00:11:40.804 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:40.804 ************************************ 00:11:40.804 END TEST nvmf_example 00:11:40.804 ************************************ 00:11:40.804 16:19:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:40.804 16:19:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:40.804 16:19:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:40.804 16:19:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:40.804 ************************************ 00:11:40.804 START TEST nvmf_filesystem 00:11:40.804 ************************************ 00:11:40.804 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:40.804 * Looking for test storage... 
00:11:40.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:40.804 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:40.804 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:11:40.804 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:41.065 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:41.065 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:41.065 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:41.065 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:41.065 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:41.066 
16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:41.066 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:41.066 --rc genhtml_branch_coverage=1 00:11:41.066 --rc genhtml_function_coverage=1 00:11:41.066 --rc genhtml_legend=1 00:11:41.066 --rc geninfo_all_blocks=1 00:11:41.066 --rc geninfo_unexecuted_blocks=1 00:11:41.066 00:11:41.066 ' 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:41.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.066 --rc genhtml_branch_coverage=1 00:11:41.066 --rc genhtml_function_coverage=1 00:11:41.066 --rc genhtml_legend=1 00:11:41.066 --rc geninfo_all_blocks=1 00:11:41.066 --rc geninfo_unexecuted_blocks=1 00:11:41.066 00:11:41.066 ' 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:41.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.066 --rc genhtml_branch_coverage=1 00:11:41.066 --rc genhtml_function_coverage=1 00:11:41.066 --rc genhtml_legend=1 00:11:41.066 --rc geninfo_all_blocks=1 00:11:41.066 --rc geninfo_unexecuted_blocks=1 00:11:41.066 00:11:41.066 ' 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:41.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.066 --rc genhtml_branch_coverage=1 00:11:41.066 --rc genhtml_function_coverage=1 00:11:41.066 --rc genhtml_legend=1 00:11:41.066 --rc geninfo_all_blocks=1 00:11:41.066 --rc geninfo_unexecuted_blocks=1 00:11:41.066 00:11:41.066 ' 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:41.066 16:19:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:41.066 16:19:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:41.066 16:19:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_DAOS=n 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:41.066 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=n 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:11:41.067 16:19:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 
00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # 
[[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:41.067 #define SPDK_CONFIG_H 00:11:41.067 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:41.067 #define SPDK_CONFIG_APPS 1 00:11:41.067 #define SPDK_CONFIG_ARCH native 00:11:41.067 #define SPDK_CONFIG_ASAN 1 00:11:41.067 #undef SPDK_CONFIG_AVAHI 00:11:41.067 #undef SPDK_CONFIG_CET 00:11:41.067 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:41.067 #define SPDK_CONFIG_COVERAGE 1 00:11:41.067 #define SPDK_CONFIG_CROSS_PREFIX 00:11:41.067 #undef SPDK_CONFIG_CRYPTO 00:11:41.067 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:41.067 #undef SPDK_CONFIG_CUSTOMOCF 00:11:41.067 #undef SPDK_CONFIG_DAOS 00:11:41.067 #define SPDK_CONFIG_DAOS_DIR 00:11:41.067 #define SPDK_CONFIG_DEBUG 1 00:11:41.067 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:41.067 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:41.067 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:41.067 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:41.067 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:41.067 #undef SPDK_CONFIG_DPDK_UADK 00:11:41.067 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:41.067 #define SPDK_CONFIG_EXAMPLES 1 00:11:41.067 #undef SPDK_CONFIG_FC 00:11:41.067 #define SPDK_CONFIG_FC_PATH 00:11:41.067 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:41.067 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:41.067 #define SPDK_CONFIG_FSDEV 1 00:11:41.067 #undef SPDK_CONFIG_FUSE 00:11:41.067 #undef SPDK_CONFIG_FUZZER 00:11:41.067 #define SPDK_CONFIG_FUZZER_LIB 00:11:41.067 #undef SPDK_CONFIG_GOLANG 00:11:41.067 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:41.067 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:41.067 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:41.067 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:41.067 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:41.067 #undef 
SPDK_CONFIG_HAVE_LIBBSD 00:11:41.067 #undef SPDK_CONFIG_HAVE_LZ4 00:11:41.067 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:41.067 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:41.067 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:41.067 #define SPDK_CONFIG_IDXD 1 00:11:41.067 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:41.067 #undef SPDK_CONFIG_IPSEC_MB 00:11:41.067 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:41.067 #define SPDK_CONFIG_ISAL 1 00:11:41.067 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:41.067 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:41.067 #define SPDK_CONFIG_LIBDIR 00:11:41.067 #undef SPDK_CONFIG_LTO 00:11:41.067 #define SPDK_CONFIG_MAX_LCORES 128 00:11:41.067 #define SPDK_CONFIG_NVME_CUSE 1 00:11:41.067 #undef SPDK_CONFIG_OCF 00:11:41.067 #define SPDK_CONFIG_OCF_PATH 00:11:41.067 #define SPDK_CONFIG_OPENSSL_PATH 00:11:41.067 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:41.067 #define SPDK_CONFIG_PGO_DIR 00:11:41.067 #undef SPDK_CONFIG_PGO_USE 00:11:41.067 #define SPDK_CONFIG_PREFIX /usr/local 00:11:41.067 #undef SPDK_CONFIG_RAID5F 00:11:41.067 #undef SPDK_CONFIG_RBD 00:11:41.067 #define SPDK_CONFIG_RDMA 1 00:11:41.067 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:41.067 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:41.067 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:41.067 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:41.067 #define SPDK_CONFIG_SHARED 1 00:11:41.067 #undef SPDK_CONFIG_SMA 00:11:41.067 #define SPDK_CONFIG_TESTS 1 00:11:41.067 #undef SPDK_CONFIG_TSAN 00:11:41.067 #define SPDK_CONFIG_UBLK 1 00:11:41.067 #define SPDK_CONFIG_UBSAN 1 00:11:41.067 #undef SPDK_CONFIG_UNIT_TESTS 00:11:41.067 #undef SPDK_CONFIG_URING 00:11:41.067 #define SPDK_CONFIG_URING_PATH 00:11:41.067 #undef SPDK_CONFIG_URING_ZNS 00:11:41.067 #undef SPDK_CONFIG_USDT 00:11:41.067 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:41.067 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:41.067 #undef SPDK_CONFIG_VFIO_USER 00:11:41.067 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:41.067 
#define SPDK_CONFIG_VHOST 1 00:11:41.067 #define SPDK_CONFIG_VIRTIO 1 00:11:41.067 #undef SPDK_CONFIG_VTUNE 00:11:41.067 #define SPDK_CONFIG_VTUNE_DIR 00:11:41.067 #define SPDK_CONFIG_WERROR 1 00:11:41.067 #define SPDK_CONFIG_WPDK_DIR 00:11:41.067 #undef SPDK_CONFIG_XNVME 00:11:41.067 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:41.067 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:41.068 16:19:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:41.068 
16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:41.068 16:19:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:11:41.068 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:41.069 
16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:41.069 16:19:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:11:41.069 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@288 -- # MAKEFLAGS=-j48 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 3081966 ]] 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 3081966 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.3GvNwE 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.3GvNwE/tests/target /tmp/spdk.3GvNwE 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # 
sizes["$mount"]=67108864 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=54975053824 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=61988511744 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=7013457920 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:41.070 
16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30982889472 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30994255872 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=11366400 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12375265280 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12397703168 00:11:41.070 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=22437888 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30993686528 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30994255872 00:11:41.071 16:19:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=569344 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=6198837248 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=6198849536 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:11:41.071 * Looking for test storage... 
00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=54975053824 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=9228050432 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:41.071 16:19:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:41.071 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1668 -- # set -o errtrace 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1673 -- # true 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1675 -- # xtrace_fd 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:41.071 16:19:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:41.071 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:41.330 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:41.330 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:41.330 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:41.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.331 --rc genhtml_branch_coverage=1 00:11:41.331 --rc genhtml_function_coverage=1 00:11:41.331 --rc genhtml_legend=1 00:11:41.331 --rc geninfo_all_blocks=1 00:11:41.331 --rc geninfo_unexecuted_blocks=1 00:11:41.331 00:11:41.331 ' 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:41.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.331 --rc genhtml_branch_coverage=1 00:11:41.331 --rc genhtml_function_coverage=1 00:11:41.331 --rc genhtml_legend=1 00:11:41.331 --rc geninfo_all_blocks=1 00:11:41.331 --rc geninfo_unexecuted_blocks=1 00:11:41.331 00:11:41.331 ' 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:41.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.331 --rc genhtml_branch_coverage=1 00:11:41.331 --rc genhtml_function_coverage=1 00:11:41.331 --rc genhtml_legend=1 00:11:41.331 --rc geninfo_all_blocks=1 00:11:41.331 --rc geninfo_unexecuted_blocks=1 00:11:41.331 00:11:41.331 ' 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:41.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.331 --rc genhtml_branch_coverage=1 00:11:41.331 --rc genhtml_function_coverage=1 00:11:41.331 --rc genhtml_legend=1 00:11:41.331 --rc geninfo_all_blocks=1 00:11:41.331 --rc geninfo_unexecuted_blocks=1 00:11:41.331 00:11:41.331 ' 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:41.331 16:19:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:41.331 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:41.332 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:41.332 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:41.332 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:41.332 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:41.332 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:41.332 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:41.332 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:41.332 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:41.332 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:41.332 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:11:41.332 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:41.332 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:41.332 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:41.332 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:41.332 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:41.332 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:41.332 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:41.332 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:41.332 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:41.332 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:41.332 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:11:41.332 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:11:41.332 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:41.332 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:43.231 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:43.232 16:19:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:43.232 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 
00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:43.232 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:43.232 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:43.232 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # 
(( 2 == 0 )) 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # is_hw=yes 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:43.232 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:43.233 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:43.233 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:43.233 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:43.233 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:43.233 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:43.233 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:43.233 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:43.233 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:43.489 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:43.489 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:43.489 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:43.489 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:43.490 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:43.490 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:43.490 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:43.490 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:43.490 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:43.490 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:11:43.490 00:11:43.490 --- 10.0.0.2 ping statistics --- 00:11:43.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.490 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:11:43.490 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:43.490 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:43.490 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:11:43.490 00:11:43.490 --- 10.0.0.1 ping statistics --- 00:11:43.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.490 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:11:43.490 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:43.490 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # return 0 00:11:43.490 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:43.490 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:43.490 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:43.490 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:43.490 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:43.490 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:43.490 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:43.490 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:43.490 16:19:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:43.490 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:43.490 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:43.490 ************************************ 00:11:43.490 START TEST nvmf_filesystem_no_in_capsule 00:11:43.490 ************************************ 00:11:43.490 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:11:43.490 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:43.490 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:43.490 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:43.490 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:43.490 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.490 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # nvmfpid=3083606 00:11:43.490 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:43.490 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # waitforlisten 3083606 00:11:43.490 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@831 -- # '[' -z 3083606 ']' 00:11:43.490 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.490 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:43.490 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.490 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:43.490 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.747 [2024-09-29 16:19:44.063044] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:11:43.747 [2024-09-29 16:19:44.063200] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:43.747 [2024-09-29 16:19:44.201464] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:44.003 [2024-09-29 16:19:44.459756] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:44.003 [2024-09-29 16:19:44.459828] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:44.003 [2024-09-29 16:19:44.459854] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:44.003 [2024-09-29 16:19:44.459879] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:44.003 [2024-09-29 16:19:44.459898] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:44.003 [2024-09-29 16:19:44.460006] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.003 [2024-09-29 16:19:44.460075] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:44.003 [2024-09-29 16:19:44.460173] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.003 [2024-09-29 16:19:44.460178] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:44.567 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:44.567 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:44.567 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:44.567 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:44.567 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.567 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:44.567 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:44.567 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:44.567 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.567 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.567 [2024-09-29 16:19:45.061539] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:44.567 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.567 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:44.567 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.567 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.132 Malloc1 00:11:45.132 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.132 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:45.132 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.132 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.132 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.132 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:45.132 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.132 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.132 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.132 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:45.132 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.132 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.132 [2024-09-29 16:19:45.643018] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:45.132 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.132 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:45.132 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:45.132 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:45.132 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:45.132 16:19:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:45.132 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:45.132 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.132 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.132 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.132 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:45.132 { 00:11:45.132 "name": "Malloc1", 00:11:45.132 "aliases": [ 00:11:45.132 "cf3b8a4e-2777-4149-8c5a-f9634ad652d9" 00:11:45.132 ], 00:11:45.132 "product_name": "Malloc disk", 00:11:45.132 "block_size": 512, 00:11:45.132 "num_blocks": 1048576, 00:11:45.132 "uuid": "cf3b8a4e-2777-4149-8c5a-f9634ad652d9", 00:11:45.132 "assigned_rate_limits": { 00:11:45.132 "rw_ios_per_sec": 0, 00:11:45.132 "rw_mbytes_per_sec": 0, 00:11:45.132 "r_mbytes_per_sec": 0, 00:11:45.132 "w_mbytes_per_sec": 0 00:11:45.132 }, 00:11:45.132 "claimed": true, 00:11:45.132 "claim_type": "exclusive_write", 00:11:45.132 "zoned": false, 00:11:45.132 "supported_io_types": { 00:11:45.132 "read": true, 00:11:45.132 "write": true, 00:11:45.132 "unmap": true, 00:11:45.132 "flush": true, 00:11:45.132 "reset": true, 00:11:45.132 "nvme_admin": false, 00:11:45.132 "nvme_io": false, 00:11:45.132 "nvme_io_md": false, 00:11:45.132 "write_zeroes": true, 00:11:45.132 "zcopy": true, 00:11:45.132 "get_zone_info": false, 00:11:45.132 "zone_management": false, 00:11:45.132 "zone_append": false, 00:11:45.132 "compare": false, 00:11:45.132 "compare_and_write": 
false, 00:11:45.132 "abort": true, 00:11:45.132 "seek_hole": false, 00:11:45.132 "seek_data": false, 00:11:45.132 "copy": true, 00:11:45.132 "nvme_iov_md": false 00:11:45.132 }, 00:11:45.132 "memory_domains": [ 00:11:45.132 { 00:11:45.132 "dma_device_id": "system", 00:11:45.132 "dma_device_type": 1 00:11:45.132 }, 00:11:45.132 { 00:11:45.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.132 "dma_device_type": 2 00:11:45.132 } 00:11:45.132 ], 00:11:45.132 "driver_specific": {} 00:11:45.132 } 00:11:45.132 ]' 00:11:45.132 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:45.389 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:45.389 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:45.389 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:45.389 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:45.389 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:45.389 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:45.389 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:45.954 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:11:45.954 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:45.954 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:45.954 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:45.954 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:48.477 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:48.477 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:48.477 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:48.477 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:48.477 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:48.477 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:48.477 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:48.477 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:48.477 16:19:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:48.477 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:48.477 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:48.477 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:48.477 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:48.477 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:48.477 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:48.477 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:48.477 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:48.477 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:49.046 16:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:49.978 16:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:49.978 16:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:49.978 16:19:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:49.978 16:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:49.978 16:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.978 ************************************ 00:11:49.978 START TEST filesystem_ext4 00:11:49.978 ************************************ 00:11:49.978 16:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:49.978 16:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:49.978 16:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:49.978 16:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:49.978 16:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:49.978 16:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:49.978 16:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:49.978 16:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:49.979 16:19:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:49.979 16:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:49.979 16:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:49.979 mke2fs 1.47.0 (5-Feb-2023) 00:11:50.236 Discarding device blocks: 0/522240 done 00:11:50.236 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:50.236 Filesystem UUID: a0cc100c-4467-474b-9e35-111bcb016384 00:11:50.236 Superblock backups stored on blocks: 00:11:50.236 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:50.236 00:11:50.236 Allocating group tables: 0/64 done 00:11:50.236 Writing inode tables: 0/64 done 00:11:50.802 Creating journal (8192 blocks): done 00:11:53.052 Writing superblocks and filesystem accounting information: 0/6410/64 done 00:11:53.052 00:11:53.052 16:19:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:53.052 16:19:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:58.313 16:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:58.313 16:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:58.313 16:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:58.313 16:19:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:58.313 16:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:58.314 16:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:58.314 16:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3083606 00:11:58.314 16:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:58.314 16:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:58.314 16:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:58.314 16:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:58.314 00:11:58.314 real 0m8.274s 00:11:58.314 user 0m0.020s 00:11:58.314 sys 0m0.055s 00:11:58.314 16:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:58.314 16:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:58.314 ************************************ 00:11:58.314 END TEST filesystem_ext4 00:11:58.314 ************************************ 00:11:58.314 16:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:58.314 
16:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:58.314 16:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:58.314 16:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.314 ************************************ 00:11:58.314 START TEST filesystem_btrfs 00:11:58.314 ************************************ 00:11:58.314 16:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:58.314 16:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:58.314 16:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:58.314 16:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:58.314 16:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:58.314 16:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:58.314 16:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:58.314 16:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:58.314 16:19:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:58.314 16:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:58.314 16:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:58.572 btrfs-progs v6.8.1 00:11:58.572 See https://btrfs.readthedocs.io for more information. 00:11:58.572 00:11:58.572 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:58.572 NOTE: several default settings have changed in version 5.15, please make sure 00:11:58.572 this does not affect your deployments: 00:11:58.572 - DUP for metadata (-m dup) 00:11:58.572 - enabled no-holes (-O no-holes) 00:11:58.572 - enabled free-space-tree (-R free-space-tree) 00:11:58.572 00:11:58.572 Label: (null) 00:11:58.572 UUID: f5d371a2-a130-4b99-a794-fdc71c383f36 00:11:58.572 Node size: 16384 00:11:58.572 Sector size: 4096 (CPU page size: 4096) 00:11:58.572 Filesystem size: 510.00MiB 00:11:58.572 Block group profiles: 00:11:58.572 Data: single 8.00MiB 00:11:58.572 Metadata: DUP 32.00MiB 00:11:58.572 System: DUP 8.00MiB 00:11:58.572 SSD detected: yes 00:11:58.572 Zoned device: no 00:11:58.572 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:58.572 Checksum: crc32c 00:11:58.572 Number of devices: 1 00:11:58.572 Devices: 00:11:58.572 ID SIZE PATH 00:11:58.572 1 510.00MiB /dev/nvme0n1p1 00:11:58.572 00:11:58.572 16:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:58.572 16:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:59.504 16:20:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:59.762 16:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:59.762 16:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:59.762 16:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:59.762 16:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:59.762 16:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:59.762 16:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3083606 00:11:59.762 16:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:59.762 16:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:59.762 16:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:59.762 16:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:59.762 00:11:59.762 real 0m1.281s 00:11:59.762 user 0m0.016s 00:11:59.762 sys 0m0.100s 00:11:59.762 16:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:59.762 
16:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:59.762 ************************************ 00:11:59.762 END TEST filesystem_btrfs 00:11:59.762 ************************************ 00:11:59.762 16:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:59.762 16:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:59.762 16:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:59.762 16:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:59.762 ************************************ 00:11:59.762 START TEST filesystem_xfs 00:11:59.762 ************************************ 00:11:59.762 16:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:59.762 16:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:59.762 16:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:59.762 16:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:59.762 16:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:59.762 16:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:59.762 16:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:59.762 16:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:11:59.762 16:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:59.762 16:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:59.762 16:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:59.762 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:59.762 = sectsz=512 attr=2, projid32bit=1 00:11:59.762 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:59.762 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:59.762 data = bsize=4096 blocks=130560, imaxpct=25 00:11:59.762 = sunit=0 swidth=0 blks 00:11:59.762 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:59.762 log =internal log bsize=4096 blocks=16384, version=2 00:11:59.762 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:59.762 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:01.136 Discarding blocks...Done. 
00:12:01.136 16:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:01.136 16:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:03.036 16:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:03.036 16:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:03.036 16:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:03.036 16:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:03.036 16:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:03.036 16:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:03.036 16:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3083606 00:12:03.036 16:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:03.036 16:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:03.036 16:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:03.036 16:20:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:03.036 00:12:03.036 real 0m3.043s 00:12:03.036 user 0m0.007s 00:12:03.036 sys 0m0.071s 00:12:03.036 16:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:03.036 16:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:03.036 ************************************ 00:12:03.036 END TEST filesystem_xfs 00:12:03.036 ************************************ 00:12:03.036 16:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:03.036 16:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:03.036 16:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:03.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.036 16:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:03.036 16:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:12:03.036 16:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:03.036 16:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:03.037 16:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:03.037 16:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:03.037 16:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:03.037 16:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:03.037 16:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.037 16:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:03.037 16:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.037 16:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:03.037 16:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3083606 00:12:03.037 16:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 3083606 ']' 00:12:03.037 16:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 3083606 00:12:03.037 16:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:03.037 16:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:03.037 16:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3083606 00:12:03.037 16:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:03.037 16:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:03.037 16:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3083606' 00:12:03.037 killing process with pid 3083606 00:12:03.037 16:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 3083606 00:12:03.037 16:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 3083606 00:12:05.566 16:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:05.566 00:12:05.566 real 0m22.056s 00:12:05.566 user 1m23.069s 00:12:05.566 sys 0m2.510s 00:12:05.566 16:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:05.566 16:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:05.566 ************************************ 00:12:05.566 END TEST nvmf_filesystem_no_in_capsule 00:12:05.566 ************************************ 00:12:05.566 16:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:05.566 16:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:05.566 16:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:05.566 16:20:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:05.566 ************************************ 00:12:05.566 START TEST nvmf_filesystem_in_capsule 00:12:05.566 ************************************ 00:12:05.566 16:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:12:05.566 16:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:05.566 16:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:05.566 16:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:05.566 16:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:05.566 16:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:05.566 16:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # nvmfpid=3086487 00:12:05.566 16:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:05.566 16:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # waitforlisten 3086487 00:12:05.566 16:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 3086487 ']' 00:12:05.566 16:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.566 16:20:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:05.566 16:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.566 16:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:05.566 16:20:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:05.824 [2024-09-29 16:20:06.170284] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:12:05.824 [2024-09-29 16:20:06.170439] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:05.824 [2024-09-29 16:20:06.311939] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:06.082 [2024-09-29 16:20:06.568636] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:06.082 [2024-09-29 16:20:06.568731] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:06.082 [2024-09-29 16:20:06.568762] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:06.083 [2024-09-29 16:20:06.568789] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:06.083 [2024-09-29 16:20:06.568809] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:06.083 [2024-09-29 16:20:06.568936] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:06.083 [2024-09-29 16:20:06.569004] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:06.083 [2024-09-29 16:20:06.569101] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.083 [2024-09-29 16:20:06.569108] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:06.648 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:06.648 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:12:06.648 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:06.648 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:06.648 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:06.648 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:06.648 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:06.648 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:06.648 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.648 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:06.648 [2024-09-29 16:20:07.179831] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:06.648 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.648 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:06.648 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.648 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.213 Malloc1 00:12:07.213 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.213 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:07.213 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.213 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.213 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.213 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:07.213 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.213 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.213 16:20:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.213 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:07.213 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.213 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.213 [2024-09-29 16:20:07.754903] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:07.213 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.213 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:07.213 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:12:07.213 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:12:07.213 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:12:07.213 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:12:07.213 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:07.213 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.213 16:20:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.213 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.213 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:12:07.213 { 00:12:07.213 "name": "Malloc1", 00:12:07.213 "aliases": [ 00:12:07.213 "ae4b7173-9e17-480a-83fb-ccaec0a00501" 00:12:07.213 ], 00:12:07.213 "product_name": "Malloc disk", 00:12:07.213 "block_size": 512, 00:12:07.213 "num_blocks": 1048576, 00:12:07.213 "uuid": "ae4b7173-9e17-480a-83fb-ccaec0a00501", 00:12:07.213 "assigned_rate_limits": { 00:12:07.213 "rw_ios_per_sec": 0, 00:12:07.213 "rw_mbytes_per_sec": 0, 00:12:07.213 "r_mbytes_per_sec": 0, 00:12:07.213 "w_mbytes_per_sec": 0 00:12:07.213 }, 00:12:07.213 "claimed": true, 00:12:07.213 "claim_type": "exclusive_write", 00:12:07.213 "zoned": false, 00:12:07.213 "supported_io_types": { 00:12:07.213 "read": true, 00:12:07.213 "write": true, 00:12:07.213 "unmap": true, 00:12:07.213 "flush": true, 00:12:07.213 "reset": true, 00:12:07.213 "nvme_admin": false, 00:12:07.213 "nvme_io": false, 00:12:07.213 "nvme_io_md": false, 00:12:07.213 "write_zeroes": true, 00:12:07.213 "zcopy": true, 00:12:07.213 "get_zone_info": false, 00:12:07.213 "zone_management": false, 00:12:07.213 "zone_append": false, 00:12:07.213 "compare": false, 00:12:07.213 "compare_and_write": false, 00:12:07.213 "abort": true, 00:12:07.213 "seek_hole": false, 00:12:07.213 "seek_data": false, 00:12:07.213 "copy": true, 00:12:07.213 "nvme_iov_md": false 00:12:07.213 }, 00:12:07.213 "memory_domains": [ 00:12:07.213 { 00:12:07.213 "dma_device_id": "system", 00:12:07.213 "dma_device_type": 1 00:12:07.213 }, 00:12:07.213 { 00:12:07.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.213 "dma_device_type": 2 00:12:07.213 } 00:12:07.213 ], 00:12:07.213 
"driver_specific": {} 00:12:07.213 } 00:12:07.213 ]' 00:12:07.470 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:12:07.471 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:12:07.471 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:12:07.471 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:12:07.471 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:12:07.471 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:12:07.471 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:07.471 16:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:08.035 16:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:08.035 16:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:12:08.035 16:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:08.035 16:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n 
'' ]] 00:12:08.035 16:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:12:10.560 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:10.561 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:10.561 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:10.561 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:10.561 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:10.561 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:12:10.561 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:10.561 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:10.561 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:10.561 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:10.561 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:10.561 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:10.561 16:20:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:10.561 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:10.561 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:10.561 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:10.561 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:10.561 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:10.818 16:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:11.751 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:11.751 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:11.751 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:11.751 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:11.751 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:11.751 ************************************ 00:12:11.751 START TEST filesystem_in_capsule_ext4 00:12:11.751 ************************************ 00:12:11.751 16:20:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:11.751 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:11.751 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:11.751 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:11.751 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:12:11.751 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:11.751 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:12:11.751 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:12:11.751 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:12:11.751 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:12:11.751 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:11.751 mke2fs 1.47.0 (5-Feb-2023) 00:12:12.010 Discarding device blocks: 
0/522240 done 00:12:12.010 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:12.010 Filesystem UUID: cacac8fd-3a2b-43c6-8a4d-d4a582487985 00:12:12.010 Superblock backups stored on blocks: 00:12:12.010 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:12.010 00:12:12.010 Allocating group tables: 0/64 done 00:12:12.010 Writing inode tables: 0/64 done 00:12:14.537 Creating journal (8192 blocks): done 00:12:14.795 Writing superblocks and filesystem accounting information: 0/64 done 00:12:14.795 00:12:14.795 16:20:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:12:14.795 16:20:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:21.355 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:21.355 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:21.355 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:21.355 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:21.355 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:21.355 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:21.355 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 3086487 00:12:21.355 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:21.355 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:21.355 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:21.355 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:21.355 00:12:21.355 real 0m8.625s 00:12:21.355 user 0m0.014s 00:12:21.355 sys 0m0.071s 00:12:21.355 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:21.355 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:21.355 ************************************ 00:12:21.355 END TEST filesystem_in_capsule_ext4 00:12:21.355 ************************************ 00:12:21.355 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:21.355 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:21.355 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:21.355 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:21.355 ************************************ 00:12:21.355 START 
TEST filesystem_in_capsule_btrfs 00:12:21.355 ************************************ 00:12:21.355 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:21.355 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:21.355 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:21.355 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:21.355 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:21.355 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:21.355 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:21.355 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:21.355 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:21.355 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:21.355 16:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:21.355 btrfs-progs v6.8.1 00:12:21.355 See https://btrfs.readthedocs.io for more information. 00:12:21.355 00:12:21.355 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:21.355 NOTE: several default settings have changed in version 5.15, please make sure 00:12:21.355 this does not affect your deployments: 00:12:21.355 - DUP for metadata (-m dup) 00:12:21.355 - enabled no-holes (-O no-holes) 00:12:21.355 - enabled free-space-tree (-R free-space-tree) 00:12:21.355 00:12:21.355 Label: (null) 00:12:21.355 UUID: 71f13352-3fc9-4537-8b7b-e51d960b86e0 00:12:21.355 Node size: 16384 00:12:21.355 Sector size: 4096 (CPU page size: 4096) 00:12:21.355 Filesystem size: 510.00MiB 00:12:21.355 Block group profiles: 00:12:21.355 Data: single 8.00MiB 00:12:21.355 Metadata: DUP 32.00MiB 00:12:21.355 System: DUP 8.00MiB 00:12:21.355 SSD detected: yes 00:12:21.355 Zoned device: no 00:12:21.355 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:21.355 Checksum: crc32c 00:12:21.355 Number of devices: 1 00:12:21.355 Devices: 00:12:21.355 ID SIZE PATH 00:12:21.355 1 510.00MiB /dev/nvme0n1p1 00:12:21.355 00:12:21.355 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:21.355 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:21.355 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:21.355 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:21.355 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:21.355 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:21.355 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:21.355 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:21.355 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3086487 00:12:21.355 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:21.355 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:21.355 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:21.356 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:21.356 00:12:21.356 real 0m0.426s 00:12:21.356 user 0m0.025s 00:12:21.356 sys 0m0.088s 00:12:21.356 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:21.356 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:21.356 ************************************ 00:12:21.356 END TEST filesystem_in_capsule_btrfs 00:12:21.356 ************************************ 00:12:21.356 16:20:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:21.356 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:21.356 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:21.356 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:21.356 ************************************ 00:12:21.356 START TEST filesystem_in_capsule_xfs 00:12:21.356 ************************************ 00:12:21.356 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:21.356 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:21.356 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:21.356 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:21.356 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:21.356 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:21.356 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:21.356 
16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:12:21.356 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:21.356 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:21.356 16:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:21.356 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:21.356 = sectsz=512 attr=2, projid32bit=1 00:12:21.356 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:21.356 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:21.356 data = bsize=4096 blocks=130560, imaxpct=25 00:12:21.356 = sunit=0 swidth=0 blks 00:12:21.356 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:21.356 log =internal log bsize=4096 blocks=16384, version=2 00:12:21.356 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:21.356 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:22.290 Discarding blocks...Done. 
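Each of the three filesystem sub-tests in this log goes through the same make_filesystem helper, and the trace lines show it selecting mkfs's force flag by filesystem type: `'[' ext4 = ext4 ']'` yields `force=-F`, while the btrfs and xfs runs fall through to `force=-f`. A minimal standalone sketch of that flag selection follows; the function name `pick_force_flag` is illustrative only (the helper in the log is `make_filesystem` in `common/autotest_common.sh`, which also retries mkfs and runs it against the device):

```shell
#!/usr/bin/env bash
# Sketch of the force-flag choice visible in the make_filesystem() traces:
# mkfs.ext4 spells "force" as -F, while mkfs.btrfs and mkfs.xfs use -f.
pick_force_flag() {
    local fstype=$1
    if [ "$fstype" = ext4 ]; then
        # matches the log: '[' ext4 = ext4 ']' -> force=-F
        echo "-F"
    else
        # matches the log: '[' btrfs = ext4 ']' / '[' xfs = ext4 ']' -> force=-f
        echo "-f"
    fi
}

pick_force_flag ext4    # prints -F
pick_force_flag btrfs   # prints -f
pick_force_flag xfs     # prints -f
```

The real helper then invokes `mkfs.$fstype $force $dev_name` (e.g. `mkfs.ext4 -F /dev/nvme0n1p1`), which is the command whose output appears verbatim in the surrounding trace.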
00:12:22.290 16:20:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:22.290 16:20:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:24.190 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:24.190 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:24.190 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:24.190 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:24.190 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:24.190 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:24.190 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3086487 00:12:24.190 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:24.190 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:24.190 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:12:24.190 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:24.190 00:12:24.190 real 0m2.964s 00:12:24.190 user 0m0.019s 00:12:24.190 sys 0m0.054s 00:12:24.190 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:24.190 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:24.190 ************************************ 00:12:24.190 END TEST filesystem_in_capsule_xfs 00:12:24.190 ************************************ 00:12:24.190 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:24.190 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:24.190 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:24.448 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.448 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:24.448 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:12:24.448 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:24.448 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:24.448 16:20:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:24.448 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:24.448 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:24.448 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:24.448 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.448 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:24.448 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.448 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:24.448 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3086487 00:12:24.448 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 3086487 ']' 00:12:24.448 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 3086487 00:12:24.448 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:24.448 16:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:24.448 16:20:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3086487 00:12:24.448 16:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:24.448 16:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:24.448 16:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3086487' 00:12:24.448 killing process with pid 3086487 00:12:24.448 16:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 3086487 00:12:24.448 16:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 3086487 00:12:27.729 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:27.729 00:12:27.729 real 0m21.512s 00:12:27.729 user 1m20.805s 00:12:27.729 sys 0m2.641s 00:12:27.729 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:27.729 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:27.729 ************************************ 00:12:27.729 END TEST nvmf_filesystem_in_capsule 00:12:27.729 ************************************ 00:12:27.729 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:27.729 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:27.729 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:27.729 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:27.729 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:27.729 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:27.729 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:27.729 rmmod nvme_tcp 00:12:27.729 rmmod nvme_fabrics 00:12:27.729 rmmod nvme_keyring 00:12:27.729 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:27.729 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:27.729 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:27.729 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:12:27.729 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:27.729 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:27.729 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:27.729 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:27.729 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # iptables-save 00:12:27.729 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:27.729 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # iptables-restore 00:12:27.729 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:27.729 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:27.729 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@652 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.729 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:27.729 16:20:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:29.639 00:12:29.639 real 0m48.412s 00:12:29.639 user 2m44.999s 00:12:29.639 sys 0m6.878s 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:29.639 ************************************ 00:12:29.639 END TEST nvmf_filesystem 00:12:29.639 ************************************ 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:29.639 ************************************ 00:12:29.639 START TEST nvmf_target_discovery 00:12:29.639 ************************************ 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:29.639 * Looking for test storage... 
00:12:29.639 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:29.639 
16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:29.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.639 --rc genhtml_branch_coverage=1 00:12:29.639 --rc genhtml_function_coverage=1 00:12:29.639 --rc genhtml_legend=1 00:12:29.639 --rc geninfo_all_blocks=1 00:12:29.639 --rc geninfo_unexecuted_blocks=1 00:12:29.639 00:12:29.639 ' 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:29.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.639 --rc genhtml_branch_coverage=1 00:12:29.639 --rc genhtml_function_coverage=1 00:12:29.639 --rc genhtml_legend=1 00:12:29.639 --rc geninfo_all_blocks=1 00:12:29.639 --rc geninfo_unexecuted_blocks=1 00:12:29.639 00:12:29.639 ' 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:29.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.639 --rc genhtml_branch_coverage=1 00:12:29.639 --rc genhtml_function_coverage=1 00:12:29.639 --rc genhtml_legend=1 00:12:29.639 --rc geninfo_all_blocks=1 00:12:29.639 --rc geninfo_unexecuted_blocks=1 00:12:29.639 00:12:29.639 ' 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:29.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.639 --rc genhtml_branch_coverage=1 00:12:29.639 --rc genhtml_function_coverage=1 00:12:29.639 --rc genhtml_legend=1 00:12:29.639 --rc geninfo_all_blocks=1 00:12:29.639 --rc geninfo_unexecuted_blocks=1 00:12:29.639 00:12:29.639 ' 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:29.639 16:20:29 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:29.639 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:29.640 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.640 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.640 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.640 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:29.640 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.640 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:29.640 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:29.640 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:29.640 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:29.640 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:29.640 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:29.640 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:29.640 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:29.640 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:29.640 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:29.640 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:29.640 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:12:29.640 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:29.640 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:29.640 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:29.640 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:29.640 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:29.640 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:29.640 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:29.640 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:29.640 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:29.640 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.640 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:29.640 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.640 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:12:29.640 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:12:29.640 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:29.640 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.629 16:20:32 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:31.629 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:31.629 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:31.629 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:31.629 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:31.629 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:31.629 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:31.629 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:31.629 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:31.629 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:31.629 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:31.629 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:31.629 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:31.629 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:31.629 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:31.629 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:31.629 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:31.629 16:20:32 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:31.629 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:31.629 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:31.629 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:31.629 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:31.629 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:31.629 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:31.629 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:31.629 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:31.629 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:12:31.629 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:12:31.630 16:20:32 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:31.630 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:31.630 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:12:31.630 16:20:32 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:31.630 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:31.630 16:20:32 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:31.630 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # is_hw=yes 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # 
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:31.630 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:31.630 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms 00:12:31.630 00:12:31.630 --- 10.0.0.2 ping statistics --- 00:12:31.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.630 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:31.630 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:31.630 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:12:31.630 00:12:31.630 --- 10.0.0.1 ping statistics --- 00:12:31.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.630 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # return 0 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:31.630 16:20:32 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:31.630 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:31.889 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:31.889 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:31.889 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:31.889 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.889 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # nvmfpid=3091061 00:12:31.889 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:31.889 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # waitforlisten 3091061 00:12:31.889 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 3091061 ']' 00:12:31.889 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.889 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:31.889 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:31.889 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:31.889 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.889 [2024-09-29 16:20:32.310463] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:12:31.889 [2024-09-29 16:20:32.310618] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:31.889 [2024-09-29 16:20:32.450113] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:32.147 [2024-09-29 16:20:32.685503] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:32.147 [2024-09-29 16:20:32.685582] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:32.147 [2024-09-29 16:20:32.685604] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:32.147 [2024-09-29 16:20:32.685625] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:32.147 [2024-09-29 16:20:32.685642] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:32.147 [2024-09-29 16:20:32.685784] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:32.147 [2024-09-29 16:20:32.685847] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:32.147 [2024-09-29 16:20:32.685891] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.147 [2024-09-29 16:20:32.685898] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.081 [2024-09-29 16:20:33.362440] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:33.081 16:20:33 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.081 Null1 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.081 [2024-09-29 16:20:33.404223] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.081 Null2 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.081 
16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.081 Null3 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode3 Null3 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.081 Null4 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.081 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:33.082 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.082 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:33.082 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.082 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:33.082 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.082 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.082 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.082 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:33.082 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.082 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.082 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.082 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:33.082 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.082 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.082 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.082 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:33.082 16:20:33 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.082 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.082 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.082 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:12:33.340 00:12:33.340 Discovery Log Number of Records 6, Generation counter 6 00:12:33.340 =====Discovery Log Entry 0====== 00:12:33.340 trtype: tcp 00:12:33.340 adrfam: ipv4 00:12:33.340 subtype: current discovery subsystem 00:12:33.340 treq: not required 00:12:33.340 portid: 0 00:12:33.340 trsvcid: 4420 00:12:33.340 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:33.340 traddr: 10.0.0.2 00:12:33.340 eflags: explicit discovery connections, duplicate discovery information 00:12:33.340 sectype: none 00:12:33.340 =====Discovery Log Entry 1====== 00:12:33.340 trtype: tcp 00:12:33.340 adrfam: ipv4 00:12:33.340 subtype: nvme subsystem 00:12:33.340 treq: not required 00:12:33.340 portid: 0 00:12:33.340 trsvcid: 4420 00:12:33.340 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:33.340 traddr: 10.0.0.2 00:12:33.340 eflags: none 00:12:33.340 sectype: none 00:12:33.340 =====Discovery Log Entry 2====== 00:12:33.340 trtype: tcp 00:12:33.340 adrfam: ipv4 00:12:33.340 subtype: nvme subsystem 00:12:33.340 treq: not required 00:12:33.340 portid: 0 00:12:33.340 trsvcid: 4420 00:12:33.340 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:33.340 traddr: 10.0.0.2 00:12:33.340 eflags: none 00:12:33.340 sectype: none 00:12:33.340 =====Discovery Log Entry 3====== 00:12:33.340 trtype: tcp 00:12:33.340 adrfam: ipv4 00:12:33.340 subtype: nvme subsystem 00:12:33.340 treq: not required 00:12:33.340 portid: 
0 00:12:33.340 trsvcid: 4420 00:12:33.340 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:33.340 traddr: 10.0.0.2 00:12:33.340 eflags: none 00:12:33.340 sectype: none 00:12:33.340 =====Discovery Log Entry 4====== 00:12:33.340 trtype: tcp 00:12:33.340 adrfam: ipv4 00:12:33.340 subtype: nvme subsystem 00:12:33.340 treq: not required 00:12:33.340 portid: 0 00:12:33.340 trsvcid: 4420 00:12:33.340 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:33.340 traddr: 10.0.0.2 00:12:33.340 eflags: none 00:12:33.340 sectype: none 00:12:33.340 =====Discovery Log Entry 5====== 00:12:33.340 trtype: tcp 00:12:33.340 adrfam: ipv4 00:12:33.340 subtype: discovery subsystem referral 00:12:33.340 treq: not required 00:12:33.340 portid: 0 00:12:33.340 trsvcid: 4430 00:12:33.340 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:33.340 traddr: 10.0.0.2 00:12:33.340 eflags: none 00:12:33.340 sectype: none 00:12:33.340 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:33.340 Perform nvmf subsystem discovery via RPC 00:12:33.340 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:33.340 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.340 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.340 [ 00:12:33.340 { 00:12:33.340 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:33.340 "subtype": "Discovery", 00:12:33.340 "listen_addresses": [ 00:12:33.340 { 00:12:33.340 "trtype": "TCP", 00:12:33.340 "adrfam": "IPv4", 00:12:33.340 "traddr": "10.0.0.2", 00:12:33.340 "trsvcid": "4420" 00:12:33.340 } 00:12:33.340 ], 00:12:33.340 "allow_any_host": true, 00:12:33.340 "hosts": [] 00:12:33.340 }, 00:12:33.340 { 00:12:33.340 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:33.340 "subtype": "NVMe", 00:12:33.340 "listen_addresses": [ 
00:12:33.340 { 00:12:33.340 "trtype": "TCP", 00:12:33.340 "adrfam": "IPv4", 00:12:33.340 "traddr": "10.0.0.2", 00:12:33.340 "trsvcid": "4420" 00:12:33.340 } 00:12:33.340 ], 00:12:33.340 "allow_any_host": true, 00:12:33.340 "hosts": [], 00:12:33.340 "serial_number": "SPDK00000000000001", 00:12:33.340 "model_number": "SPDK bdev Controller", 00:12:33.340 "max_namespaces": 32, 00:12:33.340 "min_cntlid": 1, 00:12:33.340 "max_cntlid": 65519, 00:12:33.340 "namespaces": [ 00:12:33.340 { 00:12:33.340 "nsid": 1, 00:12:33.340 "bdev_name": "Null1", 00:12:33.340 "name": "Null1", 00:12:33.340 "nguid": "0D5E313E30E74C16BB6AF49398D5F6FC", 00:12:33.340 "uuid": "0d5e313e-30e7-4c16-bb6a-f49398d5f6fc" 00:12:33.340 } 00:12:33.340 ] 00:12:33.340 }, 00:12:33.340 { 00:12:33.340 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:33.340 "subtype": "NVMe", 00:12:33.340 "listen_addresses": [ 00:12:33.340 { 00:12:33.340 "trtype": "TCP", 00:12:33.340 "adrfam": "IPv4", 00:12:33.341 "traddr": "10.0.0.2", 00:12:33.341 "trsvcid": "4420" 00:12:33.341 } 00:12:33.341 ], 00:12:33.341 "allow_any_host": true, 00:12:33.341 "hosts": [], 00:12:33.341 "serial_number": "SPDK00000000000002", 00:12:33.341 "model_number": "SPDK bdev Controller", 00:12:33.341 "max_namespaces": 32, 00:12:33.341 "min_cntlid": 1, 00:12:33.341 "max_cntlid": 65519, 00:12:33.341 "namespaces": [ 00:12:33.341 { 00:12:33.341 "nsid": 1, 00:12:33.341 "bdev_name": "Null2", 00:12:33.341 "name": "Null2", 00:12:33.341 "nguid": "5A6D691467F14996BEF2C7B92FC4F63A", 00:12:33.341 "uuid": "5a6d6914-67f1-4996-bef2-c7b92fc4f63a" 00:12:33.341 } 00:12:33.341 ] 00:12:33.341 }, 00:12:33.341 { 00:12:33.341 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:33.341 "subtype": "NVMe", 00:12:33.341 "listen_addresses": [ 00:12:33.341 { 00:12:33.341 "trtype": "TCP", 00:12:33.341 "adrfam": "IPv4", 00:12:33.341 "traddr": "10.0.0.2", 00:12:33.341 "trsvcid": "4420" 00:12:33.341 } 00:12:33.341 ], 00:12:33.341 "allow_any_host": true, 00:12:33.341 "hosts": [], 00:12:33.341 
"serial_number": "SPDK00000000000003", 00:12:33.341 "model_number": "SPDK bdev Controller", 00:12:33.341 "max_namespaces": 32, 00:12:33.341 "min_cntlid": 1, 00:12:33.341 "max_cntlid": 65519, 00:12:33.341 "namespaces": [ 00:12:33.341 { 00:12:33.341 "nsid": 1, 00:12:33.341 "bdev_name": "Null3", 00:12:33.341 "name": "Null3", 00:12:33.341 "nguid": "425F8ED06709429A8A1956016DA82D02", 00:12:33.341 "uuid": "425f8ed0-6709-429a-8a19-56016da82d02" 00:12:33.341 } 00:12:33.341 ] 00:12:33.341 }, 00:12:33.341 { 00:12:33.341 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:33.341 "subtype": "NVMe", 00:12:33.341 "listen_addresses": [ 00:12:33.341 { 00:12:33.341 "trtype": "TCP", 00:12:33.341 "adrfam": "IPv4", 00:12:33.341 "traddr": "10.0.0.2", 00:12:33.341 "trsvcid": "4420" 00:12:33.341 } 00:12:33.341 ], 00:12:33.341 "allow_any_host": true, 00:12:33.341 "hosts": [], 00:12:33.341 "serial_number": "SPDK00000000000004", 00:12:33.341 "model_number": "SPDK bdev Controller", 00:12:33.341 "max_namespaces": 32, 00:12:33.341 "min_cntlid": 1, 00:12:33.341 "max_cntlid": 65519, 00:12:33.341 "namespaces": [ 00:12:33.341 { 00:12:33.341 "nsid": 1, 00:12:33.341 "bdev_name": "Null4", 00:12:33.341 "name": "Null4", 00:12:33.341 "nguid": "52666FD60E3A499D9E1AC96B3EF1C4F7", 00:12:33.341 "uuid": "52666fd6-0e3a-499d-9e1a-c96b3ef1c4f7" 00:12:33.341 } 00:12:33.341 ] 00:12:33.341 } 00:12:33.341 ] 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 
1 4) 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:33.341 
16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:33.341 rmmod nvme_tcp 00:12:33.341 rmmod nvme_fabrics 00:12:33.341 rmmod nvme_keyring 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@513 -- # '[' -n 3091061 ']' 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # killprocess 3091061 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 3091061 ']' 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 3091061 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:33.341 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3091061 00:12:33.599 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:33.599 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:12:33.599 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3091061' 00:12:33.599 killing process with pid 3091061 00:12:33.599 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 3091061 00:12:33.599 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 3091061 00:12:34.970 16:20:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:34.970 16:20:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:34.970 16:20:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:34.970 16:20:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:34.970 16:20:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # iptables-save 00:12:34.970 16:20:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:34.970 16:20:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # iptables-restore 00:12:34.971 16:20:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:34.971 16:20:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:34.971 16:20:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.971 16:20:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:34.971 16:20:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.872 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:36.872 00:12:36.872 real 0m7.506s 00:12:36.872 user 0m9.605s 00:12:36.872 sys 0m2.224s 00:12:36.872 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:36.872 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:36.872 ************************************ 00:12:36.872 END TEST nvmf_target_discovery 00:12:36.872 ************************************ 00:12:36.872 16:20:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:36.872 16:20:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:36.872 16:20:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:36.872 16:20:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:36.872 ************************************ 00:12:36.872 START TEST nvmf_referrals 00:12:36.872 ************************************ 00:12:36.872 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:36.872 * Looking for test storage... 
00:12:36.872 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:36.872 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:36.872 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lcov --version 00:12:36.872 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:37.130 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:37.130 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:37.131 16:20:37 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:37.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.131 
--rc genhtml_branch_coverage=1 00:12:37.131 --rc genhtml_function_coverage=1 00:12:37.131 --rc genhtml_legend=1 00:12:37.131 --rc geninfo_all_blocks=1 00:12:37.131 --rc geninfo_unexecuted_blocks=1 00:12:37.131 00:12:37.131 ' 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:37.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.131 --rc genhtml_branch_coverage=1 00:12:37.131 --rc genhtml_function_coverage=1 00:12:37.131 --rc genhtml_legend=1 00:12:37.131 --rc geninfo_all_blocks=1 00:12:37.131 --rc geninfo_unexecuted_blocks=1 00:12:37.131 00:12:37.131 ' 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:37.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.131 --rc genhtml_branch_coverage=1 00:12:37.131 --rc genhtml_function_coverage=1 00:12:37.131 --rc genhtml_legend=1 00:12:37.131 --rc geninfo_all_blocks=1 00:12:37.131 --rc geninfo_unexecuted_blocks=1 00:12:37.131 00:12:37.131 ' 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:37.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.131 --rc genhtml_branch_coverage=1 00:12:37.131 --rc genhtml_function_coverage=1 00:12:37.131 --rc genhtml_legend=1 00:12:37.131 --rc geninfo_all_blocks=1 00:12:37.131 --rc geninfo_unexecuted_blocks=1 00:12:37.131 00:12:37.131 ' 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:37.131 
16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:37.131 16:20:37 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:37.131 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:37.131 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:37.132 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:37.132 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:37.132 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:37.132 16:20:37 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:37.132 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.132 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:37.132 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.132 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:12:37.132 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:12:37.132 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:37.132 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:39.033 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:39.033 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # [[ ice == unknown 
]] 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:39.033 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.033 16:20:39 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:39.033 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # is_hw=yes 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 
-- # NVMF_INITIATOR_IP=10.0.0.1 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:39.033 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:39.034 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:39.034 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:39.034 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:39.034 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:39.292 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:39.292 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:39.292 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:39.292 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip 
netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:39.292 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:39.292 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:39.292 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:39.292 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:39.292 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:39.292 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:12:39.292 00:12:39.292 --- 10.0.0.2 ping statistics --- 00:12:39.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.292 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:12:39.292 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:39.292 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:39.292 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:12:39.292 00:12:39.292 --- 10.0.0.1 ping statistics --- 00:12:39.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.292 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:12:39.292 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:39.292 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # return 0 00:12:39.292 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:39.292 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:39.292 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:39.292 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:39.292 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:39.292 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:39.292 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:39.292 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:39.292 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:39.292 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:39.292 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:39.292 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # nvmfpid=3093416 00:12:39.292 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:39.292 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # waitforlisten 3093416 00:12:39.292 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 3093416 ']' 00:12:39.292 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.292 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:39.292 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.292 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:39.292 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:39.292 [2024-09-29 16:20:39.820562] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:12:39.293 [2024-09-29 16:20:39.820715] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:39.550 [2024-09-29 16:20:39.964149] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:39.809 [2024-09-29 16:20:40.236652] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:39.809 [2024-09-29 16:20:40.236746] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:39.809 [2024-09-29 16:20:40.236772] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:39.809 [2024-09-29 16:20:40.236796] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:39.809 [2024-09-29 16:20:40.236816] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:39.809 [2024-09-29 16:20:40.237142] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:39.809 [2024-09-29 16:20:40.237238] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:39.809 [2024-09-29 16:20:40.237454] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.809 [2024-09-29 16:20:40.237454] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:40.375 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:40.375 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:12:40.375 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:40.375 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:40.375 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:40.375 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:40.375 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:40.375 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.375 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:40.375 [2024-09-29 16:20:40.812936] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:40.375 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.375 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:40.375 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.375 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:40.375 [2024-09-29 16:20:40.826922] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:40.375 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.375 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:40.375 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.375 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:40.375 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.375 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:40.375 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.375 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:40.375 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.375 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:40.375 16:20:40 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.375 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:40.375 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.375 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:40.375 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:40.376 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.376 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:40.376 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.376 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:40.376 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:40.376 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:40.376 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:40.376 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.376 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:40.376 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:40.376 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:40.376 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.376 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:40.376 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:40.376 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:40.376 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:40.376 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:40.634 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:40.634 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:40.634 16:20:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:40.634 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:40.634 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:40.634 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:40.634 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.634 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:40.634 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.634 16:20:41 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:40.634 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.634 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:40.634 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.634 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:40.634 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.634 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:40.634 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.634 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:40.634 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.634 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:40.634 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:40.634 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.892 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:40.892 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:40.892 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:40.892 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:12:40.892 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:40.892 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:40.892 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:40.892 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:40.892 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:40.892 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:40.892 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.892 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:40.892 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.892 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:40.892 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.892 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:40.892 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.892 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:40.892 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:40.892 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:40.892 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:40.892 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.892 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:40.892 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:40.892 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.150 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:41.150 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:41.150 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:41.150 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:41.150 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:41.150 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:41.150 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:41.150 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:41.150 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:41.150 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:41.150 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:41.150 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:41.150 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:41.150 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:41.150 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:41.408 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:41.408 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:41.408 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:41.408 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:41.408 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:41.408 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:12:41.666 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:41.666 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:41.666 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.666 16:20:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:41.666 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.666 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:41.666 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:41.666 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:41.666 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:41.666 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.666 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:41.666 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:41.666 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.666 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:41.666 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:41.666 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:41.666 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:41.666 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:41.666 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:41.666 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:41.666 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:41.666 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:41.666 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:41.666 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:41.666 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:41.666 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:41.666 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:41.924 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:41.924 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:41.924 16:20:42 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:41.924 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:41.924 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:41.924 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:41.924 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:42.182 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:42.182 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:42.182 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.182 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:42.182 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.182 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:42.182 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.182 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:42.182 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:42.182 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.182 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:42.182 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:42.182 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:42.182 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:42.182 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:42.182 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:42.182 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:42.440 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:42.440 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:42.440 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:42.440 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:42.440 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:42.440 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:42.440 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:42.440 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:12:42.440 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:42.440 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:42.440 rmmod nvme_tcp 00:12:42.440 rmmod nvme_fabrics 00:12:42.440 rmmod nvme_keyring 00:12:42.440 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:42.440 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:42.440 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:42.440 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@513 -- # '[' -n 3093416 ']' 00:12:42.440 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # killprocess 3093416 00:12:42.440 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 3093416 ']' 00:12:42.440 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 3093416 00:12:42.440 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:12:42.440 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:42.440 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3093416 00:12:42.440 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:42.440 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:42.440 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3093416' 00:12:42.440 killing process with pid 3093416 00:12:42.440 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@969 -- # kill 3093416 00:12:42.440 16:20:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 3093416 00:12:43.814 16:20:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:43.814 16:20:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:43.814 16:20:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:43.814 16:20:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:43.814 16:20:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # iptables-save 00:12:43.814 16:20:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:43.814 16:20:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # iptables-restore 00:12:43.814 16:20:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:43.814 16:20:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:43.814 16:20:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.814 16:20:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:43.814 16:20:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.720 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:45.720 00:12:45.720 real 0m8.847s 00:12:45.720 user 0m15.808s 00:12:45.720 sys 0m2.492s 00:12:45.720 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:45.720 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:45.720 
************************************ 00:12:45.720 END TEST nvmf_referrals 00:12:45.720 ************************************ 00:12:45.720 16:20:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:45.720 16:20:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:45.720 16:20:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:45.720 16:20:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:45.720 ************************************ 00:12:45.720 START TEST nvmf_connect_disconnect 00:12:45.720 ************************************ 00:12:45.720 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:45.720 * Looking for test storage... 
00:12:45.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:45.720 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:45.720 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:12:45.720 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:45.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.979 --rc genhtml_branch_coverage=1 00:12:45.979 --rc genhtml_function_coverage=1 00:12:45.979 --rc genhtml_legend=1 00:12:45.979 --rc geninfo_all_blocks=1 00:12:45.979 --rc geninfo_unexecuted_blocks=1 00:12:45.979 00:12:45.979 ' 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:45.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.979 --rc genhtml_branch_coverage=1 00:12:45.979 --rc genhtml_function_coverage=1 00:12:45.979 --rc genhtml_legend=1 00:12:45.979 --rc geninfo_all_blocks=1 00:12:45.979 --rc geninfo_unexecuted_blocks=1 00:12:45.979 00:12:45.979 ' 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:45.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.979 --rc genhtml_branch_coverage=1 00:12:45.979 --rc genhtml_function_coverage=1 00:12:45.979 --rc genhtml_legend=1 00:12:45.979 --rc geninfo_all_blocks=1 00:12:45.979 --rc geninfo_unexecuted_blocks=1 00:12:45.979 00:12:45.979 ' 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:45.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.979 --rc genhtml_branch_coverage=1 00:12:45.979 --rc genhtml_function_coverage=1 00:12:45.979 --rc genhtml_legend=1 00:12:45.979 --rc geninfo_all_blocks=1 00:12:45.979 --rc geninfo_unexecuted_blocks=1 00:12:45.979 00:12:45.979 ' 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:45.979 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:45.980 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.980 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.980 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.980 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:45.980 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.980 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:45.980 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:45.980 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:45.980 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:45.980 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:45.980 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:45.980 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:45.980 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:45.980 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:45.980 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:45.980 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:45.980 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:45.980 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:45.980 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:45.980 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:45.980 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:45.980 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:45.980 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:45.980 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:45.980 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.980 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:45.980 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.980 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:12:45.980 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:12:45.980 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:45.980 16:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:47.884 16:20:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:47.884 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:47.884 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:47.884 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:47.884 16:20:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:47.884 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # is_hw=yes 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:47.884 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:47.885 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:47.885 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- 
# NVMF_TARGET_INTERFACE=cvl_0_0 00:12:47.885 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:47.885 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:47.885 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:47.885 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:47.885 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:47.885 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:47.885 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:47.885 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:47.885 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:48.144 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:48.144 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:48.144 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:48.144 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:48.144 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:48.144 16:20:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:48.144 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:48.144 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:48.144 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:48.144 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.327 ms 00:12:48.144 00:12:48.144 --- 10.0.0.2 ping statistics --- 00:12:48.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.144 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:12:48.144 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:48.144 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:48.144 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:12:48.144 00:12:48.144 --- 10.0.0.1 ping statistics --- 00:12:48.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.144 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:12:48.144 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:48.144 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # return 0 00:12:48.144 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:48.144 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:48.144 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:48.144 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:48.144 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:48.144 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:48.144 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:48.144 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:48.144 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:48.144 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:48.144 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:48.144 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # 
nvmfpid=3095975 00:12:48.144 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:48.144 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # waitforlisten 3095975 00:12:48.144 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 3095975 ']' 00:12:48.144 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.144 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:48.144 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.144 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:48.144 16:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:48.144 [2024-09-29 16:20:48.666721] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:12:48.144 [2024-09-29 16:20:48.666874] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:48.403 [2024-09-29 16:20:48.817811] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:48.660 [2024-09-29 16:20:49.076260] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:48.660 [2024-09-29 16:20:49.076345] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:48.660 [2024-09-29 16:20:49.076372] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:48.660 [2024-09-29 16:20:49.076396] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:48.660 [2024-09-29 16:20:49.076415] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:48.660 [2024-09-29 16:20:49.076559] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:48.660 [2024-09-29 16:20:49.076619] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:48.660 [2024-09-29 16:20:49.076669] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.660 [2024-09-29 16:20:49.076691] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:49.226 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:49.226 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:12:49.226 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:49.226 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:49.226 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:49.226 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:49.226 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:49.226 16:20:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.226 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:49.226 [2024-09-29 16:20:49.673744] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:49.226 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.226 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:49.226 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.226 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:49.226 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.226 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:49.226 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:49.226 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.226 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:49.226 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.226 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:49.226 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.226 16:20:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:49.226 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.226 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.226 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.226 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:49.226 [2024-09-29 16:20:49.777430] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.226 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.226 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:49.226 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:49.226 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:49.226 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:51.753 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.282 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.807 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.258 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.182 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.709 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.236 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.758 [repeated "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" messages elided: the connect/disconnect loop emits the same line roughly every 2.5 s from 00:13:10.758 through 00:16:18.754]
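The repeating "disconnected 1 controller(s)" lines above come from the test's connect/disconnect loop, and the cleanup trace that follows uses the same bounded-retry idiom (`set +e`, `for i in {1..20}`, `modprobe -v -r nvme-tcp`, `set -e`). A minimal sketch of that pattern, with a hypothetical `retry` helper standing in for the real modprobe/nvme calls (the helper name is ours, not SPDK's):

```shell
#!/usr/bin/env bash
# retry CMD...: run CMD up to 20 times, stopping at the first success.
# Mirrors the bounded-retry idiom traced above around `modprobe -v -r nvme-tcp`.
retry() {
    local i rc=1
    set +e                      # tolerate failures inside the loop, as the trace does
    for i in {1..20}; do
        "$@" && { rc=0; break; }
        sleep 0.1               # brief pause between attempts
    done
    set -e                      # restore strict mode afterwards
    return "$rc"
}

# Example: a command that fails twice before succeeding.
attempts=0
flaky() {
    attempts=$((attempts + 1))
    [ "$attempts" -ge 3 ]
}

retry flaky && echo "succeeded after $attempts attempts"
```

The `set +e`/`set -e` bracketing matters: without it, the first failed unload attempt would abort the whole test script instead of being retried.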
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:20.654 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:23.181 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:25.709 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:27.608 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:30.138 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:32.666 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:34.671 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:37.271 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:39.792 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:41.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:44.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:44.218 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:44.218 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:44.218 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:44.218 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:44.218 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:44.218 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:44.218 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:44.218 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:44.218 rmmod nvme_tcp 00:16:44.218 rmmod nvme_fabrics 00:16:44.218 rmmod nvme_keyring 00:16:44.218 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:16:44.218 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:44.218 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:44.218 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@513 -- # '[' -n 3095975 ']' 00:16:44.218 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # killprocess 3095975 00:16:44.218 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 3095975 ']' 00:16:44.218 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 3095975 00:16:44.218 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:16:44.218 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:44.218 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3095975 00:16:44.218 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:44.218 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:44.218 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3095975' 00:16:44.218 killing process with pid 3095975 00:16:44.218 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 3095975 00:16:44.218 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 3095975 00:16:45.593 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:45.593 16:24:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:45.593 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:45.593 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:45.593 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # iptables-save 00:16:45.593 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:45.593 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # iptables-restore 00:16:45.593 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:45.593 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:45.593 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.593 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:45.593 16:24:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:47.497 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:47.497 00:16:47.497 real 4m1.722s 00:16:47.497 user 15m13.800s 00:16:47.497 sys 0m38.733s 00:16:47.497 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:47.497 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:47.497 ************************************ 00:16:47.497 END TEST nvmf_connect_disconnect 00:16:47.497 ************************************ 00:16:47.497 16:24:47 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:47.497 16:24:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:47.497 16:24:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:47.497 16:24:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:47.497 ************************************ 00:16:47.497 START TEST nvmf_multitarget 00:16:47.497 ************************************ 00:16:47.497 16:24:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:47.497 * Looking for test storage... 00:16:47.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:47.497 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:47.497 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lcov --version 00:16:47.497 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 
-- # read -ra ver1 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:47.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.756 --rc genhtml_branch_coverage=1 00:16:47.756 --rc genhtml_function_coverage=1 00:16:47.756 --rc genhtml_legend=1 00:16:47.756 --rc geninfo_all_blocks=1 00:16:47.756 --rc 
geninfo_unexecuted_blocks=1 00:16:47.756 00:16:47.756 ' 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:47.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.756 --rc genhtml_branch_coverage=1 00:16:47.756 --rc genhtml_function_coverage=1 00:16:47.756 --rc genhtml_legend=1 00:16:47.756 --rc geninfo_all_blocks=1 00:16:47.756 --rc geninfo_unexecuted_blocks=1 00:16:47.756 00:16:47.756 ' 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:47.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.756 --rc genhtml_branch_coverage=1 00:16:47.756 --rc genhtml_function_coverage=1 00:16:47.756 --rc genhtml_legend=1 00:16:47.756 --rc geninfo_all_blocks=1 00:16:47.756 --rc geninfo_unexecuted_blocks=1 00:16:47.756 00:16:47.756 ' 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:47.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.756 --rc genhtml_branch_coverage=1 00:16:47.756 --rc genhtml_function_coverage=1 00:16:47.756 --rc genhtml_legend=1 00:16:47.756 --rc geninfo_all_blocks=1 00:16:47.756 --rc geninfo_unexecuted_blocks=1 00:16:47.756 00:16:47.756 ' 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:47.756 16:24:48 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.756 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.757 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.757 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:47.757 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.757 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:47.757 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:47.757 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:47.757 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:47.757 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:47.757 16:24:48 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:47.757 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:47.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:47.757 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:47.757 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:47.757 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:47.757 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:47.757 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:47.757 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:47.757 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:47.757 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:47.757 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:47.757 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:47.757 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:47.757 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:47.757 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:47.757 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@438 -- # [[ phy != virt ]] 00:16:47.757 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:16:47.757 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:47.757 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@322 -- # local -ga mlx 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 
00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:50.288 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:50.288 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ tcp == 
rdma ]] 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ up == up ]] 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:50.288 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@413 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ up == up ]] 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:50.288 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # is_hw=yes 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:50.288 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:50.288 16:24:50 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:50.289 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:50.289 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:50.289 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:50.289 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:50.289 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:50.289 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:50.289 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:50.289 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:50.289 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:50.289 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:50.289 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:50.289 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:50.289 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:50.289 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:50.289 16:24:50 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:50.289 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:50.289 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:50.289 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:16:50.289 00:16:50.289 --- 10.0.0.2 ping statistics --- 00:16:50.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.289 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:16:50.289 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:50.289 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:50.289 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:16:50.289 00:16:50.289 --- 10.0.0.1 ping statistics --- 00:16:50.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.289 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:16:50.289 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:50.289 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # return 0 00:16:50.289 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:50.289 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:50.289 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:50.289 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:50.289 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:50.289 16:24:50 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:50.289 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:50.289 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:50.289 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:50.289 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:50.289 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:50.289 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # nvmfpid=3128214 00:16:50.289 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:50.289 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # waitforlisten 3128214 00:16:50.289 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 3128214 ']' 00:16:50.289 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.289 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:50.289 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:50.289 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:50.289 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:50.289 [2024-09-29 16:24:50.572471] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:16:50.289 [2024-09-29 16:24:50.572618] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:50.289 [2024-09-29 16:24:50.711211] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:50.547 [2024-09-29 16:24:50.967790] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:50.547 [2024-09-29 16:24:50.967880] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:50.547 [2024-09-29 16:24:50.967906] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:50.547 [2024-09-29 16:24:50.967931] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:50.547 [2024-09-29 16:24:50.967951] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:50.547 [2024-09-29 16:24:50.968101] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:50.547 [2024-09-29 16:24:50.968181] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:50.547 [2024-09-29 16:24:50.968276] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.547 [2024-09-29 16:24:50.968284] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:16:51.113 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:51.114 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:16:51.114 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:51.114 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:51.114 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:51.114 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:51.114 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:51.114 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:51.114 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:51.371 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:51.371 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n 
nvmf_tgt_1 -s 32 00:16:51.371 "nvmf_tgt_1" 00:16:51.371 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:51.371 "nvmf_tgt_2" 00:16:51.371 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:51.371 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:51.629 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:51.629 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:51.629 true 00:16:51.629 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:51.888 true 00:16:51.888 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:51.888 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:51.888 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:51.888 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:51.888 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:51.888 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:51.888 16:24:52 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:51.888 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:51.888 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:51.888 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:51.888 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:51.888 rmmod nvme_tcp 00:16:51.888 rmmod nvme_fabrics 00:16:51.888 rmmod nvme_keyring 00:16:51.888 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:51.888 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:51.888 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:51.888 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@513 -- # '[' -n 3128214 ']' 00:16:51.888 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # killprocess 3128214 00:16:51.888 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 3128214 ']' 00:16:51.888 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 3128214 00:16:51.888 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:16:51.888 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:51.888 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3128214 00:16:52.147 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:52.147 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- 
# '[' reactor_0 = sudo ']' 00:16:52.147 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3128214' 00:16:52.147 killing process with pid 3128214 00:16:52.147 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 3128214 00:16:52.147 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 3128214 00:16:53.521 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:53.521 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:53.521 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:53.521 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:16:53.521 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # iptables-save 00:16:53.521 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:53.521 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # iptables-restore 00:16:53.521 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:53.521 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:53.521 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:53.521 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:53.521 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:55.426 
00:16:55.426 real 0m7.758s 00:16:55.426 user 0m11.692s 00:16:55.426 sys 0m2.246s 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:55.426 ************************************ 00:16:55.426 END TEST nvmf_multitarget 00:16:55.426 ************************************ 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:55.426 ************************************ 00:16:55.426 START TEST nvmf_rpc 00:16:55.426 ************************************ 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:55.426 * Looking for test storage... 
00:16:55.426 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:55.426 16:24:55 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:55.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.426 --rc genhtml_branch_coverage=1 00:16:55.426 --rc genhtml_function_coverage=1 00:16:55.426 --rc genhtml_legend=1 00:16:55.426 --rc geninfo_all_blocks=1 00:16:55.426 --rc geninfo_unexecuted_blocks=1 
00:16:55.426 00:16:55.426 ' 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:55.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.426 --rc genhtml_branch_coverage=1 00:16:55.426 --rc genhtml_function_coverage=1 00:16:55.426 --rc genhtml_legend=1 00:16:55.426 --rc geninfo_all_blocks=1 00:16:55.426 --rc geninfo_unexecuted_blocks=1 00:16:55.426 00:16:55.426 ' 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:55.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.426 --rc genhtml_branch_coverage=1 00:16:55.426 --rc genhtml_function_coverage=1 00:16:55.426 --rc genhtml_legend=1 00:16:55.426 --rc geninfo_all_blocks=1 00:16:55.426 --rc geninfo_unexecuted_blocks=1 00:16:55.426 00:16:55.426 ' 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:55.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.426 --rc genhtml_branch_coverage=1 00:16:55.426 --rc genhtml_function_coverage=1 00:16:55.426 --rc genhtml_legend=1 00:16:55.426 --rc geninfo_all_blocks=1 00:16:55.426 --rc geninfo_unexecuted_blocks=1 00:16:55.426 00:16:55.426 ' 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:55.426 16:24:55 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.426 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.427 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.427 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:55.427 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.427 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:55.427 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:55.427 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:55.427 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:55.427 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:55.427 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:55.427 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:55.427 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:55.427 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:55.427 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:55.427 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:55.427 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:55.427 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:55.427 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:55.427 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:55.427 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:55.427 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:55.427 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:55.427 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:55.427 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:55.427 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:55.427 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:16:55.427 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:16:55.427 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:16:55.427 16:24:55 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.958 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:57.958 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:16:57.958 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:57.958 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:57.958 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:57.958 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:57.958 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:57.958 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:16:57.958 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:57.958 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:16:57.958 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:16:57.958 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:16:57.958 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:16:57.958 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:16:57.958 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:16:57.958 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:57.958 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:57.958 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:57.958 
16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:57.959 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:57.959 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:16:57.959 16:24:58 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:57.959 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:57.959 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:16:57.959 16:24:58 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # is_hw=yes 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 
00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:57.959 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:57.959 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:16:57.959 00:16:57.959 --- 10.0.0.2 ping statistics --- 00:16:57.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.959 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:57.959 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:57.959 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:16:57.959 00:16:57.959 --- 10.0.0.1 ping statistics --- 00:16:57.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.959 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # return 0 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # nvmfpid=3130613 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:57.959 
16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # waitforlisten 3130613 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 3130613 ']' 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:57.959 16:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:58.217 [2024-09-29 16:24:58.527059] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:16:58.217 [2024-09-29 16:24:58.527191] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:58.217 [2024-09-29 16:24:58.664808] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:58.475 [2024-09-29 16:24:58.931112] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:58.475 [2024-09-29 16:24:58.931195] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:58.475 [2024-09-29 16:24:58.931221] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:58.475 [2024-09-29 16:24:58.931245] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:16:58.475 [2024-09-29 16:24:58.931265] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:58.475 [2024-09-29 16:24:58.931691] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:58.475 [2024-09-29 16:24:58.931739] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:58.475 [2024-09-29 16:24:58.931775] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:16:58.475 [2024-09-29 16:24:58.931766] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.039 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:59.039 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:16:59.039 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:59.039 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:59.039 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.039 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:59.039 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:59.039 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.039 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.039 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.039 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:59.039 "tick_rate": 2700000000, 00:16:59.039 "poll_groups": [ 00:16:59.039 { 00:16:59.039 "name": "nvmf_tgt_poll_group_000", 00:16:59.039 "admin_qpairs": 0, 00:16:59.039 "io_qpairs": 0, 00:16:59.039 
"current_admin_qpairs": 0, 00:16:59.039 "current_io_qpairs": 0, 00:16:59.039 "pending_bdev_io": 0, 00:16:59.039 "completed_nvme_io": 0, 00:16:59.039 "transports": [] 00:16:59.039 }, 00:16:59.039 { 00:16:59.039 "name": "nvmf_tgt_poll_group_001", 00:16:59.039 "admin_qpairs": 0, 00:16:59.039 "io_qpairs": 0, 00:16:59.039 "current_admin_qpairs": 0, 00:16:59.039 "current_io_qpairs": 0, 00:16:59.039 "pending_bdev_io": 0, 00:16:59.039 "completed_nvme_io": 0, 00:16:59.039 "transports": [] 00:16:59.039 }, 00:16:59.039 { 00:16:59.039 "name": "nvmf_tgt_poll_group_002", 00:16:59.039 "admin_qpairs": 0, 00:16:59.039 "io_qpairs": 0, 00:16:59.039 "current_admin_qpairs": 0, 00:16:59.039 "current_io_qpairs": 0, 00:16:59.039 "pending_bdev_io": 0, 00:16:59.039 "completed_nvme_io": 0, 00:16:59.039 "transports": [] 00:16:59.039 }, 00:16:59.039 { 00:16:59.039 "name": "nvmf_tgt_poll_group_003", 00:16:59.039 "admin_qpairs": 0, 00:16:59.039 "io_qpairs": 0, 00:16:59.039 "current_admin_qpairs": 0, 00:16:59.039 "current_io_qpairs": 0, 00:16:59.039 "pending_bdev_io": 0, 00:16:59.039 "completed_nvme_io": 0, 00:16:59.039 "transports": [] 00:16:59.039 } 00:16:59.039 ] 00:16:59.039 }' 00:16:59.039 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:59.039 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:59.039 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:59.039 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:59.039 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:59.039 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:59.297 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:59.297 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:59.297 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.297 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.297 [2024-09-29 16:24:59.620813] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:59.297 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.297 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:59.297 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.297 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.297 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.297 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:59.297 "tick_rate": 2700000000, 00:16:59.297 "poll_groups": [ 00:16:59.297 { 00:16:59.297 "name": "nvmf_tgt_poll_group_000", 00:16:59.297 "admin_qpairs": 0, 00:16:59.297 "io_qpairs": 0, 00:16:59.297 "current_admin_qpairs": 0, 00:16:59.297 "current_io_qpairs": 0, 00:16:59.297 "pending_bdev_io": 0, 00:16:59.297 "completed_nvme_io": 0, 00:16:59.297 "transports": [ 00:16:59.297 { 00:16:59.297 "trtype": "TCP" 00:16:59.297 } 00:16:59.297 ] 00:16:59.297 }, 00:16:59.297 { 00:16:59.297 "name": "nvmf_tgt_poll_group_001", 00:16:59.297 "admin_qpairs": 0, 00:16:59.297 "io_qpairs": 0, 00:16:59.297 "current_admin_qpairs": 0, 00:16:59.297 "current_io_qpairs": 0, 00:16:59.297 "pending_bdev_io": 0, 00:16:59.297 "completed_nvme_io": 0, 00:16:59.297 "transports": [ 00:16:59.297 { 00:16:59.297 "trtype": "TCP" 00:16:59.297 } 00:16:59.297 ] 00:16:59.297 }, 00:16:59.297 { 00:16:59.297 "name": "nvmf_tgt_poll_group_002", 00:16:59.297 "admin_qpairs": 0, 00:16:59.297 "io_qpairs": 0, 00:16:59.297 
"current_admin_qpairs": 0, 00:16:59.297 "current_io_qpairs": 0, 00:16:59.297 "pending_bdev_io": 0, 00:16:59.297 "completed_nvme_io": 0, 00:16:59.297 "transports": [ 00:16:59.297 { 00:16:59.297 "trtype": "TCP" 00:16:59.297 } 00:16:59.297 ] 00:16:59.297 }, 00:16:59.297 { 00:16:59.297 "name": "nvmf_tgt_poll_group_003", 00:16:59.298 "admin_qpairs": 0, 00:16:59.298 "io_qpairs": 0, 00:16:59.298 "current_admin_qpairs": 0, 00:16:59.298 "current_io_qpairs": 0, 00:16:59.298 "pending_bdev_io": 0, 00:16:59.298 "completed_nvme_io": 0, 00:16:59.298 "transports": [ 00:16:59.298 { 00:16:59.298 "trtype": "TCP" 00:16:59.298 } 00:16:59.298 ] 00:16:59.298 } 00:16:59.298 ] 00:16:59.298 }' 00:16:59.298 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:59.298 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:59.298 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:59.298 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:59.298 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:59.298 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:59.298 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:59.298 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:59.298 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:59.298 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:59.298 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:59.298 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:16:59.298 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:59.298 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:59.298 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.298 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.298 Malloc1 00:16:59.298 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.298 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:59.298 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.298 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.298 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.298 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:59.298 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.298 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.298 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.298 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:59.298 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.298 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.298 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.298 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:59.298 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.298 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.298 [2024-09-29 16:24:59.831398] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:59.298 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.298 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:59.298 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:16:59.298 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:59.298 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:16:59.298 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:59.298 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:16:59.298 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:59.298 
16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:16:59.298 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:59.298 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:16:59.298 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:16:59.298 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:59.298 [2024-09-29 16:24:59.854665] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:16:59.556 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:59.556 could not add new controller: failed to write to nvme-fabrics device 00:16:59.557 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:16:59.557 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:59.557 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:59.557 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:59.557 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:59.557 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.557 16:24:59 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.557 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.557 16:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:00.121 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:17:00.121 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:00.121 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:00.121 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:00.121 16:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:02.643 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:02.643 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:02.643 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:02.643 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:02.644 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:02.644 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:02.644 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:02.644 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:02.644 16:25:02 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:02.644 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:02.644 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:02.644 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:02.644 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:02.644 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:02.644 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:02.644 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:02.644 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.644 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:02.644 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.644 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:02.644 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:17:02.644 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:17:02.644 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme
00:17:02.644 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:02.644 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme
00:17:02.644 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:02.644 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme
00:17:02.644 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:02.644 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme
00:17:02.644 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]]
00:17:02.644 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:17:02.644 [2024-09-29 16:25:02.760810] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55'
00:17:02.644 Failed to write to /dev/nvme-fabrics: Input/output error
00:17:02.644 could not add new controller: failed to write to nvme-fabrics device
00:17:02.644 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1
00:17:02.644 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:17:02.644 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:17:02.644 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:17:02.644 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
00:17:02.644 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:02.644 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:02.644 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:02.644 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:17:03.209 16:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME
00:17:03.209 16:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:17:03.209 16:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:17:03.209 16:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:17:03.209 16:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:17:05.107 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:17:05.107 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:17:05.107 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:17:05.107 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:17:05.107 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:17:05.107 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:17:05.107 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:17:05.107 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:17:05.107 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:17:05.107 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:17:05.107 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:17:05.107 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:17:05.107 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:17:05.107 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:17:05.107 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:17:05.107 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:05.108 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:05.108 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:05.108 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:05.108 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5
00:17:05.108 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:17:05.108 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:17:05.108 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:05.108 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:05.108 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:05.108 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:05.108 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:05.108 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:05.108 [2024-09-29 16:25:05.662210] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:05.108 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:05.108 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:17:05.108 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:05.108 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:05.366 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:05.366 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:17:05.366 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:05.366 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:05.366 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:05.366 16:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:17:05.933 16:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:17:05.933 16:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:17:05.933 16:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:17:05.933 16:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:17:05.933 16:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:17:07.833 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:17:07.833 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:17:07.833 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:17:07.833 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:17:07.833 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:17:07.833 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:17:07.833 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:17:08.092 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:17:08.092 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:17:08.092 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:17:08.092 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:17:08.092 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:17:08.092 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:17:08.092 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:17:08.092 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:17:08.092 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:17:08.092 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:08.092 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:08.092 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:08.092 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:08.092 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:08.092 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:08.092 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:08.092 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:17:08.092 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:17:08.092 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:08.092 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:08.092 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:08.092 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:08.092 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:08.092 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:08.092 [2024-09-29 16:25:08.530388] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:08.092 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:08.092 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:17:08.092 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:08.092 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:08.092 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:08.092 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:17:08.092 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:08.092 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:08.092 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:08.092 16:25:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:17:09.027 16:25:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:17:09.027 16:25:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:17:09.027 16:25:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:17:09.027 16:25:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:17:09.027 16:25:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:17:10.925 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:17:10.925 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:17:10.925 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:17:10.925 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:17:10.925 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:17:10.925 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:17:10.925 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:17:10.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:17:10.925 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:17:10.925 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:17:10.925 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:17:10.925 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:17:10.925 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:17:10.925 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:17:10.925 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:17:10.925 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:17:10.925 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:10.925 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:11.183 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:11.183 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:11.183 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:11.183 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:11.184 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:11.184 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:17:11.184 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:17:11.184 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:11.184 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:11.184 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:11.184 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:11.184 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:11.184 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:11.184 [2024-09-29 16:25:11.510559] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:11.184 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:11.184 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:17:11.184 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:11.184 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:11.184 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:11.184 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:17:11.184 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:11.184 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:11.184 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:11.184 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:17:11.751 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:17:11.751 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:17:11.751 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:17:11.751 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:17:11.751 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:17:13.650 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:17:13.650 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:17:13.650 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:17:13.650 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:17:13.650 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:17:13.650 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:17:13.650 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:17:13.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:17:13.909 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:17:13.909 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:17:13.909 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:17:13.909 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:17:13.909 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:17:13.909 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:17:13.909 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:17:13.909 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:17:13.909 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:13.909 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:13.909 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:13.909 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:13.909 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:13.909 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:13.909 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:13.909 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:17:13.909 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:17:13.909 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:13.909 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:13.909 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:13.909 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:13.909 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:13.909 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:13.909 [2024-09-29 16:25:14.366054] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:13.909 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:13.909 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:17:13.909 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:13.909 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:13.909 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:13.909 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:17:13.909 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:13.909 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:13.909 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:13.909 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:17:14.843 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:17:14.843 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:17:14.843 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:17:14.843 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:17:14.843 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:17:16.743 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:17:16.743 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:17:16.743 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:17:16.743 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:17:16.743 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:17:16.743 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:17:16.743 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:17:16.743 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:17:17.001 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:17:17.001 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:17:17.001 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:17:17.001 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:17:17.001 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:17:17.001 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:17:17.001 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:17:17.001 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:17:17.001 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:17.001 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:17.001 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:17.001 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:17.001 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:17.001 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:17.001 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:17.001 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:17:17.001 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:17:17.001 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:17.001 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:17.001 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:17.001 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:17.001 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:17.001 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:17.001 [2024-09-29 16:25:17.357226] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:17.001 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:17.001 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:17:17.001 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:17.001 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:17.002 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:17.002 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:17:17.002 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:17.002 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:17.002 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:17.002 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:17:17.567 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:17:17.567 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:17:17.567 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:17:17.567 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:17:17.567 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:17:19.466 16:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:17:19.466 16:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:17:19.466 16:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:17:19.466 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:17:19.466 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:17:19.466 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:17:19.466 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:17:19.724 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:17:19.724 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:17:19.724 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:17:19.724 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:17:19.724 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:17:19.724 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:17:19.724 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:17:19.724 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:17:19.724 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:17:19.724 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:19.724 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:19.724 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:19.724 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:19.724 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:19.724 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:19.725 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:19.725 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5
00:17:19.725 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:17:19.725 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:17:19.725 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:19.725 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:19.725 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:19.725 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:19.725 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:19.725 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:19.725 [2024-09-29 16:25:20.273440] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:19.725 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:19.725 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:17:19.725 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:19.725 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:19.725 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:19.725 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:17:19.725 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:19.725 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:19.984 [2024-09-29 16:25:20.321526] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:19.984 [2024-09-29 16:25:20.369732] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:19.984 
16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.984 [2024-09-29 16:25:20.417858] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.984 [2024-09-29 
16:25:20.466043] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.984 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.985 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:19.985 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.985 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.985 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.985 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:19.985 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.985 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.985 
16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.985 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:19.985 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.985 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.985 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.985 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:19.985 "tick_rate": 2700000000, 00:17:19.985 "poll_groups": [ 00:17:19.985 { 00:17:19.985 "name": "nvmf_tgt_poll_group_000", 00:17:19.985 "admin_qpairs": 2, 00:17:19.985 "io_qpairs": 84, 00:17:19.985 "current_admin_qpairs": 0, 00:17:19.985 "current_io_qpairs": 0, 00:17:19.985 "pending_bdev_io": 0, 00:17:19.985 "completed_nvme_io": 142, 00:17:19.985 "transports": [ 00:17:19.985 { 00:17:19.985 "trtype": "TCP" 00:17:19.985 } 00:17:19.985 ] 00:17:19.985 }, 00:17:19.985 { 00:17:19.985 "name": "nvmf_tgt_poll_group_001", 00:17:19.985 "admin_qpairs": 2, 00:17:19.985 "io_qpairs": 84, 00:17:19.985 "current_admin_qpairs": 0, 00:17:19.985 "current_io_qpairs": 0, 00:17:19.985 "pending_bdev_io": 0, 00:17:19.985 "completed_nvme_io": 224, 00:17:19.985 "transports": [ 00:17:19.985 { 00:17:19.985 "trtype": "TCP" 00:17:19.985 } 00:17:19.985 ] 00:17:19.985 }, 00:17:19.985 { 00:17:19.985 "name": "nvmf_tgt_poll_group_002", 00:17:19.985 "admin_qpairs": 1, 00:17:19.985 "io_qpairs": 84, 00:17:19.985 "current_admin_qpairs": 0, 00:17:19.985 "current_io_qpairs": 0, 00:17:19.985 "pending_bdev_io": 0, 00:17:19.985 "completed_nvme_io": 184, 00:17:19.985 "transports": [ 00:17:19.985 { 00:17:19.985 "trtype": "TCP" 00:17:19.985 } 00:17:19.985 ] 00:17:19.985 }, 00:17:19.985 { 00:17:19.985 "name": "nvmf_tgt_poll_group_003", 00:17:19.985 "admin_qpairs": 2, 00:17:19.985 "io_qpairs": 84, 
00:17:19.985 "current_admin_qpairs": 0, 00:17:19.985 "current_io_qpairs": 0, 00:17:19.985 "pending_bdev_io": 0, 00:17:19.985 "completed_nvme_io": 136, 00:17:19.985 "transports": [ 00:17:19.985 { 00:17:19.985 "trtype": "TCP" 00:17:19.985 } 00:17:19.985 ] 00:17:19.985 } 00:17:19.985 ] 00:17:19.985 }' 00:17:19.985 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:17:19.985 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:19.985 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:19.985 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:20.244 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:20.244 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:20.244 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:20.244 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:20.244 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:20.244 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:17:20.244 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:17:20.244 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:20.244 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:17:20.244 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:20.244 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:17:20.244 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:20.244 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:17:20.244 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:20.244 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:20.244 rmmod nvme_tcp 00:17:20.244 rmmod nvme_fabrics 00:17:20.244 rmmod nvme_keyring 00:17:20.244 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:20.244 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:17:20.244 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:17:20.244 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@513 -- # '[' -n 3130613 ']' 00:17:20.244 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # killprocess 3130613 00:17:20.244 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 3130613 ']' 00:17:20.244 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 3130613 00:17:20.244 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:17:20.244 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:20.244 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3130613 00:17:20.244 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:20.244 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:20.244 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3130613' 00:17:20.244 killing process with pid 3130613 00:17:20.244 16:25:20 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 3130613 00:17:20.244 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 3130613 00:17:21.618 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:21.618 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:21.618 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:21.618 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:17:21.618 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # iptables-save 00:17:21.619 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:21.619 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # iptables-restore 00:17:21.619 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:21.619 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:21.619 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.619 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:21.619 16:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:24.154 00:17:24.154 real 0m28.394s 00:17:24.154 user 1m30.617s 00:17:24.154 sys 0m4.679s 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.154 ************************************ 00:17:24.154 END TEST 
nvmf_rpc 00:17:24.154 ************************************ 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:24.154 ************************************ 00:17:24.154 START TEST nvmf_invalid 00:17:24.154 ************************************ 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:24.154 * Looking for test storage... 00:17:24.154 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lcov --version 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@336 -- # read -ra ver1 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:24.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.154 --rc genhtml_branch_coverage=1 00:17:24.154 --rc genhtml_function_coverage=1 00:17:24.154 --rc genhtml_legend=1 00:17:24.154 --rc geninfo_all_blocks=1 00:17:24.154 --rc geninfo_unexecuted_blocks=1 00:17:24.154 00:17:24.154 ' 
00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:24.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.154 --rc genhtml_branch_coverage=1 00:17:24.154 --rc genhtml_function_coverage=1 00:17:24.154 --rc genhtml_legend=1 00:17:24.154 --rc geninfo_all_blocks=1 00:17:24.154 --rc geninfo_unexecuted_blocks=1 00:17:24.154 00:17:24.154 ' 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:24.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.154 --rc genhtml_branch_coverage=1 00:17:24.154 --rc genhtml_function_coverage=1 00:17:24.154 --rc genhtml_legend=1 00:17:24.154 --rc geninfo_all_blocks=1 00:17:24.154 --rc geninfo_unexecuted_blocks=1 00:17:24.154 00:17:24.154 ' 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:24.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.154 --rc genhtml_branch_coverage=1 00:17:24.154 --rc genhtml_function_coverage=1 00:17:24.154 --rc genhtml_legend=1 00:17:24.154 --rc geninfo_all_blocks=1 00:17:24.154 --rc geninfo_unexecuted_blocks=1 00:17:24.154 00:17:24.154 ' 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:24.154 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:24.155 16:25:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:24.155 
16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:24.155 16:25:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:24.155 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:24.155 16:25:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:17:24.155 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:26.058 16:25:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:17:26.058 16:25:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:26.058 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:26.058 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@374 -- # [[ 0x159b 
== \0\x\1\0\1\7 ]] 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:26.058 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:26.058 16:25:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:26.058 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # is_hw=yes 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:26.058 16:25:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:26.058 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:26.059 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:26.059 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:26.059 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:17:26.059 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:26.059 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:26.059 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:26.059 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:17:26.059 00:17:26.059 --- 10.0.0.2 ping statistics --- 00:17:26.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.059 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:17:26.059 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:26.059 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:26.059 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:17:26.059 00:17:26.059 --- 10.0.0.1 ping statistics --- 00:17:26.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.059 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:17:26.059 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:26.059 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # return 0 00:17:26.059 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:26.059 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:26.059 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:26.059 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:26.059 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:26.059 16:25:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:26.059 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:26.059 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:26.059 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:26.059 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:26.059 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:26.059 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # nvmfpid=3135441 00:17:26.059 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:26.059 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # waitforlisten 3135441 00:17:26.059 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 3135441 ']' 00:17:26.059 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.059 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:26.059 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:26.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:26.059 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:26.059 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:26.059 [2024-09-29 16:25:26.568141] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:17:26.059 [2024-09-29 16:25:26.568305] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:26.317 [2024-09-29 16:25:26.716309] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:26.574 [2024-09-29 16:25:26.991040] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:26.574 [2024-09-29 16:25:26.991127] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:26.574 [2024-09-29 16:25:26.991154] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:26.574 [2024-09-29 16:25:26.991190] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:26.574 [2024-09-29 16:25:26.991211] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:26.574 [2024-09-29 16:25:26.991349] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.574 [2024-09-29 16:25:26.991434] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:26.574 [2024-09-29 16:25:26.993358] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:26.574 [2024-09-29 16:25:26.993389] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.139 16:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:27.139 16:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:17:27.139 16:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:27.140 16:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:27.140 16:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:27.140 16:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:27.140 16:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:27.140 16:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode29734 00:17:27.399 [2024-09-29 16:25:27.797081] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:27.399 16:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:27.399 { 00:17:27.399 "nqn": "nqn.2016-06.io.spdk:cnode29734", 00:17:27.399 "tgt_name": "foobar", 00:17:27.399 "method": "nvmf_create_subsystem", 00:17:27.399 "req_id": 1 00:17:27.399 } 00:17:27.399 Got JSON-RPC error 
response 00:17:27.399 response: 00:17:27.399 { 00:17:27.399 "code": -32603, 00:17:27.399 "message": "Unable to find target foobar" 00:17:27.399 }' 00:17:27.399 16:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:27.399 { 00:17:27.399 "nqn": "nqn.2016-06.io.spdk:cnode29734", 00:17:27.399 "tgt_name": "foobar", 00:17:27.399 "method": "nvmf_create_subsystem", 00:17:27.399 "req_id": 1 00:17:27.399 } 00:17:27.399 Got JSON-RPC error response 00:17:27.399 response: 00:17:27.399 { 00:17:27.399 "code": -32603, 00:17:27.399 "message": "Unable to find target foobar" 00:17:27.399 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:27.399 16:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:27.399 16:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode314 00:17:27.685 [2024-09-29 16:25:28.078180] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode314: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:27.685 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:27.685 { 00:17:27.685 "nqn": "nqn.2016-06.io.spdk:cnode314", 00:17:27.685 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:27.685 "method": "nvmf_create_subsystem", 00:17:27.685 "req_id": 1 00:17:27.685 } 00:17:27.685 Got JSON-RPC error response 00:17:27.685 response: 00:17:27.685 { 00:17:27.685 "code": -32602, 00:17:27.685 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:27.685 }' 00:17:27.685 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:27.685 { 00:17:27.685 "nqn": "nqn.2016-06.io.spdk:cnode314", 00:17:27.685 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:27.685 "method": "nvmf_create_subsystem", 00:17:27.685 
"req_id": 1 00:17:27.685 } 00:17:27.685 Got JSON-RPC error response 00:17:27.685 response: 00:17:27.685 { 00:17:27.685 "code": -32602, 00:17:27.685 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:27.685 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:27.685 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:27.685 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode19481 00:17:27.992 [2024-09-29 16:25:28.351074] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19481: invalid model number 'SPDK_Controller' 00:17:27.992 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:27.992 { 00:17:27.992 "nqn": "nqn.2016-06.io.spdk:cnode19481", 00:17:27.992 "model_number": "SPDK_Controller\u001f", 00:17:27.992 "method": "nvmf_create_subsystem", 00:17:27.992 "req_id": 1 00:17:27.992 } 00:17:27.992 Got JSON-RPC error response 00:17:27.992 response: 00:17:27.992 { 00:17:27.992 "code": -32602, 00:17:27.992 "message": "Invalid MN SPDK_Controller\u001f" 00:17:27.992 }' 00:17:27.992 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:27.992 { 00:17:27.992 "nqn": "nqn.2016-06.io.spdk:cnode19481", 00:17:27.992 "model_number": "SPDK_Controller\u001f", 00:17:27.992 "method": "nvmf_create_subsystem", 00:17:27.992 "req_id": 1 00:17:27.992 } 00:17:27.992 Got JSON-RPC error response 00:17:27.992 response: 00:17:27.992 { 00:17:27.992 "code": -32602, 00:17:27.992 "message": "Invalid MN SPDK_Controller\u001f" 00:17:27.992 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:27.992 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:27.992 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:27.993 16:25:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:17:27.993 16:25:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:17:27.993 16:25:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:17:27.993 16:25:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:27.993 16:25:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:27.993 16:25:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ $ == \- ]] 00:17:27.993 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '$-'\''9tl9J8PX]J@@KqvJ7zbX' 00:17:28.536 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '?j5S\6[zsHV;hyF:DOT0<:)Z+wgG`Y>@@KqvJ7zbX' nqn.2016-06.io.spdk:cnode27080 00:17:28.793 [2024-09-29 16:25:29.141820] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27080: invalid model number '?j5S\6[zsHV;hyF:DOT0<:)Z+wgG`Y>@@KqvJ7zbX' 00:17:28.793 16:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:17:28.793 { 00:17:28.793 "nqn": "nqn.2016-06.io.spdk:cnode27080", 00:17:28.793 "model_number": "?j5S\\6[zsHV;hyF:DOT0<:)Z+wgG`Y>@@KqvJ7zbX", 00:17:28.793 "method": "nvmf_create_subsystem", 00:17:28.793 "req_id": 1 00:17:28.793 } 00:17:28.793 Got JSON-RPC error response 00:17:28.793 response: 00:17:28.793 { 00:17:28.793 "code": -32602, 00:17:28.793 "message": "Invalid MN ?j5S\\6[zsHV;hyF:DOT0<:)Z+wgG`Y>@@KqvJ7zbX" 00:17:28.793 }' 00:17:28.793 16:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:17:28.793 { 00:17:28.793 "nqn": "nqn.2016-06.io.spdk:cnode27080", 00:17:28.793 "model_number": "?j5S\\6[zsHV;hyF:DOT0<:)Z+wgG`Y>@@KqvJ7zbX", 00:17:28.793 "method": "nvmf_create_subsystem", 00:17:28.793 "req_id": 1 00:17:28.793 } 00:17:28.793 Got JSON-RPC error response 00:17:28.793 response: 00:17:28.793 { 00:17:28.793 "code": -32602, 00:17:28.793 "message": "Invalid MN ?j5S\\6[zsHV;hyF:DOT0<:)Z+wgG`Y>@@KqvJ7zbX" 00:17:28.793 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:28.793 16:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:17:29.050 [2024-09-29 16:25:29.418777] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:29.050 16:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:17:29.308 16:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:17:29.308 16:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:17:29.308 16:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:17:29.308 16:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:17:29.308 16:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:17:29.566 [2024-09-29 16:25:29.954125] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:17:29.566 16:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:17:29.566 { 00:17:29.566 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:29.566 "listen_address": { 00:17:29.566 "trtype": "tcp", 00:17:29.566 "traddr": "", 00:17:29.566 "trsvcid": "4421" 00:17:29.566 }, 00:17:29.566 "method": "nvmf_subsystem_remove_listener", 00:17:29.566 "req_id": 1 00:17:29.566 } 00:17:29.566 Got JSON-RPC error response 00:17:29.566 response: 00:17:29.566 { 00:17:29.566 "code": -32602, 00:17:29.566 "message": "Invalid parameters" 00:17:29.566 }' 00:17:29.566 16:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:17:29.566 { 00:17:29.566 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:29.566 "listen_address": { 00:17:29.566 "trtype": "tcp", 
00:17:29.566 "traddr": "", 00:17:29.566 "trsvcid": "4421" 00:17:29.566 }, 00:17:29.566 "method": "nvmf_subsystem_remove_listener", 00:17:29.566 "req_id": 1 00:17:29.566 } 00:17:29.566 Got JSON-RPC error response 00:17:29.566 response: 00:17:29.566 { 00:17:29.566 "code": -32602, 00:17:29.566 "message": "Invalid parameters" 00:17:29.566 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:17:29.566 16:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13871 -i 0 00:17:29.824 [2024-09-29 16:25:30.247250] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13871: invalid cntlid range [0-65519] 00:17:29.824 16:25:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:17:29.824 { 00:17:29.824 "nqn": "nqn.2016-06.io.spdk:cnode13871", 00:17:29.824 "min_cntlid": 0, 00:17:29.824 "method": "nvmf_create_subsystem", 00:17:29.824 "req_id": 1 00:17:29.824 } 00:17:29.824 Got JSON-RPC error response 00:17:29.824 response: 00:17:29.824 { 00:17:29.824 "code": -32602, 00:17:29.824 "message": "Invalid cntlid range [0-65519]" 00:17:29.824 }' 00:17:29.824 16:25:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:17:29.824 { 00:17:29.824 "nqn": "nqn.2016-06.io.spdk:cnode13871", 00:17:29.824 "min_cntlid": 0, 00:17:29.824 "method": "nvmf_create_subsystem", 00:17:29.824 "req_id": 1 00:17:29.824 } 00:17:29.824 Got JSON-RPC error response 00:17:29.824 response: 00:17:29.824 { 00:17:29.824 "code": -32602, 00:17:29.824 "message": "Invalid cntlid range [0-65519]" 00:17:29.824 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:29.824 16:25:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3158 -i 65520 00:17:30.082 
[2024-09-29 16:25:30.532131] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3158: invalid cntlid range [65520-65519] 00:17:30.082 16:25:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:17:30.082 { 00:17:30.082 "nqn": "nqn.2016-06.io.spdk:cnode3158", 00:17:30.082 "min_cntlid": 65520, 00:17:30.082 "method": "nvmf_create_subsystem", 00:17:30.082 "req_id": 1 00:17:30.082 } 00:17:30.082 Got JSON-RPC error response 00:17:30.082 response: 00:17:30.082 { 00:17:30.082 "code": -32602, 00:17:30.082 "message": "Invalid cntlid range [65520-65519]" 00:17:30.082 }' 00:17:30.082 16:25:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:17:30.082 { 00:17:30.082 "nqn": "nqn.2016-06.io.spdk:cnode3158", 00:17:30.082 "min_cntlid": 65520, 00:17:30.082 "method": "nvmf_create_subsystem", 00:17:30.082 "req_id": 1 00:17:30.082 } 00:17:30.082 Got JSON-RPC error response 00:17:30.082 response: 00:17:30.082 { 00:17:30.082 "code": -32602, 00:17:30.082 "message": "Invalid cntlid range [65520-65519]" 00:17:30.082 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:30.082 16:25:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27787 -I 0 00:17:30.341 [2024-09-29 16:25:30.809032] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27787: invalid cntlid range [1-0] 00:17:30.341 16:25:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:17:30.341 { 00:17:30.341 "nqn": "nqn.2016-06.io.spdk:cnode27787", 00:17:30.341 "max_cntlid": 0, 00:17:30.341 "method": "nvmf_create_subsystem", 00:17:30.341 "req_id": 1 00:17:30.341 } 00:17:30.341 Got JSON-RPC error response 00:17:30.341 response: 00:17:30.341 { 00:17:30.341 "code": -32602, 00:17:30.341 "message": "Invalid cntlid range [1-0]" 
00:17:30.341 }' 00:17:30.341 16:25:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:17:30.341 { 00:17:30.341 "nqn": "nqn.2016-06.io.spdk:cnode27787", 00:17:30.341 "max_cntlid": 0, 00:17:30.341 "method": "nvmf_create_subsystem", 00:17:30.341 "req_id": 1 00:17:30.341 } 00:17:30.341 Got JSON-RPC error response 00:17:30.341 response: 00:17:30.341 { 00:17:30.341 "code": -32602, 00:17:30.341 "message": "Invalid cntlid range [1-0]" 00:17:30.341 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:30.341 16:25:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31095 -I 65520 00:17:30.599 [2024-09-29 16:25:31.086028] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31095: invalid cntlid range [1-65520] 00:17:30.599 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:17:30.599 { 00:17:30.599 "nqn": "nqn.2016-06.io.spdk:cnode31095", 00:17:30.599 "max_cntlid": 65520, 00:17:30.599 "method": "nvmf_create_subsystem", 00:17:30.599 "req_id": 1 00:17:30.599 } 00:17:30.599 Got JSON-RPC error response 00:17:30.599 response: 00:17:30.599 { 00:17:30.599 "code": -32602, 00:17:30.599 "message": "Invalid cntlid range [1-65520]" 00:17:30.599 }' 00:17:30.599 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:17:30.599 { 00:17:30.599 "nqn": "nqn.2016-06.io.spdk:cnode31095", 00:17:30.599 "max_cntlid": 65520, 00:17:30.599 "method": "nvmf_create_subsystem", 00:17:30.599 "req_id": 1 00:17:30.599 } 00:17:30.599 Got JSON-RPC error response 00:17:30.599 response: 00:17:30.599 { 00:17:30.599 "code": -32602, 00:17:30.599 "message": "Invalid cntlid range [1-65520]" 00:17:30.599 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:30.599 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29794 -i 6 -I 5 00:17:30.857 [2024-09-29 16:25:31.371013] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29794: invalid cntlid range [6-5] 00:17:30.857 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:17:30.857 { 00:17:30.857 "nqn": "nqn.2016-06.io.spdk:cnode29794", 00:17:30.857 "min_cntlid": 6, 00:17:30.857 "max_cntlid": 5, 00:17:30.857 "method": "nvmf_create_subsystem", 00:17:30.857 "req_id": 1 00:17:30.857 } 00:17:30.857 Got JSON-RPC error response 00:17:30.857 response: 00:17:30.857 { 00:17:30.857 "code": -32602, 00:17:30.857 "message": "Invalid cntlid range [6-5]" 00:17:30.857 }' 00:17:30.857 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:17:30.857 { 00:17:30.857 "nqn": "nqn.2016-06.io.spdk:cnode29794", 00:17:30.857 "min_cntlid": 6, 00:17:30.857 "max_cntlid": 5, 00:17:30.857 "method": "nvmf_create_subsystem", 00:17:30.857 "req_id": 1 00:17:30.857 } 00:17:30.857 Got JSON-RPC error response 00:17:30.857 response: 00:17:30.857 { 00:17:30.857 "code": -32602, 00:17:30.857 "message": "Invalid cntlid range [6-5]" 00:17:30.857 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:30.857 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:17:31.115 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:17:31.115 { 00:17:31.115 "name": "foobar", 00:17:31.115 "method": "nvmf_delete_target", 00:17:31.115 "req_id": 1 00:17:31.115 } 00:17:31.115 Got JSON-RPC error response 00:17:31.115 response: 00:17:31.115 { 00:17:31.115 "code": -32602, 00:17:31.115 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:17:31.115 }' 00:17:31.115 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:17:31.115 { 00:17:31.115 "name": "foobar", 00:17:31.115 "method": "nvmf_delete_target", 00:17:31.115 "req_id": 1 00:17:31.115 } 00:17:31.115 Got JSON-RPC error response 00:17:31.115 response: 00:17:31.115 { 00:17:31.115 "code": -32602, 00:17:31.115 "message": "The specified target doesn't exist, cannot delete it." 00:17:31.115 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:17:31.115 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:17:31.115 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:17:31.115 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:31.115 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:17:31.115 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:31.115 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:17:31.115 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:31.115 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:31.115 rmmod nvme_tcp 00:17:31.115 rmmod nvme_fabrics 00:17:31.115 rmmod nvme_keyring 00:17:31.115 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:31.115 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:17:31.115 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:17:31.115 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@513 -- # '[' -n 3135441 ']' 00:17:31.115 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@514 -- # killprocess 3135441 00:17:31.115 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 3135441 ']' 00:17:31.115 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 3135441 00:17:31.115 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:17:31.115 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:31.115 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3135441 00:17:31.115 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:31.115 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:31.115 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3135441' 00:17:31.115 killing process with pid 3135441 00:17:31.115 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 3135441 00:17:31.115 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 3135441 00:17:32.490 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:32.490 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:32.490 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:32.490 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:17:32.490 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 -- # iptables-save 00:17:32.490 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:32.490 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@787 -- # iptables-restore 00:17:32.490 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:32.490 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:32.490 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:32.490 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:32.490 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.395 16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:34.395 00:17:34.395 real 0m10.693s 00:17:34.395 user 0m26.581s 00:17:34.395 sys 0m2.656s 00:17:34.395 16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:34.395 16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:34.395 ************************************ 00:17:34.395 END TEST nvmf_invalid 00:17:34.395 ************************************ 00:17:34.653 16:25:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:34.653 16:25:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:34.653 16:25:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:34.653 16:25:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:34.653 ************************************ 00:17:34.653 START TEST nvmf_connect_stress 00:17:34.653 ************************************ 00:17:34.653 16:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:34.653 * Looking for test storage... 00:17:34.653 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:34.653 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:34.653 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:17:34.653 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:34.653 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:34.653 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:34.653 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:34.653 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:34.653 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:34.653 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:34.653 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:34.653 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:34.653 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:34.653 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:34.653 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:34.653 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:17:34.653 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:34.653 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:34.653 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:34.653 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:34.653 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:34.653 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:34.653 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:34.653 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:34.653 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:34.653 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:34.653 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:34.653 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:34.653 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:34.653 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:34.653 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:34.653 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:34.653 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:34.653 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:34.653 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:34.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.653 --rc genhtml_branch_coverage=1 00:17:34.653 --rc genhtml_function_coverage=1 00:17:34.653 --rc genhtml_legend=1 00:17:34.653 --rc geninfo_all_blocks=1 00:17:34.653 --rc geninfo_unexecuted_blocks=1 00:17:34.653 00:17:34.653 ' 00:17:34.653 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:34.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.653 --rc genhtml_branch_coverage=1 00:17:34.654 --rc genhtml_function_coverage=1 00:17:34.654 --rc genhtml_legend=1 00:17:34.654 --rc geninfo_all_blocks=1 00:17:34.654 --rc geninfo_unexecuted_blocks=1 00:17:34.654 00:17:34.654 ' 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:34.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.654 --rc genhtml_branch_coverage=1 00:17:34.654 --rc genhtml_function_coverage=1 00:17:34.654 --rc genhtml_legend=1 00:17:34.654 --rc geninfo_all_blocks=1 00:17:34.654 --rc geninfo_unexecuted_blocks=1 00:17:34.654 00:17:34.654 ' 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:34.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.654 --rc genhtml_branch_coverage=1 00:17:34.654 --rc genhtml_function_coverage=1 00:17:34.654 --rc genhtml_legend=1 00:17:34.654 --rc geninfo_all_blocks=1 00:17:34.654 --rc geninfo_unexecuted_blocks=1 00:17:34.654 00:17:34.654 ' 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:34.654 16:25:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:34.654 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:34.654 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:37.184 16:25:37 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:37.184 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:37.184 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:37.184 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:37.184 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:37.184 16:25:37 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # is_hw=yes 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:37.184 16:25:37 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:37.184 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:37.185 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:37.185 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:37.185 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:37.185 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:37.185 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:37.185 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:37.185 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:37.185 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:37.185 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:37.185 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:37.185 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:37.185 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms 00:17:37.185 00:17:37.185 --- 10.0.0.2 ping statistics --- 00:17:37.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.185 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:17:37.185 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:37.185 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:37.185 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:17:37.185 00:17:37.185 --- 10.0.0.1 ping statistics --- 00:17:37.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.185 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:17:37.185 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:37.185 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # return 0 00:17:37.185 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:37.185 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:37.185 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:37.185 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:37.185 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:37.185 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:37.185 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:37.185 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:37.185 16:25:37 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:37.185 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:37.185 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:37.185 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # nvmfpid=3138348 00:17:37.185 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:37.185 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # waitforlisten 3138348 00:17:37.185 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 3138348 ']' 00:17:37.185 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.185 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:37.185 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.185 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:37.185 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:37.185 [2024-09-29 16:25:37.562115] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:17:37.185 [2024-09-29 16:25:37.562260] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:37.185 [2024-09-29 16:25:37.704324] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:37.443 [2024-09-29 16:25:37.965128] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:37.443 [2024-09-29 16:25:37.965239] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:37.443 [2024-09-29 16:25:37.965265] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:37.443 [2024-09-29 16:25:37.965290] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:37.443 [2024-09-29 16:25:37.965309] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:37.443 [2024-09-29 16:25:37.965442] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:37.443 [2024-09-29 16:25:37.965495] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:37.443 [2024-09-29 16:25:37.965501] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:38.009 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:38.009 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:17:38.009 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:38.009 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:38.009 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:38.009 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:38.009 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:38.009 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.009 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:38.009 [2024-09-29 16:25:38.566407] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:38.267 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 
-- # xtrace_disable 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:38.268 [2024-09-29 16:25:38.586713] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:38.268 NULL1 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3138503 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:38.268 16:25:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3138503 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.268 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:38.526 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.526 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3138503 00:17:38.526 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:38.526 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.526 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:38.785 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.785 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3138503 00:17:38.785 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:38.785 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.785 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:39.352 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.352 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3138503 00:17:39.352 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:39.352 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.352 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:39.609 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.609 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3138503 00:17:39.609 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:39.609 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.609 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:39.867 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.867 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3138503 00:17:39.867 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:39.867 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.867 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:40.124 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.124 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3138503 00:17:40.124 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:40.124 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.124 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:40.382 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.382 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3138503 00:17:40.382 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:40.382 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.383 16:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:40.949 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.949 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3138503 00:17:40.949 16:25:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:40.949 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.949 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:41.207 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.207 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3138503 00:17:41.207 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:41.207 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.207 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:41.465 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.465 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3138503 00:17:41.465 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:41.465 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.465 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:41.723 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.723 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3138503 00:17:41.723 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:41.723 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.723 
16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:41.981 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.981 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3138503 00:17:41.981 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:41.981 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.981 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:42.547 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.547 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3138503 00:17:42.547 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:42.547 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.547 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:42.805 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.805 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3138503 00:17:42.805 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:42.805 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.805 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:43.063 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.063 
16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3138503 00:17:43.063 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:43.063 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.063 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:43.321 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.321 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3138503 00:17:43.321 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:43.321 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.321 16:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:43.887 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.887 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3138503 00:17:43.887 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:43.887 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.887 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:44.145 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.145 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3138503 00:17:44.145 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:17:44.145 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.145 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:44.403 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.403 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3138503 00:17:44.403 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:44.403 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.403 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:44.661 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.661 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3138503 00:17:44.661 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:44.661 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.661 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:44.920 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.920 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3138503 00:17:44.920 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:44.920 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.920 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:17:45.486 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.486 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3138503 00:17:45.486 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:45.486 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.486 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:45.745 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.745 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3138503 00:17:45.745 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:45.745 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.745 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:46.002 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.002 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3138503 00:17:46.002 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:46.002 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.002 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:46.260 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.260 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 3138503 00:17:46.260 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:46.260 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.260 16:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:46.517 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.517 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3138503 00:17:46.517 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:46.517 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.517 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:47.084 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.084 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3138503 00:17:47.084 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.084 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.084 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:47.341 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.341 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3138503 00:17:47.341 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.341 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:47.341 16:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:47.599 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.599 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3138503 00:17:47.599 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.599 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.599 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:47.856 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.856 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3138503 00:17:47.856 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.856 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.856 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:48.419 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.419 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3138503 00:17:48.419 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:48.419 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.419 16:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:48.419 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
00:17:48.679 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.679 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3138503 00:17:48.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3138503) - No such process 00:17:48.679 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3138503 00:17:48.679 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:48.679 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:48.679 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:48.679 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:48.679 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:17:48.679 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:48.679 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:17:48.679 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:48.679 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:48.679 rmmod nvme_tcp 00:17:48.679 rmmod nvme_fabrics 00:17:48.679 rmmod nvme_keyring 00:17:48.679 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:48.679 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:17:48.679 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@129 -- # return 0 00:17:48.679 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@513 -- # '[' -n 3138348 ']' 00:17:48.679 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # killprocess 3138348 00:17:48.679 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 3138348 ']' 00:17:48.679 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 3138348 00:17:48.679 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:17:48.679 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:48.679 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3138348 00:17:48.679 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:48.679 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:48.679 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3138348' 00:17:48.679 killing process with pid 3138348 00:17:48.679 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 3138348 00:17:48.679 16:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 3138348 00:17:50.057 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:50.057 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:50.057 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:50.057 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@297 -- # iptr 00:17:50.057 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # iptables-save 00:17:50.057 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:50.057 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # iptables-restore 00:17:50.057 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:50.057 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:50.057 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.057 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:50.057 16:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:51.961 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:51.961 00:17:51.961 real 0m17.448s 00:17:51.961 user 0m42.921s 00:17:51.961 sys 0m5.891s 00:17:51.961 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:51.961 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.961 ************************************ 00:17:51.961 END TEST nvmf_connect_stress 00:17:51.961 ************************************ 00:17:51.961 16:25:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:51.961 16:25:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:51.961 16:25:52 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:17:51.961 16:25:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:51.961 ************************************ 00:17:51.961 START TEST nvmf_fused_ordering 00:17:51.961 ************************************ 00:17:51.961 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:52.221 * Looking for test storage... 00:17:52.221 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lcov --version 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- scripts/common.sh@338 -- # local 'op=<' 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:17:52.221 16:25:52 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:52.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.221 --rc genhtml_branch_coverage=1 00:17:52.221 --rc genhtml_function_coverage=1 00:17:52.221 --rc genhtml_legend=1 00:17:52.221 --rc geninfo_all_blocks=1 00:17:52.221 --rc geninfo_unexecuted_blocks=1 00:17:52.221 00:17:52.221 ' 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:52.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.221 --rc genhtml_branch_coverage=1 00:17:52.221 --rc genhtml_function_coverage=1 00:17:52.221 --rc genhtml_legend=1 00:17:52.221 --rc geninfo_all_blocks=1 00:17:52.221 --rc geninfo_unexecuted_blocks=1 00:17:52.221 00:17:52.221 ' 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:52.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.221 --rc genhtml_branch_coverage=1 00:17:52.221 --rc genhtml_function_coverage=1 00:17:52.221 --rc genhtml_legend=1 00:17:52.221 --rc geninfo_all_blocks=1 00:17:52.221 --rc geninfo_unexecuted_blocks=1 00:17:52.221 00:17:52.221 ' 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:52.221 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:17:52.221 --rc genhtml_branch_coverage=1 00:17:52.221 --rc genhtml_function_coverage=1 00:17:52.221 --rc genhtml_legend=1 00:17:52.221 --rc geninfo_all_blocks=1 00:17:52.221 --rc geninfo_unexecuted_blocks=1 00:17:52.221 00:17:52.221 ' 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.221 16:25:52 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.221 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.222 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:52.222 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.222 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:17:52.222 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:52.222 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:52.222 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:52.222 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:52.222 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:52.222 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:52.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:52.222 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:52.222 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:52.222 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:52.222 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:17:52.222 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:52.222 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:52.222 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:52.222 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:52.222 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:52.222 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.222 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:52.222 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.222 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:17:52.222 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:17:52.222 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:17:52.222 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:54.124 16:25:54 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:54.124 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:54.124 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:17:54.124 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:17:54.125 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:17:54.125 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:54.125 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:54.125 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:54.125 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:54.125 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:54.125 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:54.125 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:54.125 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:54.125 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:54.125 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:54.125 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:54.125 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:54.125 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:54.125 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:54.125 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:54.125 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:54.125 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:54.125 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:54.125 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:54.125 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:54.125 16:25:54 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:17:54.125 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # is_hw=yes 00:17:54.125 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:17:54.125 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:17:54.125 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:17:54.125 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:54.125 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:54.125 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:54.125 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:54.125 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:54.125 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:54.125 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:54.125 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:54.125 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:54.125 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:54.125 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:54.125 16:25:54 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:54.125 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:54.125 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:54.125 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:54.125 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:54.125 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:54.125 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:54.125 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:54.422 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:54.422 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:54.422 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:54.422 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:54.422 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:54.422 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:17:54.422 00:17:54.422 --- 10.0.0.2 ping statistics --- 00:17:54.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.422 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:17:54.422 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:54.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:54.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:17:54.423 00:17:54.423 --- 10.0.0.1 ping statistics --- 00:17:54.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.423 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:17:54.423 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:54.423 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # return 0 00:17:54.423 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:54.423 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:54.423 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:54.423 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:54.423 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:54.423 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:54.423 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:54.423 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:54.423 16:25:54 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:54.423 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:54.423 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:54.423 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # nvmfpid=3141777 00:17:54.423 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:54.423 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # waitforlisten 3141777 00:17:54.423 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 3141777 ']' 00:17:54.423 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.423 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:54.423 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.423 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:54.423 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:54.423 [2024-09-29 16:25:54.824795] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:17:54.423 [2024-09-29 16:25:54.824930] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:54.708 [2024-09-29 16:25:54.972823] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.708 [2024-09-29 16:25:55.237328] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:54.708 [2024-09-29 16:25:55.237408] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:54.708 [2024-09-29 16:25:55.237443] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:54.708 [2024-09-29 16:25:55.237467] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:54.708 [2024-09-29 16:25:55.237486] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:54.708 [2024-09-29 16:25:55.237543] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:55.274 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:55.274 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:17:55.274 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:55.274 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:55.274 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:55.274 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:55.274 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:55.274 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.274 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:55.274 [2024-09-29 16:25:55.796451] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:55.274 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.274 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:55.274 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.274 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:55.275 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.275 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:55.275 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.275 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:55.275 [2024-09-29 16:25:55.812735] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:55.275 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.275 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:55.275 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.275 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:55.275 NULL1 00:17:55.275 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.275 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:55.275 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.275 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:55.275 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.275 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:55.275 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.275 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:55.533 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.533 16:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:55.533 [2024-09-29 16:25:55.885319] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:17:55.533 [2024-09-29 16:25:55.885414] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3141936 ] 00:17:56.099 Attached to nqn.2016-06.io.spdk:cnode1 00:17:56.099 Namespace ID: 1 size: 1GB 00:17:56.099 fused_ordering(0) 00:17:56.099 fused_ordering(1) 00:17:56.099 fused_ordering(2) 00:17:56.099 fused_ordering(3) 00:17:56.099 fused_ordering(4) 00:17:56.099 fused_ordering(5) 00:17:56.099 fused_ordering(6) 00:17:56.099 fused_ordering(7) 00:17:56.099 fused_ordering(8) 00:17:56.099 fused_ordering(9) 00:17:56.099 fused_ordering(10) 00:17:56.099 fused_ordering(11) 00:17:56.099 fused_ordering(12) 00:17:56.099 fused_ordering(13) 00:17:56.099 fused_ordering(14) 00:17:56.099 fused_ordering(15) 00:17:56.099 fused_ordering(16) 00:17:56.099 fused_ordering(17) 00:17:56.099 fused_ordering(18) 00:17:56.099 fused_ordering(19) 00:17:56.099 fused_ordering(20) 00:17:56.099 fused_ordering(21) 00:17:56.099 fused_ordering(22) 00:17:56.099 fused_ordering(23) 00:17:56.099 fused_ordering(24) 00:17:56.099 fused_ordering(25) 00:17:56.099 fused_ordering(26) 00:17:56.099 fused_ordering(27) 00:17:56.099 
fused_ordering(28) 00:17:56.099 fused_ordering(29) 00:17:56.099 fused_ordering(30) 00:17:56.099 fused_ordering(31) 00:17:56.099 fused_ordering(32) 00:17:56.099 fused_ordering(33) 00:17:56.099 fused_ordering(34) 00:17:56.099 fused_ordering(35) 00:17:56.099 fused_ordering(36) 00:17:56.099 fused_ordering(37) 00:17:56.099 fused_ordering(38) 00:17:56.099 fused_ordering(39) 00:17:56.099 fused_ordering(40) 00:17:56.099 fused_ordering(41) 00:17:56.099 fused_ordering(42) 00:17:56.099 fused_ordering(43) 00:17:56.099 fused_ordering(44) 00:17:56.099 fused_ordering(45) 00:17:56.099 fused_ordering(46) 00:17:56.099 fused_ordering(47) 00:17:56.099 fused_ordering(48) 00:17:56.099 fused_ordering(49) 00:17:56.099 fused_ordering(50) 00:17:56.099 fused_ordering(51) 00:17:56.099 fused_ordering(52) 00:17:56.099 fused_ordering(53) 00:17:56.099 fused_ordering(54) 00:17:56.099 fused_ordering(55) 00:17:56.099 fused_ordering(56) 00:17:56.099 fused_ordering(57) 00:17:56.099 fused_ordering(58) 00:17:56.099 fused_ordering(59) 00:17:56.099 fused_ordering(60) 00:17:56.099 fused_ordering(61) 00:17:56.099 fused_ordering(62) 00:17:56.099 fused_ordering(63) 00:17:56.099 fused_ordering(64) 00:17:56.099 fused_ordering(65) 00:17:56.099 fused_ordering(66) 00:17:56.099 fused_ordering(67) 00:17:56.099 fused_ordering(68) 00:17:56.099 fused_ordering(69) 00:17:56.099 fused_ordering(70) 00:17:56.099 fused_ordering(71) 00:17:56.099 fused_ordering(72) 00:17:56.099 fused_ordering(73) 00:17:56.099 fused_ordering(74) 00:17:56.099 fused_ordering(75) 00:17:56.099 fused_ordering(76) 00:17:56.099 fused_ordering(77) 00:17:56.099 fused_ordering(78) 00:17:56.099 fused_ordering(79) 00:17:56.099 fused_ordering(80) 00:17:56.099 fused_ordering(81) 00:17:56.099 fused_ordering(82) 00:17:56.099 fused_ordering(83) 00:17:56.099 fused_ordering(84) 00:17:56.099 fused_ordering(85) 00:17:56.099 fused_ordering(86) 00:17:56.099 fused_ordering(87) 00:17:56.099 fused_ordering(88) 00:17:56.099 fused_ordering(89) 00:17:56.099 
00:17:56.099 fused_ordering(90) … 00:17:58.799 fused_ordering(997) [repetitive per-iteration fused_ordering output elided; counters 90–997 logged between 00:17:56.099 and 00:17:58.799]
00:17:58.799 fused_ordering(998) … 00:17:58.799 fused_ordering(1023) [repetitive per-iteration fused_ordering output elided; counters 998–1023]
00:17:58.799 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:17:58.799 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:17:58.799 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # nvmfcleanup
00:17:58.799 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync
00:17:58.799 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:17:58.799 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e
00:17:58.799 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20}
00:17:58.799 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:17:58.799 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:17:58.799 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:17:58.799 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e
00:17:58.799 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0
00:17:58.799 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@513 -- # '[' -n 3141777 ']'
00:17:58.799 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # killprocess 3141777
00:17:58.799 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 3141777 ']'
00:17:58.799 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 3141777
00:17:58.799 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname
00:17:58.799 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:17:58.799 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3141777
00:17:58.799 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:17:58.799 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:17:58.799 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3141777'
killing process with pid 3141777
00:17:58.799 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 3141777
00:17:58.799 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 3141777
00:18:00.175 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:18:00.175 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:18:00.175 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:18:00.175 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr
00:18:00.175 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # iptables-save
00:18:00.175 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:18:00.175 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # iptables-restore
00:18:00.175 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:18:00.175 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns
00:18:00.175 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:00.175 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:18:00.175 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:02.079 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:18:02.079
00:18:02.079 real 0m10.127s
00:18:02.079 user 0m8.352s
00:18:02.079 sys 0m3.611s
00:18:02.079 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable
00:18:02.079 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:18:02.079 ************************************
00:18:02.079 END TEST nvmf_fused_ordering
00:18:02.079 ************************************
00:18:02.079 16:26:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:18:02.079 16:26:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:18:02.079 16:26:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:18:02.339 16:26:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:18:02.339 ************************************
00:18:02.339 START TEST nvmf_ns_masking
00:18:02.339 ************************************
00:18:02.339 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp
00:18:02.339 * Looking for test storage...
00:18:02.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:18:02.339 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:18:02.339 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lcov --version
00:18:02.339 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:18:02.339 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:18:02.339 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:18:02.339 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l
00:18:02.339 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l
00:18:02.339 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-:
00:18:02.339 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1
00:18:02.339 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-:
00:18:02.339 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2
00:18:02.339 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<'
00:18:02.339 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2
00:18:02.339 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1
00:18:02.339 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:18:02.339 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in
00:18:02.339 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1
00:18:02.339 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 ))
00:18:02.339 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:18:02.339 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1
00:18:02.339 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1
00:18:02.339 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:18:02.339 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1
00:18:02.339 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1
00:18:02.339 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2
00:18:02.339 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2
00:18:02.339 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:18:02.339 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2
00:18:02.339 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2
00:18:02.339 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking --
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:02.339 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:02.339 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:18:02.339 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:02.339 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:02.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.339 --rc genhtml_branch_coverage=1 00:18:02.339 --rc genhtml_function_coverage=1 00:18:02.339 --rc genhtml_legend=1 00:18:02.339 --rc geninfo_all_blocks=1 00:18:02.339 --rc geninfo_unexecuted_blocks=1 00:18:02.339 00:18:02.339 ' 00:18:02.339 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:02.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.339 --rc genhtml_branch_coverage=1 00:18:02.339 --rc genhtml_function_coverage=1 00:18:02.339 --rc genhtml_legend=1 00:18:02.339 --rc geninfo_all_blocks=1 00:18:02.339 --rc geninfo_unexecuted_blocks=1 00:18:02.339 00:18:02.339 ' 00:18:02.339 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:02.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.339 --rc genhtml_branch_coverage=1 00:18:02.339 --rc genhtml_function_coverage=1 00:18:02.339 --rc genhtml_legend=1 00:18:02.340 --rc geninfo_all_blocks=1 00:18:02.340 --rc geninfo_unexecuted_blocks=1 00:18:02.340 00:18:02.340 ' 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:02.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.340 --rc genhtml_branch_coverage=1 00:18:02.340 --rc 
genhtml_function_coverage=1 00:18:02.340 --rc genhtml_legend=1 00:18:02.340 --rc geninfo_all_blocks=1 00:18:02.340 --rc geninfo_unexecuted_blocks=1 00:18:02.340 00:18:02.340 ' 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:02.340 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=4dbd695d-a7a9-40ce-a680-76e2d62d9997 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=c36870ac-6352-4717-accc-b51a11225545 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=78655fc7-ac48-4c14-a3f4-f06d77bda99f 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@434 -- # local -g 
is_hw=no 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:18:02.340 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:04.873 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:04.873 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:18:04.873 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:04.873 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:04.873 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:04.873 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:04.873 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:04.873 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:18:04.873 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:04.873 16:26:05 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:18:04.873 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:18:04.873 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:18:04.873 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:18:04.873 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:18:04.873 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:18:04.873 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:04.873 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:04.873 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:04.873 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:04.873 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:04.873 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:04.873 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:04.873 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:04.873 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:04.874 16:26:05 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:04.874 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:04.874 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:04.874 16:26:05 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:04.874 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:04.874 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # is_hw=yes 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:04.874 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:04.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:04.874 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:18:04.874 00:18:04.874 --- 10.0.0.2 ping statistics --- 00:18:04.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:04.875 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:18:04.875 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:04.875 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:04.875 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:18:04.875 00:18:04.875 --- 10.0.0.1 ping statistics --- 00:18:04.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:04.875 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:18:04.875 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:04.875 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # return 0 00:18:04.875 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:04.875 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:04.875 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:04.875 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:04.875 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:04.875 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:04.875 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:04.875 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:18:04.875 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:04.875 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:04.875 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:04.875 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # nvmfpid=3144410 00:18:04.875 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # waitforlisten 3144410 
00:18:04.875 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:04.875 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 3144410 ']' 00:18:04.875 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.875 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:04.875 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:04.875 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:04.875 16:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:04.875 [2024-09-29 16:26:05.287304] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:18:04.875 [2024-09-29 16:26:05.287446] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:04.875 [2024-09-29 16:26:05.421416] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.133 [2024-09-29 16:26:05.676320] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:05.133 [2024-09-29 16:26:05.676408] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:05.133 [2024-09-29 16:26:05.676435] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:05.133 [2024-09-29 16:26:05.676458] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:05.133 [2024-09-29 16:26:05.676477] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:05.133 [2024-09-29 16:26:05.676533] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.067 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:06.067 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:18:06.067 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:06.067 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:06.067 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:06.067 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:06.067 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:06.067 [2024-09-29 16:26:06.609272] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:06.325 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:18:06.325 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:18:06.325 16:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:18:06.583 Malloc1 00:18:06.583 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:06.841 Malloc2 00:18:06.841 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:07.099 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:18:07.356 16:26:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:07.613 [2024-09-29 16:26:08.152998] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:07.613 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:18:07.613 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 78655fc7-ac48-4c14-a3f4-f06d77bda99f -a 10.0.0.2 -s 4420 -i 4 00:18:07.871 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:18:07.871 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:18:07.871 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:07.871 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:07.871 16:26:08 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:18:10.398 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:10.398 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:10.398 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:10.398 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:10.398 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:10.398 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:18:10.398 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:10.398 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:10.398 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:10.398 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:10.398 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:18:10.398 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:10.398 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:10.398 [ 0]:0x1 00:18:10.398 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:10.398 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:10.398 
16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8423697d9304430ebff2628c16b124aa 00:18:10.398 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8423697d9304430ebff2628c16b124aa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:10.398 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:18:10.398 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:18:10.398 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:10.398 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:10.398 [ 0]:0x1 00:18:10.398 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:10.398 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:10.398 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8423697d9304430ebff2628c16b124aa 00:18:10.398 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8423697d9304430ebff2628c16b124aa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:10.398 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:18:10.398 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:10.398 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:10.398 [ 1]:0x2 00:18:10.398 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:18:10.398 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:10.398 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=974238f17d6743fb8d19a589ac004f5a 00:18:10.398 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 974238f17d6743fb8d19a589ac004f5a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:10.398 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:18:10.398 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:10.656 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:10.656 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:10.914 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:18:11.171 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:18:11.171 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 78655fc7-ac48-4c14-a3f4-f06d77bda99f -a 10.0.0.2 -s 4420 -i 4 00:18:11.428 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:18:11.428 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:18:11.428 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:11.428 16:26:11 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:18:11.428 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:18:11.428 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:18:13.323 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:13.323 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:13.323 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:13.323 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:13.323 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:13.323 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:18:13.323 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:13.323 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:13.581 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:13.581 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:13.581 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:18:13.581 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:13.581 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 
00:18:13.581 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:13.581 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:13.581 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:13.581 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:13.581 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:13.581 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:13.581 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:13.581 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:13.581 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:13.581 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:13.581 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:13.581 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:13.581 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:13.581 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:13.581 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:13.581 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:18:13.581 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:13.581 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:13.581 [ 0]:0x2 00:18:13.581 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:13.581 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:13.581 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=974238f17d6743fb8d19a589ac004f5a 00:18:13.581 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 974238f17d6743fb8d19a589ac004f5a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:13.581 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:14.146 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:18:14.146 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:14.146 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:14.146 [ 0]:0x1 00:18:14.146 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:14.146 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:14.146 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8423697d9304430ebff2628c16b124aa 00:18:14.146 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8423697d9304430ebff2628c16b124aa != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:14.146 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:18:14.146 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:14.146 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:14.146 [ 1]:0x2 00:18:14.146 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:14.146 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:14.146 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=974238f17d6743fb8d19a589ac004f5a 00:18:14.146 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 974238f17d6743fb8d19a589ac004f5a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:14.146 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:14.403 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:18:14.403 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:14.403 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:14.403 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:14.403 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:14.403 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
ns_is_visible 00:18:14.403 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:14.403 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:14.403 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:14.403 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:14.403 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:14.403 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:14.403 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:14.403 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:14.403 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:14.403 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:14.403 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:14.403 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:14.403 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:18:14.403 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:14.403 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:14.403 [ 0]:0x2 00:18:14.403 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:14.403 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:14.403 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=974238f17d6743fb8d19a589ac004f5a 00:18:14.403 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 974238f17d6743fb8d19a589ac004f5a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:14.403 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:18:14.403 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:14.659 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:14.659 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:14.917 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:18:14.917 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 78655fc7-ac48-4c14-a3f4-f06d77bda99f -a 10.0.0.2 -s 4420 -i 4 00:18:14.917 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:14.917 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:18:14.917 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:14.917 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:18:14.917 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:18:14.917 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:18:17.482 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:17.482 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:17.482 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:17.482 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:18:17.482 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:17.482 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:18:17.482 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:17.482 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:17.482 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:17.482 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:17.482 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:18:17.482 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:17.482 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:17.482 [ 0]:0x1 00:18:17.482 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:17.482 16:26:17 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:17.482 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8423697d9304430ebff2628c16b124aa 00:18:17.482 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8423697d9304430ebff2628c16b124aa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:17.482 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:18:17.482 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:17.482 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:17.482 [ 1]:0x2 00:18:17.482 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:17.482 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:17.482 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=974238f17d6743fb8d19a589ac004f5a 00:18:17.482 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 974238f17d6743fb8d19a589ac004f5a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:17.482 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:17.482 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:18:17.482 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:17.482 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:17.482 
16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:17.482 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:17.482 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:17.482 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:17.482 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:17.482 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:17.482 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:17.482 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:17.482 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:17.482 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:17.482 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:17.482 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:17.482 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:17.482 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:17.482 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:17.483 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:18:17.483 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:17.483 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:17.483 [ 0]:0x2 00:18:17.483 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:17.483 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:17.740 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=974238f17d6743fb8d19a589ac004f5a 00:18:17.740 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 974238f17d6743fb8d19a589ac004f5a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:17.740 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:17.740 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:17.740 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:17.740 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:17.740 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:17.740 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:17.740 16:26:18 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:17.740 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:17.740 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:17.740 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:17.740 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:17.740 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:17.998 [2024-09-29 16:26:18.354252] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:18:17.998 request: 00:18:17.998 { 00:18:17.998 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.998 "nsid": 2, 00:18:17.998 "host": "nqn.2016-06.io.spdk:host1", 00:18:17.998 "method": "nvmf_ns_remove_host", 00:18:17.998 "req_id": 1 00:18:17.998 } 00:18:17.998 Got JSON-RPC error response 00:18:17.998 response: 00:18:17.998 { 00:18:17.998 "code": -32602, 00:18:17.998 "message": "Invalid parameters" 00:18:17.998 } 00:18:17.998 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:17.999 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:17.999 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:17.999 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:17.999 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:18:17.999 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:17.999 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:17.999 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:17.999 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:17.999 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:17.999 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:17.999 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:17.999 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:17.999 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:17.999 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:17.999 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:17.999 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:17.999 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:17.999 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:17.999 16:26:18 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:17.999 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:17.999 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:17.999 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:18:17.999 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:17.999 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:17.999 [ 0]:0x2 00:18:17.999 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:17.999 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:17.999 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=974238f17d6743fb8d19a589ac004f5a 00:18:17.999 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 974238f17d6743fb8d19a589ac004f5a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:17.999 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:18:17.999 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:18.257 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:18.257 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3146154 00:18:18.257 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:18:18.257 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:18:18.257 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3146154 /var/tmp/host.sock 00:18:18.257 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 3146154 ']' 00:18:18.257 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:18:18.257 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:18.257 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:18.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:18.257 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:18.257 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:18.257 [2024-09-29 16:26:18.745937] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:18:18.257 [2024-09-29 16:26:18.746085] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3146154 ] 00:18:18.514 [2024-09-29 16:26:18.870812] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.772 [2024-09-29 16:26:19.119473] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.707 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:19.707 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:18:19.707 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:19.965 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:20.222 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 4dbd695d-a7a9-40ce-a680-76e2d62d9997 00:18:20.222 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@783 -- # tr -d - 00:18:20.223 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 4DBD695DA7A940CEA68076E2D62D9997 -i 00:18:20.480 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid c36870ac-6352-4717-accc-b51a11225545 00:18:20.480 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@783 -- # tr -d - 00:18:20.480 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g C36870AC63524717ACCCB51A11225545 -i 00:18:20.738 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:20.996 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:18:21.563 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:21.563 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:21.821 nvme0n1 00:18:21.821 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:21.821 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:22.386 nvme1n2 00:18:22.387 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:18:22.387 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:22.387 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:18:22.387 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:18:22.387 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:18:22.645 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:18:22.645 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:18:22.645 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:18:22.645 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:18:22.906 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 4dbd695d-a7a9-40ce-a680-76e2d62d9997 == \4\d\b\d\6\9\5\d\-\a\7\a\9\-\4\0\c\e\-\a\6\8\0\-\7\6\e\2\d\6\2\d\9\9\9\7 ]] 00:18:22.906 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:18:22.906 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:18:22.906 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:18:23.164 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ c36870ac-6352-4717-accc-b51a11225545 == \c\3\6\8\7\0\a\c\-\6\3\5\2\-\4\7\1\7\-\a\c\c\c\-\b\5\1\a\1\1\2\2\5\5\4\5 ]] 00:18:23.164 16:26:23 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 3146154 00:18:23.164 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 3146154 ']' 00:18:23.164 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 3146154 00:18:23.164 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:18:23.164 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:23.164 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3146154 00:18:23.423 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:23.423 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:23.423 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3146154' 00:18:23.423 killing process with pid 3146154 00:18:23.423 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 3146154 00:18:23.423 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 3146154 00:18:25.953 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:25.953 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:18:25.953 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:18:25.953 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:25.953 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- nvmf/common.sh@121 -- # sync 00:18:25.953 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:25.953 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:18:25.953 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:25.953 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:25.953 rmmod nvme_tcp 00:18:25.953 rmmod nvme_fabrics 00:18:25.953 rmmod nvme_keyring 00:18:25.953 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:25.953 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:18:25.953 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:18:25.953 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@513 -- # '[' -n 3144410 ']' 00:18:25.953 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # killprocess 3144410 00:18:25.953 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 3144410 ']' 00:18:25.953 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 3144410 00:18:25.953 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:18:25.953 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:25.953 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3144410 00:18:26.211 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:26.211 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:26.212 16:26:26 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3144410' 00:18:26.212 killing process with pid 3144410 00:18:26.212 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 3144410 00:18:26.212 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 3144410 00:18:28.116 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:28.116 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:28.116 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:28.116 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:18:28.116 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # iptables-save 00:18:28.116 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:28.116 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # iptables-restore 00:18:28.116 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:28.116 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:28.116 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:28.116 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:28.116 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:30.026 00:18:30.026 real 0m27.556s 00:18:30.026 user 0m38.098s 00:18:30.026 sys 
0m4.765s 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:30.026 ************************************ 00:18:30.026 END TEST nvmf_ns_masking 00:18:30.026 ************************************ 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:30.026 ************************************ 00:18:30.026 START TEST nvmf_nvme_cli 00:18:30.026 ************************************ 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:30.026 * Looking for test storage... 
00:18:30.026 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lcov --version 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:18:30.026 16:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:30.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.026 --rc 
genhtml_branch_coverage=1 00:18:30.026 --rc genhtml_function_coverage=1 00:18:30.026 --rc genhtml_legend=1 00:18:30.026 --rc geninfo_all_blocks=1 00:18:30.026 --rc geninfo_unexecuted_blocks=1 00:18:30.026 00:18:30.026 ' 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:30.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.026 --rc genhtml_branch_coverage=1 00:18:30.026 --rc genhtml_function_coverage=1 00:18:30.026 --rc genhtml_legend=1 00:18:30.026 --rc geninfo_all_blocks=1 00:18:30.026 --rc geninfo_unexecuted_blocks=1 00:18:30.026 00:18:30.026 ' 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:30.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.026 --rc genhtml_branch_coverage=1 00:18:30.026 --rc genhtml_function_coverage=1 00:18:30.026 --rc genhtml_legend=1 00:18:30.026 --rc geninfo_all_blocks=1 00:18:30.026 --rc geninfo_unexecuted_blocks=1 00:18:30.026 00:18:30.026 ' 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:30.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.026 --rc genhtml_branch_coverage=1 00:18:30.026 --rc genhtml_function_coverage=1 00:18:30.026 --rc genhtml_legend=1 00:18:30.026 --rc geninfo_all_blocks=1 00:18:30.026 --rc geninfo_unexecuted_blocks=1 00:18:30.026 00:18:30.026 ' 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:30.026 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:30.027 16:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:18:30.027 16:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:30.027 16:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:30.027 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:18:30.027 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:31.945 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:31.945 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:18:31.945 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:31.945 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:31.945 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:31.945 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:31.945 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:31.945 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:18:31.946 16:26:32 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 
00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:31.946 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:31.946 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:31.946 16:26:32 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:31.946 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:31.946 16:26:32 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:31.946 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # is_hw=yes 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 
00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:31.946 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:31.946 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.373 ms 00:18:31.946 00:18:31.946 --- 10.0.0.2 ping statistics --- 00:18:31.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.946 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:18:31.946 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:31.946 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:31.946 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:18:31.946 00:18:31.946 --- 10.0.0.1 ping statistics --- 00:18:31.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.947 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:18:31.947 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:31.947 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # return 0 00:18:31.947 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:31.947 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:31.947 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:31.947 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:31.947 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:31.947 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:31.947 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:31.947 16:26:32 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:31.947 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:31.947 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:31.947 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:31.947 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # nvmfpid=3149182 00:18:31.947 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:31.947 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # waitforlisten 3149182 00:18:31.947 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 3149182 ']' 00:18:31.947 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.947 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:31.947 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.947 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:31.947 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:32.232 [2024-09-29 16:26:32.576220] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:18:32.232 [2024-09-29 16:26:32.576384] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:32.232 [2024-09-29 16:26:32.724897] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:32.494 [2024-09-29 16:26:32.993916] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:32.494 [2024-09-29 16:26:32.994003] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:32.494 [2024-09-29 16:26:32.994028] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:32.494 [2024-09-29 16:26:32.994053] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:32.494 [2024-09-29 16:26:32.994072] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:32.494 [2024-09-29 16:26:32.994213] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:32.494 [2024-09-29 16:26:32.994270] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:32.494 [2024-09-29 16:26:32.994491] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.494 [2024-09-29 16:26:32.994495] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:33.060 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:33.060 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:18:33.060 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:33.060 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:33.060 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:33.060 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:33.060 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:33.060 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.060 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:33.060 [2024-09-29 16:26:33.578591] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:33.060 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.060 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:33.060 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:33.060 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:33.319 Malloc0 00:18:33.319 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.319 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:33.319 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.319 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:33.319 Malloc1 00:18:33.319 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.319 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:33.319 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.319 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:33.319 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.319 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:33.319 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.319 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:33.319 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.319 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:33.319 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.319 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:33.319 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.319 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:33.319 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.319 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:33.319 [2024-09-29 16:26:33.764617] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:33.319 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.319 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:33.319 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.319 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:33.319 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.319 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:18:33.578 00:18:33.578 Discovery Log Number of Records 2, Generation counter 2 00:18:33.578 =====Discovery Log Entry 0====== 00:18:33.578 trtype: tcp 00:18:33.578 adrfam: ipv4 00:18:33.578 subtype: current discovery subsystem 00:18:33.578 treq: not required 00:18:33.578 portid: 0 00:18:33.578 trsvcid: 4420 
00:18:33.578 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:33.578 traddr: 10.0.0.2 00:18:33.578 eflags: explicit discovery connections, duplicate discovery information 00:18:33.578 sectype: none 00:18:33.578 =====Discovery Log Entry 1====== 00:18:33.578 trtype: tcp 00:18:33.578 adrfam: ipv4 00:18:33.578 subtype: nvme subsystem 00:18:33.578 treq: not required 00:18:33.578 portid: 0 00:18:33.578 trsvcid: 4420 00:18:33.578 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:33.578 traddr: 10.0.0.2 00:18:33.578 eflags: none 00:18:33.578 sectype: none 00:18:33.578 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:18:33.578 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:18:33.578 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:18:33.578 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:33.578 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:18:33.578 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:18:33.578 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:33.578 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ --------------------- == /dev/nvme* ]] 00:18:33.578 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:33.578 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:18:33.578 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:34.145 16:26:34 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:34.145 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:18:34.145 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:34.145 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:18:34.145 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:18:34.145 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:18:36.675 
16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ --------------------- == /dev/nvme* ]] 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n1 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n2 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:18:36.675 /dev/nvme0n2 ]] 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ 
--------------------- == /dev/nvme* ]] 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n1 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n2 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:36.675 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # 
return 0 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:36.675 rmmod nvme_tcp 00:18:36.675 rmmod nvme_fabrics 00:18:36.675 rmmod nvme_keyring 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@513 -- # '[' -n 3149182 ']' 
00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # killprocess 3149182 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 3149182 ']' 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 3149182 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3149182 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:36.675 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3149182' 00:18:36.675 killing process with pid 3149182 00:18:36.676 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 3149182 00:18:36.676 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 3149182 00:18:38.048 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:38.048 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:38.048 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:38.048 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:18:38.048 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@787 -- # iptables-save 00:18:38.048 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@787 -- # 
iptables-restore 00:18:38.048 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:38.048 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:38.048 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:38.048 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:38.048 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:38.048 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.588 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:40.588 00:18:40.588 real 0m10.303s 00:18:40.588 user 0m21.165s 00:18:40.588 sys 0m2.388s 00:18:40.588 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:40.588 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:40.588 ************************************ 00:18:40.588 END TEST nvmf_nvme_cli 00:18:40.588 ************************************ 00:18:40.588 16:26:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:18:40.588 16:26:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:40.588 16:26:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:40.588 16:26:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:40.588 16:26:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:40.588 ************************************ 00:18:40.588 
START TEST nvmf_auth_target 00:18:40.588 ************************************ 00:18:40.588 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:40.588 * Looking for test storage... 00:18:40.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:40.588 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:40.588 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:18:40.588 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:40.588 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:40.588 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:40.588 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:40.588 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:40.588 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:18:40.588 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:18:40.588 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:18:40.588 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:18:40.588 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:18:40.588 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:18:40.588 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 
00:18:40.588 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 
00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:40.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.589 --rc genhtml_branch_coverage=1 00:18:40.589 --rc genhtml_function_coverage=1 00:18:40.589 --rc genhtml_legend=1 00:18:40.589 --rc geninfo_all_blocks=1 00:18:40.589 --rc geninfo_unexecuted_blocks=1 00:18:40.589 00:18:40.589 ' 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:40.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.589 --rc genhtml_branch_coverage=1 00:18:40.589 --rc genhtml_function_coverage=1 00:18:40.589 --rc genhtml_legend=1 00:18:40.589 --rc geninfo_all_blocks=1 00:18:40.589 --rc geninfo_unexecuted_blocks=1 00:18:40.589 00:18:40.589 ' 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:40.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.589 --rc genhtml_branch_coverage=1 00:18:40.589 --rc genhtml_function_coverage=1 00:18:40.589 --rc genhtml_legend=1 00:18:40.589 --rc geninfo_all_blocks=1 00:18:40.589 --rc geninfo_unexecuted_blocks=1 00:18:40.589 00:18:40.589 ' 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:40.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.589 --rc genhtml_branch_coverage=1 00:18:40.589 --rc genhtml_function_coverage=1 00:18:40.589 --rc genhtml_legend=1 00:18:40.589 --rc geninfo_all_blocks=1 00:18:40.589 --rc geninfo_unexecuted_blocks=1 00:18:40.589 00:18:40.589 ' 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:40.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:40.589 16:26:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.589 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:18:40.590 16:26:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:18:40.590 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:18:40.590 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:18:42.493 16:26:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:42.493 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:42.493 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:42.493 16:26:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:42.493 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 
00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:42.493 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # is_hw=yes 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:42.493 16:26:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:42.493 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:42.493 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:42.493 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:42.493 16:26:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:42.493 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:42.493 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:42.493 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:18:42.493 00:18:42.493 --- 10.0.0.2 ping statistics --- 00:18:42.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.493 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:18:42.493 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:42.493 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:42.493 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:18:42.493 00:18:42.493 --- 10.0.0.1 ping statistics --- 00:18:42.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.493 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:18:42.494 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:42.494 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # return 0 00:18:42.494 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:42.494 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:42.494 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:42.494 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:42.494 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:42.494 16:26:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:42.494 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:42.752 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:18:42.752 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:42.752 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:42.752 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.752 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=3151917 00:18:42.752 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:42.752 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 3151917 00:18:42.752 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3151917 ']' 00:18:42.752 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.752 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:42.752 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:42.752 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:42.752 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.686 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:43.686 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:43.686 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:43.686 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:43.686 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.686 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:43.945 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3152098 00:18:43.945 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:43.945 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:43.945 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:18:43.945 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:18:43.945 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:43.945 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:18:43.945 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@750 -- # digest=null 00:18:43.945 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:18:43.945 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:43.945 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=816179b2b89418fa5a134fd216bd47baf5be30c0b52a0045 00:18:43.945 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:18:43.945 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.dHI 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 816179b2b89418fa5a134fd216bd47baf5be30c0b52a0045 0 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 816179b2b89418fa5a134fd216bd47baf5be30c0b52a0045 0 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=816179b2b89418fa5a134fd216bd47baf5be30c0b52a0045 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=0 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.dHI 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.dHI 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.dHI 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
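The transcript above shows `gen_dhchap_key` building a DH-CHAP secret: `xxd -p -c0 -l 24 /dev/urandom` yields 48 hex characters, and `format_dhchap_key` (via the inline `python -` step) wraps them as `DHHC-1:<digest>:<base64>:`. A minimal Python sketch of that flow is below. It assumes the base64 payload is the ASCII key followed by its CRC-32 in little-endian byte order, which is consistent with the 72-character secrets visible later in this log (e.g. `DHHC-1:00:ODE2...QODthw==:`), but the function names and exact layout are illustrative, not SPDK's actual implementation.

```python
import base64
import binascii
import os


def format_dhchap_key(key: str, digest_id: int) -> str:
    """Sketch of the DHHC-1 secret representation: the ASCII key text plus
    its little-endian CRC-32, base64-encoded (an assumption based on the
    secrets observed in this log)."""
    raw = key.encode("ascii")
    crc = binascii.crc32(raw).to_bytes(4, "little")
    return f"DHHC-1:{digest_id:02d}:{base64.b64encode(raw + crc).decode()}:"


def gen_dhchap_key(length: int, digest_id: int = 0) -> str:
    # Mirrors `xxd -p -c0 -l <length/2> /dev/urandom`: `length` hex chars.
    hex_key = os.urandom(length // 2).hex()
    return format_dhchap_key(hex_key, digest_id)
```

With a 48-character key and digest id 0 (null), this produces a secret shaped like the `DHHC-1:00:...:` values the test later feeds to `--dhchap-secret`.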
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=5a137f89ac7ab66d71cc22eea653d804703601b032887ca1d4c74594d3620e80 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.ddv 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 5a137f89ac7ab66d71cc22eea653d804703601b032887ca1d4c74594d3620e80 3 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 5a137f89ac7ab66d71cc22eea653d804703601b032887ca1d4c74594d3620e80 3 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=5a137f89ac7ab66d71cc22eea653d804703601b032887ca1d4c74594d3620e80 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@728 -- # digest=3 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.ddv 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.ddv 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.ddv 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=9414c8c929628d1959b7d4353090719c 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.dh6 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 9414c8c929628d1959b7d4353090719c 1 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 
9414c8c929628d1959b7d4353090719c 1 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=9414c8c929628d1959b7d4353090719c 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.dh6 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.dh6 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.dh6 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=f96b5ce28ca8972dba66d5a3db130f35d547ddf446707ed3 00:18:43.946 16:26:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.y0V 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key f96b5ce28ca8972dba66d5a3db130f35d547ddf446707ed3 2 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 f96b5ce28ca8972dba66d5a3db130f35d547ddf446707ed3 2 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=f96b5ce28ca8972dba66d5a3db130f35d547ddf446707ed3 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.y0V 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.y0V 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.y0V 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A 
digests 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=ed536b3193b95d76c661aaeeb2d14f6db799436309fd7a3d 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.bxT 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key ed536b3193b95d76c661aaeeb2d14f6db799436309fd7a3d 2 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 ed536b3193b95d76c661aaeeb2d14f6db799436309fd7a3d 2 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=ed536b3193b95d76c661aaeeb2d14f6db799436309fd7a3d 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.bxT 00:18:43.946 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.bxT 00:18:43.947 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.bxT 00:18:43.947 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:18:43.947 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:18:43.947 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:43.947 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:18:43.947 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:18:43.947 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:18:43.947 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:43.947 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=7a38c789944305fce4a2150d622e6036 00:18:43.947 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:18:43.947 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.L5D 00:18:43.947 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 7a38c789944305fce4a2150d622e6036 1 00:18:43.947 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 7a38c789944305fce4a2150d622e6036 1 00:18:43.947 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:18:43.947 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:18:43.947 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=7a38c789944305fce4a2150d622e6036 00:18:43.947 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 
00:18:43.947 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:18:44.206 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.L5D 00:18:44.206 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.L5D 00:18:44.206 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.L5D 00:18:44.206 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:18:44.206 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:18:44.206 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:44.206 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:18:44.206 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:18:44.206 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:18:44.206 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:44.206 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=bd32d236e4a194adf16c5af57d79ab901852bc7c1c3522403ed662a3bc80c8d7 00:18:44.206 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:18:44.206 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.L7P 00:18:44.206 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key bd32d236e4a194adf16c5af57d79ab901852bc7c1c3522403ed662a3bc80c8d7 3 00:18:44.206 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # 
format_key DHHC-1 bd32d236e4a194adf16c5af57d79ab901852bc7c1c3522403ed662a3bc80c8d7 3 00:18:44.206 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:18:44.206 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:18:44.206 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=bd32d236e4a194adf16c5af57d79ab901852bc7c1c3522403ed662a3bc80c8d7 00:18:44.206 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:18:44.206 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:18:44.206 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.L7P 00:18:44.206 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.L7P 00:18:44.206 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.L7P 00:18:44.206 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:18:44.206 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3151917 00:18:44.206 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3151917 ']' 00:18:44.206 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.206 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:44.206 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:44.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:44.206 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:44.206 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.466 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:44.466 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:44.466 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3152098 /var/tmp/host.sock 00:18:44.466 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3152098 ']' 00:18:44.466 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:18:44.466 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:44.466 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:44.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:18:44.466 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:44.466 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.401 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:45.401 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:45.401 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:18:45.401 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.401 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.401 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.401 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:45.401 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.dHI 00:18:45.401 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.401 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.401 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.401 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.dHI 00:18:45.401 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.dHI 00:18:45.658 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.ddv ]] 00:18:45.658 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ddv 00:18:45.658 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.658 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.658 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.658 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ddv 00:18:45.658 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ddv 00:18:45.917 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:45.917 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.dh6 00:18:45.917 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.917 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.917 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.917 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.dh6 00:18:45.917 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.dh6 00:18:46.175 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.y0V ]] 00:18:46.175 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.y0V 00:18:46.175 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.175 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.175 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.175 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.y0V 00:18:46.175 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.y0V 00:18:46.433 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:46.433 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.bxT 00:18:46.433 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.433 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.433 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.433 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.bxT 00:18:46.434 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.bxT 00:18:46.691 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.L5D ]] 00:18:46.691 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.L5D 00:18:46.691 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.691 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.691 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.691 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.L5D 00:18:46.691 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.L5D 00:18:46.950 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:46.950 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.L7P 00:18:46.950 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.950 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.950 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.950 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.L7P 00:18:46.950 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.L7P 00:18:47.208 16:26:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:18:47.208 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:47.208 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:47.208 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:47.208 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:47.208 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:47.466 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:18:47.466 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:47.466 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:47.466 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:47.466 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:47.466 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.466 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.466 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.466 16:26:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.466 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.466 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.466 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.466 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.032 00:18:48.032 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:48.032 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:48.032 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.290 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.290 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.290 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.290 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:48.290 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.290 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:48.290 { 00:18:48.290 "cntlid": 1, 00:18:48.290 "qid": 0, 00:18:48.290 "state": "enabled", 00:18:48.290 "thread": "nvmf_tgt_poll_group_000", 00:18:48.290 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:48.290 "listen_address": { 00:18:48.290 "trtype": "TCP", 00:18:48.290 "adrfam": "IPv4", 00:18:48.290 "traddr": "10.0.0.2", 00:18:48.290 "trsvcid": "4420" 00:18:48.290 }, 00:18:48.290 "peer_address": { 00:18:48.290 "trtype": "TCP", 00:18:48.290 "adrfam": "IPv4", 00:18:48.290 "traddr": "10.0.0.1", 00:18:48.290 "trsvcid": "60146" 00:18:48.290 }, 00:18:48.290 "auth": { 00:18:48.290 "state": "completed", 00:18:48.290 "digest": "sha256", 00:18:48.290 "dhgroup": "null" 00:18:48.290 } 00:18:48.290 } 00:18:48.290 ]' 00:18:48.290 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:48.290 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:48.290 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:48.290 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:48.290 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:48.290 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.290 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.290 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.548 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODE2MTc5YjJiODk0MThmYTVhMTM0ZmQyMTZiZDQ3YmFmNWJlMzBjMGI1MmEwMDQ1QODthw==: --dhchap-ctrl-secret DHHC-1:03:NWExMzdmODlhYzdhYjY2ZDcxY2MyMmVlYTY1M2Q4MDQ3MDM2MDFiMDMyODg3Y2ExZDRjNzQ1OTRkMzYyMGU4MD8roSo=: 00:18:48.548 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ODE2MTc5YjJiODk0MThmYTVhMTM0ZmQyMTZiZDQ3YmFmNWJlMzBjMGI1MmEwMDQ1QODthw==: --dhchap-ctrl-secret DHHC-1:03:NWExMzdmODlhYzdhYjY2ZDcxY2MyMmVlYTY1M2Q4MDQ3MDM2MDFiMDMyODg3Y2ExZDRjNzQ1OTRkMzYyMGU4MD8roSo=: 00:18:49.480 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.480 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:49.480 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.481 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.481 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.481 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:49.481 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:18:49.481 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:49.739 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:18:49.739 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:49.739 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:49.739 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:49.739 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:49.739 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.739 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.739 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.739 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.739 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.739 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.739 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.739 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.305 00:18:50.305 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:50.305 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:50.305 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.564 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.564 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.564 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.564 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.564 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.564 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:50.564 { 00:18:50.564 "cntlid": 3, 00:18:50.564 "qid": 0, 00:18:50.564 "state": "enabled", 00:18:50.564 "thread": "nvmf_tgt_poll_group_000", 00:18:50.564 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:50.564 "listen_address": { 00:18:50.564 "trtype": "TCP", 00:18:50.564 "adrfam": "IPv4", 00:18:50.564 
"traddr": "10.0.0.2", 00:18:50.564 "trsvcid": "4420" 00:18:50.564 }, 00:18:50.564 "peer_address": { 00:18:50.564 "trtype": "TCP", 00:18:50.564 "adrfam": "IPv4", 00:18:50.564 "traddr": "10.0.0.1", 00:18:50.564 "trsvcid": "60170" 00:18:50.564 }, 00:18:50.564 "auth": { 00:18:50.564 "state": "completed", 00:18:50.564 "digest": "sha256", 00:18:50.564 "dhgroup": "null" 00:18:50.564 } 00:18:50.564 } 00:18:50.564 ]' 00:18:50.564 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:50.564 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:50.564 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:50.564 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:50.564 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:50.564 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.564 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.564 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.822 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTQxNGM4YzkyOTYyOGQxOTU5YjdkNDM1MzA5MDcxOWPUhFu+: --dhchap-ctrl-secret DHHC-1:02:Zjk2YjVjZTI4Y2E4OTcyZGJhNjZkNWEzZGIxMzBmMzVkNTQ3ZGRmNDQ2NzA3ZWQzUSvQ0g==: 00:18:50.822 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTQxNGM4YzkyOTYyOGQxOTU5YjdkNDM1MzA5MDcxOWPUhFu+: --dhchap-ctrl-secret DHHC-1:02:Zjk2YjVjZTI4Y2E4OTcyZGJhNjZkNWEzZGIxMzBmMzVkNTQ3ZGRmNDQ2NzA3ZWQzUSvQ0g==: 00:18:51.756 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.756 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.756 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:51.756 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.756 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.756 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.756 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:51.756 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:51.756 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:52.014 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:18:52.014 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:52.014 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:52.014 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:18:52.014 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:52.014 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.014 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.014 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.014 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.014 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.014 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.014 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.014 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.581 00:18:52.581 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:52.581 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:52.581 
16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.839 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.839 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.839 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.839 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.839 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.839 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:52.839 { 00:18:52.839 "cntlid": 5, 00:18:52.839 "qid": 0, 00:18:52.839 "state": "enabled", 00:18:52.839 "thread": "nvmf_tgt_poll_group_000", 00:18:52.839 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:52.839 "listen_address": { 00:18:52.839 "trtype": "TCP", 00:18:52.839 "adrfam": "IPv4", 00:18:52.839 "traddr": "10.0.0.2", 00:18:52.839 "trsvcid": "4420" 00:18:52.839 }, 00:18:52.839 "peer_address": { 00:18:52.839 "trtype": "TCP", 00:18:52.839 "adrfam": "IPv4", 00:18:52.839 "traddr": "10.0.0.1", 00:18:52.839 "trsvcid": "60206" 00:18:52.839 }, 00:18:52.839 "auth": { 00:18:52.839 "state": "completed", 00:18:52.840 "digest": "sha256", 00:18:52.840 "dhgroup": "null" 00:18:52.840 } 00:18:52.840 } 00:18:52.840 ]' 00:18:52.840 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:52.840 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:52.840 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:18:52.840 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:52.840 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:52.840 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.840 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.840 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.098 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWQ1MzZiMzE5M2I5NWQ3NmM2NjFhYWVlYjJkMTRmNmRiNzk5NDM2MzA5ZmQ3YTNk3WThYw==: --dhchap-ctrl-secret DHHC-1:01:N2EzOGM3ODk5NDQzMDVmY2U0YTIxNTBkNjIyZTYwMza52AGC: 00:18:53.098 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ZWQ1MzZiMzE5M2I5NWQ3NmM2NjFhYWVlYjJkMTRmNmRiNzk5NDM2MzA5ZmQ3YTNk3WThYw==: --dhchap-ctrl-secret DHHC-1:01:N2EzOGM3ODk5NDQzMDVmY2U0YTIxNTBkNjIyZTYwMza52AGC: 00:18:54.032 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.032 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:54.032 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.032 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.032 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.032 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:54.032 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:54.032 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:54.290 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:18:54.290 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:54.290 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:54.290 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:54.291 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:54.291 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.291 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:54.291 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.291 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:54.291 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.291 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:54.291 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:54.291 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:54.857 00:18:54.857 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:54.857 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.857 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:54.857 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.857 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.857 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.857 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.115 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.115 
16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:55.115 { 00:18:55.115 "cntlid": 7, 00:18:55.115 "qid": 0, 00:18:55.115 "state": "enabled", 00:18:55.115 "thread": "nvmf_tgt_poll_group_000", 00:18:55.115 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:55.115 "listen_address": { 00:18:55.115 "trtype": "TCP", 00:18:55.115 "adrfam": "IPv4", 00:18:55.115 "traddr": "10.0.0.2", 00:18:55.115 "trsvcid": "4420" 00:18:55.115 }, 00:18:55.115 "peer_address": { 00:18:55.115 "trtype": "TCP", 00:18:55.115 "adrfam": "IPv4", 00:18:55.115 "traddr": "10.0.0.1", 00:18:55.115 "trsvcid": "60244" 00:18:55.115 }, 00:18:55.115 "auth": { 00:18:55.115 "state": "completed", 00:18:55.115 "digest": "sha256", 00:18:55.115 "dhgroup": "null" 00:18:55.115 } 00:18:55.115 } 00:18:55.115 ]' 00:18:55.115 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:55.115 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:55.115 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:55.115 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:55.115 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:55.115 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.115 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.115 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.373 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmQzMmQyMzZlNGExOTRhZGYxNmM1YWY1N2Q3OWFiOTAxODUyYmM3YzFjMzUyMjQwM2VkNjYyYTNiYzgwYzhkNwO/ryk=: 00:18:55.373 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YmQzMmQyMzZlNGExOTRhZGYxNmM1YWY1N2Q3OWFiOTAxODUyYmM3YzFjMzUyMjQwM2VkNjYyYTNiYzgwYzhkNwO/ryk=: 00:18:56.306 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.306 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:56.306 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.306 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.306 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.306 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:56.306 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:56.306 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:56.306 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:18:56.871 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:18:56.871 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:56.871 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:56.871 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:56.871 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:56.871 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.871 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.871 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.871 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.871 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.871 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.871 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.871 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.130 00:18:57.130 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:57.130 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.130 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:57.388 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.388 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.388 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.388 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.388 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.388 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:57.388 { 00:18:57.388 "cntlid": 9, 00:18:57.388 "qid": 0, 00:18:57.388 "state": "enabled", 00:18:57.388 "thread": "nvmf_tgt_poll_group_000", 00:18:57.388 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:57.388 "listen_address": { 00:18:57.388 "trtype": "TCP", 00:18:57.388 "adrfam": "IPv4", 00:18:57.388 "traddr": "10.0.0.2", 00:18:57.388 "trsvcid": "4420" 00:18:57.388 }, 00:18:57.388 "peer_address": { 00:18:57.388 "trtype": "TCP", 00:18:57.388 "adrfam": "IPv4", 00:18:57.388 "traddr": "10.0.0.1", 00:18:57.388 "trsvcid": "44436" 00:18:57.388 
}, 00:18:57.388 "auth": { 00:18:57.388 "state": "completed", 00:18:57.388 "digest": "sha256", 00:18:57.388 "dhgroup": "ffdhe2048" 00:18:57.388 } 00:18:57.388 } 00:18:57.388 ]' 00:18:57.388 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:57.388 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:57.388 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:57.388 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:57.388 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:57.388 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.388 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.388 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.647 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODE2MTc5YjJiODk0MThmYTVhMTM0ZmQyMTZiZDQ3YmFmNWJlMzBjMGI1MmEwMDQ1QODthw==: --dhchap-ctrl-secret DHHC-1:03:NWExMzdmODlhYzdhYjY2ZDcxY2MyMmVlYTY1M2Q4MDQ3MDM2MDFiMDMyODg3Y2ExZDRjNzQ1OTRkMzYyMGU4MD8roSo=: 00:18:57.647 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ODE2MTc5YjJiODk0MThmYTVhMTM0ZmQyMTZiZDQ3YmFmNWJlMzBjMGI1MmEwMDQ1QODthw==: --dhchap-ctrl-secret 
DHHC-1:03:NWExMzdmODlhYzdhYjY2ZDcxY2MyMmVlYTY1M2Q4MDQ3MDM2MDFiMDMyODg3Y2ExZDRjNzQ1OTRkMzYyMGU4MD8roSo=:
00:18:58.580 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:58.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:58.580 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:18:58.580 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:58.580 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:58.580 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:58.580 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:58.580 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:18:58.580 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:18:59.145 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1
00:18:59.145 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:59.145 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:18:59.145 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:18:59.145 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:18:59.145 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:59.145 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:59.145 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:59.145 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:59.145 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:59.145 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:59.145 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:59.145 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:59.403
00:18:59.403 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:59.403 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:59.403 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:59.662 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:59.662 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:59.662 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:59.662 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:59.662 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:59.662 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:59.662 {
00:18:59.662 "cntlid": 11,
00:18:59.662 "qid": 0,
00:18:59.662 "state": "enabled",
00:18:59.662 "thread": "nvmf_tgt_poll_group_000",
00:18:59.662 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:18:59.662 "listen_address": {
00:18:59.662 "trtype": "TCP",
00:18:59.662 "adrfam": "IPv4",
00:18:59.662 "traddr": "10.0.0.2",
00:18:59.662 "trsvcid": "4420"
00:18:59.662 },
00:18:59.662 "peer_address": {
00:18:59.662 "trtype": "TCP",
00:18:59.662 "adrfam": "IPv4",
00:18:59.662 "traddr": "10.0.0.1",
00:18:59.662 "trsvcid": "44452"
00:18:59.662 },
00:18:59.662 "auth": {
00:18:59.662 "state": "completed",
00:18:59.662 "digest": "sha256",
00:18:59.662 "dhgroup": "ffdhe2048"
00:18:59.662 }
00:18:59.662 }
00:18:59.662 ]'
00:18:59.662 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:59.662 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:59.662 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:59.662 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:18:59.662 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:59.662 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:59.662 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:59.662 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:59.920 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTQxNGM4YzkyOTYyOGQxOTU5YjdkNDM1MzA5MDcxOWPUhFu+: --dhchap-ctrl-secret DHHC-1:02:Zjk2YjVjZTI4Y2E4OTcyZGJhNjZkNWEzZGIxMzBmMzVkNTQ3ZGRmNDQ2NzA3ZWQzUSvQ0g==:
00:18:59.920 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTQxNGM4YzkyOTYyOGQxOTU5YjdkNDM1MzA5MDcxOWPUhFu+: --dhchap-ctrl-secret DHHC-1:02:Zjk2YjVjZTI4Y2E4OTcyZGJhNjZkNWEzZGIxMzBmMzVkNTQ3ZGRmNDQ2NzA3ZWQzUSvQ0g==:
00:19:00.853 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:01.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:01.112 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:19:01.112 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:01.112 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:01.112 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:01.112 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:01.112 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:19:01.112 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:19:01.370 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2
00:19:01.370 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:01.370 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:19:01.370 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:19:01.370 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:19:01.370 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:01.370 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:01.370 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:01.370 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:01.370 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:01.370 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:01.370 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:01.371 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:01.628
00:19:01.628 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:01.628 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:01.628 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:01.886 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:01.887 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:01.887 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:01.887 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:01.887 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:01.887 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:01.887 {
00:19:01.887 "cntlid": 13,
00:19:01.887 "qid": 0,
00:19:01.887 "state": "enabled",
00:19:01.887 "thread": "nvmf_tgt_poll_group_000",
00:19:01.887 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:19:01.887 "listen_address": {
00:19:01.887 "trtype": "TCP",
00:19:01.887 "adrfam": "IPv4",
00:19:01.887 "traddr": "10.0.0.2",
00:19:01.887 "trsvcid": "4420"
00:19:01.887 },
00:19:01.887 "peer_address": {
00:19:01.887 "trtype": "TCP",
00:19:01.887 "adrfam": "IPv4",
00:19:01.887 "traddr": "10.0.0.1",
00:19:01.887 "trsvcid": "44484"
00:19:01.887 },
00:19:01.887 "auth": {
00:19:01.887 "state": "completed",
00:19:01.887 "digest": "sha256",
00:19:01.887 "dhgroup": "ffdhe2048"
00:19:01.887 }
00:19:01.887 }
00:19:01.887 ]'
00:19:01.887 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:01.887 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:01.887 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:01.887 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:19:01.887 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:02.144 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:02.144 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:02.144 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:02.401 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWQ1MzZiMzE5M2I5NWQ3NmM2NjFhYWVlYjJkMTRmNmRiNzk5NDM2MzA5ZmQ3YTNk3WThYw==: --dhchap-ctrl-secret DHHC-1:01:N2EzOGM3ODk5NDQzMDVmY2U0YTIxNTBkNjIyZTYwMza52AGC:
00:19:02.401 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ZWQ1MzZiMzE5M2I5NWQ3NmM2NjFhYWVlYjJkMTRmNmRiNzk5NDM2MzA5ZmQ3YTNk3WThYw==: --dhchap-ctrl-secret DHHC-1:01:N2EzOGM3ODk5NDQzMDVmY2U0YTIxNTBkNjIyZTYwMza52AGC:
00:19:03.397 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:03.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:03.397 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:19:03.397 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:03.397 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:03.397 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:03.397 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:03.397 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:19:03.397 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:19:03.656 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3
00:19:03.656 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:03.656 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:19:03.656 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:19:03.656 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:19:03.656 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:03.656 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:19:03.656 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:03.656 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:03.656 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:03.656 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:19:03.656 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:03.656 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:03.914
00:19:03.914 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:03.914 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:03.914 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:04.171 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:04.171 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:04.171 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:04.171 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:04.171 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:04.171 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:04.171 {
00:19:04.171 "cntlid": 15,
00:19:04.171 "qid": 0,
00:19:04.171 "state": "enabled",
00:19:04.171 "thread": "nvmf_tgt_poll_group_000",
00:19:04.171 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:19:04.171 "listen_address": {
00:19:04.171 "trtype": "TCP",
00:19:04.171 "adrfam": "IPv4",
00:19:04.172 "traddr": "10.0.0.2",
00:19:04.172 "trsvcid": "4420"
00:19:04.172 },
00:19:04.172 "peer_address": {
00:19:04.172 "trtype": "TCP",
00:19:04.172 "adrfam": "IPv4",
00:19:04.172 "traddr": "10.0.0.1",
00:19:04.172 "trsvcid": "44512"
00:19:04.172 },
00:19:04.172 "auth": {
00:19:04.172 "state": "completed",
00:19:04.172 "digest": "sha256",
00:19:04.172 "dhgroup": "ffdhe2048"
00:19:04.172 }
00:19:04.172 }
00:19:04.172 ]'
00:19:04.172 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:04.172 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:04.172 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:04.172 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:19:04.172 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:04.429 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:04.429 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:04.429 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:04.686 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmQzMmQyMzZlNGExOTRhZGYxNmM1YWY1N2Q3OWFiOTAxODUyYmM3YzFjMzUyMjQwM2VkNjYyYTNiYzgwYzhkNwO/ryk=:
00:19:04.686 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YmQzMmQyMzZlNGExOTRhZGYxNmM1YWY1N2Q3OWFiOTAxODUyYmM3YzFjMzUyMjQwM2VkNjYyYTNiYzgwYzhkNwO/ryk=:
00:19:05.617 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:05.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:05.617 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:19:05.617 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:05.617 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:05.617 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:05.617 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:19:05.617 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:05.617 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:19:05.617 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:19:05.875 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0
00:19:05.875 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:05.875 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:19:05.875 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:19:05.875 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:19:05.875 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:05.875 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:05.875 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:05.875 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:05.875 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:05.875 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:05.875 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:05.875 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:06.134
00:19:06.134 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:06.134 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:06.134 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:06.699 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:06.699 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:06.699 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:06.699 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:06.699 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:06.699 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:06.699 {
00:19:06.699 "cntlid": 17,
00:19:06.699 "qid": 0,
00:19:06.699 "state": "enabled",
00:19:06.699 "thread": "nvmf_tgt_poll_group_000",
00:19:06.699 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:19:06.699 "listen_address": {
00:19:06.699 "trtype": "TCP",
00:19:06.699 "adrfam": "IPv4",
00:19:06.699 "traddr": "10.0.0.2",
00:19:06.699 "trsvcid": "4420"
00:19:06.699 },
00:19:06.699 "peer_address": {
00:19:06.699 "trtype": "TCP",
00:19:06.699 "adrfam": "IPv4",
00:19:06.699 "traddr": "10.0.0.1",
00:19:06.699 "trsvcid": "44546"
00:19:06.699 },
00:19:06.699 "auth": {
00:19:06.699 "state": "completed",
00:19:06.699 "digest": "sha256",
00:19:06.699 "dhgroup": "ffdhe3072"
00:19:06.699 }
00:19:06.699 }
00:19:06.699 ]'
00:19:06.699 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:06.699 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:06.699 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:06.699 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:19:06.699 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:06.699 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:06.699 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:06.699 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:06.957 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODE2MTc5YjJiODk0MThmYTVhMTM0ZmQyMTZiZDQ3YmFmNWJlMzBjMGI1MmEwMDQ1QODthw==: --dhchap-ctrl-secret DHHC-1:03:NWExMzdmODlhYzdhYjY2ZDcxY2MyMmVlYTY1M2Q4MDQ3MDM2MDFiMDMyODg3Y2ExZDRjNzQ1OTRkMzYyMGU4MD8roSo=:
00:19:06.957 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ODE2MTc5YjJiODk0MThmYTVhMTM0ZmQyMTZiZDQ3YmFmNWJlMzBjMGI1MmEwMDQ1QODthw==: --dhchap-ctrl-secret DHHC-1:03:NWExMzdmODlhYzdhYjY2ZDcxY2MyMmVlYTY1M2Q4MDQ3MDM2MDFiMDMyODg3Y2ExZDRjNzQ1OTRkMzYyMGU4MD8roSo=:
00:19:07.890 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:07.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:07.890 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:19:07.890 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:07.890 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:07.890 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:07.890 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:07.890 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:19:07.890 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:19:08.149 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1
00:19:08.149 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:08.149 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:19:08.149 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:19:08.149 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:19:08.149 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:08.149 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:08.149 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:08.149 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:08.149 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:08.149 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:08.149 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:08.149 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:08.715
00:19:08.715 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:08.715 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:08.715 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:08.973 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:08.973 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:08.973 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:08.973 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:08.973 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:08.973 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:08.973 {
00:19:08.973 "cntlid": 19,
00:19:08.973 "qid": 0,
00:19:08.973 "state": "enabled",
00:19:08.973 "thread": "nvmf_tgt_poll_group_000",
00:19:08.973 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:19:08.973 "listen_address": {
00:19:08.973 "trtype": "TCP",
00:19:08.973 "adrfam": "IPv4",
00:19:08.973 "traddr": "10.0.0.2",
00:19:08.973 "trsvcid": "4420"
00:19:08.973 },
00:19:08.973 "peer_address": {
00:19:08.973 "trtype": "TCP",
00:19:08.973 "adrfam": "IPv4",
00:19:08.973 "traddr": "10.0.0.1",
00:19:08.973 "trsvcid": "41568"
00:19:08.973 },
00:19:08.973 "auth": {
00:19:08.973 "state": "completed",
00:19:08.973 "digest": "sha256",
00:19:08.973 "dhgroup": "ffdhe3072"
00:19:08.973 }
00:19:08.973 }
00:19:08.973 ]'
00:19:08.973 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:08.973 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:08.973 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:08.973 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:19:08.973 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:08.973 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:08.973 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:08.973 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:09.231 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTQxNGM4YzkyOTYyOGQxOTU5YjdkNDM1MzA5MDcxOWPUhFu+: --dhchap-ctrl-secret DHHC-1:02:Zjk2YjVjZTI4Y2E4OTcyZGJhNjZkNWEzZGIxMzBmMzVkNTQ3ZGRmNDQ2NzA3ZWQzUSvQ0g==:
00:19:09.231 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTQxNGM4YzkyOTYyOGQxOTU5YjdkNDM1MzA5MDcxOWPUhFu+: --dhchap-ctrl-secret DHHC-1:02:Zjk2YjVjZTI4Y2E4OTcyZGJhNjZkNWEzZGIxMzBmMzVkNTQ3ZGRmNDQ2NzA3ZWQzUSvQ0g==:
00:19:10.164 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:10.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:10.164 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:19:10.164 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:10.164 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:10.164 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:10.164 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:10.164 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:19:10.164 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:19:10.730 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2
00:19:10.730 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:10.730 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:19:10.730 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:19:10.730 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:19:10.730 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:10.730 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:10.730 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:10.730 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:10.730 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:10.730 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:10.730 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:10.730 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:10.988
00:19:10.988 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:10.988 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:10.988 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:11.246 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:11.246 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:11.246 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:11.246 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:11.246 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:11.246 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:11.246 {
00:19:11.246 "cntlid": 21,
00:19:11.246 "qid": 0,
00:19:11.246 "state": "enabled",
00:19:11.246 "thread": "nvmf_tgt_poll_group_000",
00:19:11.246 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:19:11.246 "listen_address": {
00:19:11.246 "trtype": "TCP",
00:19:11.246 "adrfam": "IPv4",
00:19:11.246 "traddr": "10.0.0.2",
00:19:11.246
"trsvcid": "4420" 00:19:11.246 }, 00:19:11.246 "peer_address": { 00:19:11.246 "trtype": "TCP", 00:19:11.246 "adrfam": "IPv4", 00:19:11.246 "traddr": "10.0.0.1", 00:19:11.246 "trsvcid": "41600" 00:19:11.246 }, 00:19:11.246 "auth": { 00:19:11.246 "state": "completed", 00:19:11.246 "digest": "sha256", 00:19:11.246 "dhgroup": "ffdhe3072" 00:19:11.246 } 00:19:11.246 } 00:19:11.246 ]' 00:19:11.246 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:11.246 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:11.246 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:11.246 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:11.246 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:11.246 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.246 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.246 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.504 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWQ1MzZiMzE5M2I5NWQ3NmM2NjFhYWVlYjJkMTRmNmRiNzk5NDM2MzA5ZmQ3YTNk3WThYw==: --dhchap-ctrl-secret DHHC-1:01:N2EzOGM3ODk5NDQzMDVmY2U0YTIxNTBkNjIyZTYwMza52AGC: 00:19:11.504 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ZWQ1MzZiMzE5M2I5NWQ3NmM2NjFhYWVlYjJkMTRmNmRiNzk5NDM2MzA5ZmQ3YTNk3WThYw==: --dhchap-ctrl-secret DHHC-1:01:N2EzOGM3ODk5NDQzMDVmY2U0YTIxNTBkNjIyZTYwMza52AGC: 00:19:12.878 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.878 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:12.878 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.878 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.878 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.878 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:12.878 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:12.878 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:12.878 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:19:12.878 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:12.878 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:12.878 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:12.878 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:12.878 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.878 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:12.878 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.878 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.878 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.878 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:12.878 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:12.878 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:13.136 00:19:13.136 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:13.136 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:13.136 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.394 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.394 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.394 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.394 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.394 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.394 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:13.394 { 00:19:13.394 "cntlid": 23, 00:19:13.394 "qid": 0, 00:19:13.394 "state": "enabled", 00:19:13.394 "thread": "nvmf_tgt_poll_group_000", 00:19:13.394 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:13.394 "listen_address": { 00:19:13.394 "trtype": "TCP", 00:19:13.394 "adrfam": "IPv4", 00:19:13.394 "traddr": "10.0.0.2", 00:19:13.394 "trsvcid": "4420" 00:19:13.394 }, 00:19:13.394 "peer_address": { 00:19:13.394 "trtype": "TCP", 00:19:13.394 "adrfam": "IPv4", 00:19:13.394 "traddr": "10.0.0.1", 00:19:13.394 "trsvcid": "41628" 00:19:13.394 }, 00:19:13.394 "auth": { 00:19:13.394 "state": "completed", 00:19:13.394 "digest": "sha256", 00:19:13.394 "dhgroup": "ffdhe3072" 00:19:13.394 } 00:19:13.394 } 00:19:13.394 ]' 00:19:13.653 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:13.653 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:13.653 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:13.653 16:27:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:13.653 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:13.653 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.653 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.653 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.911 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmQzMmQyMzZlNGExOTRhZGYxNmM1YWY1N2Q3OWFiOTAxODUyYmM3YzFjMzUyMjQwM2VkNjYyYTNiYzgwYzhkNwO/ryk=: 00:19:13.911 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YmQzMmQyMzZlNGExOTRhZGYxNmM1YWY1N2Q3OWFiOTAxODUyYmM3YzFjMzUyMjQwM2VkNjYyYTNiYzgwYzhkNwO/ryk=: 00:19:14.845 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.845 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:14.845 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.845 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:14.845 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.845 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:14.845 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:14.845 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:14.845 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:15.103 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:19:15.103 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:15.103 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:15.103 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:15.103 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:15.103 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.103 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.103 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.103 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:15.103 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.103 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.103 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.103 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.669 00:19:15.669 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:15.669 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.669 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:15.927 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.927 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.927 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.927 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.927 16:27:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.927 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:15.927 { 00:19:15.927 "cntlid": 25, 00:19:15.927 "qid": 0, 00:19:15.927 "state": "enabled", 00:19:15.927 "thread": "nvmf_tgt_poll_group_000", 00:19:15.927 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:15.927 "listen_address": { 00:19:15.927 "trtype": "TCP", 00:19:15.927 "adrfam": "IPv4", 00:19:15.927 "traddr": "10.0.0.2", 00:19:15.927 "trsvcid": "4420" 00:19:15.927 }, 00:19:15.927 "peer_address": { 00:19:15.927 "trtype": "TCP", 00:19:15.927 "adrfam": "IPv4", 00:19:15.927 "traddr": "10.0.0.1", 00:19:15.927 "trsvcid": "41654" 00:19:15.927 }, 00:19:15.927 "auth": { 00:19:15.927 "state": "completed", 00:19:15.927 "digest": "sha256", 00:19:15.927 "dhgroup": "ffdhe4096" 00:19:15.927 } 00:19:15.927 } 00:19:15.927 ]' 00:19:15.927 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:15.927 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:15.927 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:15.927 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:15.927 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:15.927 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.927 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.927 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.185 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODE2MTc5YjJiODk0MThmYTVhMTM0ZmQyMTZiZDQ3YmFmNWJlMzBjMGI1MmEwMDQ1QODthw==: --dhchap-ctrl-secret DHHC-1:03:NWExMzdmODlhYzdhYjY2ZDcxY2MyMmVlYTY1M2Q4MDQ3MDM2MDFiMDMyODg3Y2ExZDRjNzQ1OTRkMzYyMGU4MD8roSo=: 00:19:16.185 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ODE2MTc5YjJiODk0MThmYTVhMTM0ZmQyMTZiZDQ3YmFmNWJlMzBjMGI1MmEwMDQ1QODthw==: --dhchap-ctrl-secret DHHC-1:03:NWExMzdmODlhYzdhYjY2ZDcxY2MyMmVlYTY1M2Q4MDQ3MDM2MDFiMDMyODg3Y2ExZDRjNzQ1OTRkMzYyMGU4MD8roSo=: 00:19:17.118 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.118 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:17.118 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.118 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.118 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.118 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:17.119 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:17.119 16:27:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:17.377 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:19:17.377 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:17.377 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:17.377 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:17.377 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:17.377 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.377 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.377 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.377 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.377 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.377 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.377 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.377 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.943 00:19:17.943 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:17.943 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.943 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:18.201 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.201 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.201 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.201 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.201 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.201 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:18.201 { 00:19:18.201 "cntlid": 27, 00:19:18.201 "qid": 0, 00:19:18.201 "state": "enabled", 00:19:18.201 "thread": "nvmf_tgt_poll_group_000", 00:19:18.201 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:18.201 "listen_address": { 00:19:18.201 "trtype": "TCP", 00:19:18.201 "adrfam": "IPv4", 00:19:18.201 "traddr": "10.0.0.2", 00:19:18.201 
"trsvcid": "4420" 00:19:18.201 }, 00:19:18.201 "peer_address": { 00:19:18.201 "trtype": "TCP", 00:19:18.201 "adrfam": "IPv4", 00:19:18.201 "traddr": "10.0.0.1", 00:19:18.201 "trsvcid": "58400" 00:19:18.201 }, 00:19:18.201 "auth": { 00:19:18.201 "state": "completed", 00:19:18.201 "digest": "sha256", 00:19:18.201 "dhgroup": "ffdhe4096" 00:19:18.201 } 00:19:18.201 } 00:19:18.201 ]' 00:19:18.201 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:18.201 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:18.201 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:18.201 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:18.201 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:18.201 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.201 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.201 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.459 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTQxNGM4YzkyOTYyOGQxOTU5YjdkNDM1MzA5MDcxOWPUhFu+: --dhchap-ctrl-secret DHHC-1:02:Zjk2YjVjZTI4Y2E4OTcyZGJhNjZkNWEzZGIxMzBmMzVkNTQ3ZGRmNDQ2NzA3ZWQzUSvQ0g==: 00:19:18.459 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTQxNGM4YzkyOTYyOGQxOTU5YjdkNDM1MzA5MDcxOWPUhFu+: --dhchap-ctrl-secret DHHC-1:02:Zjk2YjVjZTI4Y2E4OTcyZGJhNjZkNWEzZGIxMzBmMzVkNTQ3ZGRmNDQ2NzA3ZWQzUSvQ0g==: 00:19:19.831 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.831 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:19.831 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.831 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.831 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.831 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:19.831 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:19.831 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:19.831 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:19:19.831 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:19.831 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:19.831 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:19.831 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:19.831 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.831 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.831 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.831 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.831 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.831 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.831 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.831 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.398 00:19:20.398 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:20.398 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:19:20.398 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.656 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.656 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.656 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.656 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.656 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.656 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:20.656 { 00:19:20.656 "cntlid": 29, 00:19:20.656 "qid": 0, 00:19:20.656 "state": "enabled", 00:19:20.656 "thread": "nvmf_tgt_poll_group_000", 00:19:20.656 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:20.656 "listen_address": { 00:19:20.656 "trtype": "TCP", 00:19:20.656 "adrfam": "IPv4", 00:19:20.656 "traddr": "10.0.0.2", 00:19:20.656 "trsvcid": "4420" 00:19:20.656 }, 00:19:20.656 "peer_address": { 00:19:20.656 "trtype": "TCP", 00:19:20.656 "adrfam": "IPv4", 00:19:20.656 "traddr": "10.0.0.1", 00:19:20.656 "trsvcid": "58428" 00:19:20.656 }, 00:19:20.656 "auth": { 00:19:20.656 "state": "completed", 00:19:20.656 "digest": "sha256", 00:19:20.656 "dhgroup": "ffdhe4096" 00:19:20.656 } 00:19:20.656 } 00:19:20.656 ]' 00:19:20.656 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:20.656 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:20.656 16:27:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:20.656 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:20.656 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:20.656 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.656 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.656 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.914 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWQ1MzZiMzE5M2I5NWQ3NmM2NjFhYWVlYjJkMTRmNmRiNzk5NDM2MzA5ZmQ3YTNk3WThYw==: --dhchap-ctrl-secret DHHC-1:01:N2EzOGM3ODk5NDQzMDVmY2U0YTIxNTBkNjIyZTYwMza52AGC: 00:19:20.914 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ZWQ1MzZiMzE5M2I5NWQ3NmM2NjFhYWVlYjJkMTRmNmRiNzk5NDM2MzA5ZmQ3YTNk3WThYw==: --dhchap-ctrl-secret DHHC-1:01:N2EzOGM3ODk5NDQzMDVmY2U0YTIxNTBkNjIyZTYwMza52AGC: 00:19:21.849 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.849 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:21.849 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.849 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.849 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.849 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:21.849 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:21.849 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:22.415 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:19:22.415 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:22.415 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:22.415 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:22.415 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:22.415 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.415 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:22.415 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.415 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.415 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.415 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:22.415 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:22.415 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:22.674 00:19:22.674 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:22.674 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:22.674 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.932 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.932 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.932 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.932 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:22.932 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.932 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:22.932 { 00:19:22.932 "cntlid": 31, 00:19:22.932 "qid": 0, 00:19:22.932 "state": "enabled", 00:19:22.932 "thread": "nvmf_tgt_poll_group_000", 00:19:22.932 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:22.932 "listen_address": { 00:19:22.932 "trtype": "TCP", 00:19:22.932 "adrfam": "IPv4", 00:19:22.932 "traddr": "10.0.0.2", 00:19:22.932 "trsvcid": "4420" 00:19:22.932 }, 00:19:22.932 "peer_address": { 00:19:22.932 "trtype": "TCP", 00:19:22.932 "adrfam": "IPv4", 00:19:22.932 "traddr": "10.0.0.1", 00:19:22.932 "trsvcid": "58450" 00:19:22.932 }, 00:19:22.932 "auth": { 00:19:22.932 "state": "completed", 00:19:22.932 "digest": "sha256", 00:19:22.932 "dhgroup": "ffdhe4096" 00:19:22.932 } 00:19:22.932 } 00:19:22.932 ]' 00:19:22.932 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:22.932 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:22.932 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:22.932 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:22.932 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:22.932 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.932 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.932 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.497 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmQzMmQyMzZlNGExOTRhZGYxNmM1YWY1N2Q3OWFiOTAxODUyYmM3YzFjMzUyMjQwM2VkNjYyYTNiYzgwYzhkNwO/ryk=: 00:19:23.497 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YmQzMmQyMzZlNGExOTRhZGYxNmM1YWY1N2Q3OWFiOTAxODUyYmM3YzFjMzUyMjQwM2VkNjYyYTNiYzgwYzhkNwO/ryk=: 00:19:24.429 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.429 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.429 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:24.429 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.429 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.429 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.429 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:24.429 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:24.429 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:24.429 16:27:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:24.687 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:19:24.687 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:24.687 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:24.687 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:24.687 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:24.687 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.687 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.687 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.687 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.687 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.687 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.687 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.687 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.251 00:19:25.251 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:25.251 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:25.251 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.508 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.508 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.508 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.508 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.508 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.508 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:25.508 { 00:19:25.508 "cntlid": 33, 00:19:25.508 "qid": 0, 00:19:25.508 "state": "enabled", 00:19:25.508 "thread": "nvmf_tgt_poll_group_000", 00:19:25.508 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:25.508 "listen_address": { 00:19:25.508 "trtype": "TCP", 00:19:25.508 "adrfam": "IPv4", 00:19:25.508 "traddr": "10.0.0.2", 00:19:25.508 
"trsvcid": "4420" 00:19:25.508 }, 00:19:25.508 "peer_address": { 00:19:25.508 "trtype": "TCP", 00:19:25.508 "adrfam": "IPv4", 00:19:25.508 "traddr": "10.0.0.1", 00:19:25.508 "trsvcid": "58482" 00:19:25.508 }, 00:19:25.508 "auth": { 00:19:25.508 "state": "completed", 00:19:25.508 "digest": "sha256", 00:19:25.508 "dhgroup": "ffdhe6144" 00:19:25.508 } 00:19:25.508 } 00:19:25.508 ]' 00:19:25.508 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:25.508 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:25.508 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:25.508 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:25.508 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:25.508 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.508 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.508 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.766 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODE2MTc5YjJiODk0MThmYTVhMTM0ZmQyMTZiZDQ3YmFmNWJlMzBjMGI1MmEwMDQ1QODthw==: --dhchap-ctrl-secret DHHC-1:03:NWExMzdmODlhYzdhYjY2ZDcxY2MyMmVlYTY1M2Q4MDQ3MDM2MDFiMDMyODg3Y2ExZDRjNzQ1OTRkMzYyMGU4MD8roSo=: 00:19:25.766 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ODE2MTc5YjJiODk0MThmYTVhMTM0ZmQyMTZiZDQ3YmFmNWJlMzBjMGI1MmEwMDQ1QODthw==: --dhchap-ctrl-secret DHHC-1:03:NWExMzdmODlhYzdhYjY2ZDcxY2MyMmVlYTY1M2Q4MDQ3MDM2MDFiMDMyODg3Y2ExZDRjNzQ1OTRkMzYyMGU4MD8roSo=: 00:19:26.698 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.954 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:26.954 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.954 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.954 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.954 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:26.954 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:26.954 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:27.211 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:19:27.212 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:27.212 16:27:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:27.212 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:27.212 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:27.212 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.212 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.212 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.212 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.212 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.212 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.212 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.212 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.776 00:19:27.777 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:27.777 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:27.777 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.034 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.034 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.034 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.034 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.034 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.034 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:28.034 { 00:19:28.034 "cntlid": 35, 00:19:28.034 "qid": 0, 00:19:28.034 "state": "enabled", 00:19:28.034 "thread": "nvmf_tgt_poll_group_000", 00:19:28.034 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:28.034 "listen_address": { 00:19:28.034 "trtype": "TCP", 00:19:28.034 "adrfam": "IPv4", 00:19:28.034 "traddr": "10.0.0.2", 00:19:28.034 "trsvcid": "4420" 00:19:28.034 }, 00:19:28.034 "peer_address": { 00:19:28.034 "trtype": "TCP", 00:19:28.034 "adrfam": "IPv4", 00:19:28.034 "traddr": "10.0.0.1", 00:19:28.034 "trsvcid": "60114" 00:19:28.034 }, 00:19:28.034 "auth": { 00:19:28.034 "state": "completed", 00:19:28.034 "digest": "sha256", 00:19:28.034 "dhgroup": "ffdhe6144" 00:19:28.034 } 00:19:28.034 } 00:19:28.034 ]' 00:19:28.034 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:28.034 16:27:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:28.034 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:28.034 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:28.034 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:28.034 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.034 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.034 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.291 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTQxNGM4YzkyOTYyOGQxOTU5YjdkNDM1MzA5MDcxOWPUhFu+: --dhchap-ctrl-secret DHHC-1:02:Zjk2YjVjZTI4Y2E4OTcyZGJhNjZkNWEzZGIxMzBmMzVkNTQ3ZGRmNDQ2NzA3ZWQzUSvQ0g==: 00:19:28.291 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTQxNGM4YzkyOTYyOGQxOTU5YjdkNDM1MzA5MDcxOWPUhFu+: --dhchap-ctrl-secret DHHC-1:02:Zjk2YjVjZTI4Y2E4OTcyZGJhNjZkNWEzZGIxMzBmMzVkNTQ3ZGRmNDQ2NzA3ZWQzUSvQ0g==: 00:19:29.664 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.664 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:29.664 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.664 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.664 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.664 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:29.664 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:29.664 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:29.664 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:19:29.664 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:29.664 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:29.664 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:29.664 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:29.664 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.664 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:19:29.664 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.664 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.664 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.664 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.664 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.664 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.229 00:19:30.229 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:30.229 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:30.229 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.487 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.487 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.487 16:27:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.487 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.487 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.487 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:30.487 { 00:19:30.487 "cntlid": 37, 00:19:30.487 "qid": 0, 00:19:30.487 "state": "enabled", 00:19:30.487 "thread": "nvmf_tgt_poll_group_000", 00:19:30.487 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:30.487 "listen_address": { 00:19:30.487 "trtype": "TCP", 00:19:30.487 "adrfam": "IPv4", 00:19:30.487 "traddr": "10.0.0.2", 00:19:30.487 "trsvcid": "4420" 00:19:30.487 }, 00:19:30.487 "peer_address": { 00:19:30.487 "trtype": "TCP", 00:19:30.487 "adrfam": "IPv4", 00:19:30.487 "traddr": "10.0.0.1", 00:19:30.487 "trsvcid": "60140" 00:19:30.487 }, 00:19:30.487 "auth": { 00:19:30.487 "state": "completed", 00:19:30.487 "digest": "sha256", 00:19:30.487 "dhgroup": "ffdhe6144" 00:19:30.487 } 00:19:30.487 } 00:19:30.487 ]' 00:19:30.487 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:30.745 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:30.745 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:30.745 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:30.745 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:30.745 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.745 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.745 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.004 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWQ1MzZiMzE5M2I5NWQ3NmM2NjFhYWVlYjJkMTRmNmRiNzk5NDM2MzA5ZmQ3YTNk3WThYw==: --dhchap-ctrl-secret DHHC-1:01:N2EzOGM3ODk5NDQzMDVmY2U0YTIxNTBkNjIyZTYwMza52AGC: 00:19:31.004 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ZWQ1MzZiMzE5M2I5NWQ3NmM2NjFhYWVlYjJkMTRmNmRiNzk5NDM2MzA5ZmQ3YTNk3WThYw==: --dhchap-ctrl-secret DHHC-1:01:N2EzOGM3ODk5NDQzMDVmY2U0YTIxNTBkNjIyZTYwMza52AGC: 00:19:31.938 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.938 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:31.938 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.938 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.938 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.938 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:31.938 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:31.938 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:32.197 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:19:32.197 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:32.197 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:32.197 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:32.197 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:32.197 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.197 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:32.197 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.197 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.197 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.197 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:32.197 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:32.197 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:33.131 00:19:33.131 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:33.131 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:33.131 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.131 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.131 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.131 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.131 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.131 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.131 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:33.131 { 00:19:33.131 "cntlid": 39, 00:19:33.131 "qid": 0, 00:19:33.131 "state": "enabled", 00:19:33.131 "thread": "nvmf_tgt_poll_group_000", 00:19:33.131 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:33.131 "listen_address": { 00:19:33.131 "trtype": "TCP", 00:19:33.131 "adrfam": 
"IPv4", 00:19:33.131 "traddr": "10.0.0.2", 00:19:33.131 "trsvcid": "4420" 00:19:33.131 }, 00:19:33.131 "peer_address": { 00:19:33.131 "trtype": "TCP", 00:19:33.131 "adrfam": "IPv4", 00:19:33.131 "traddr": "10.0.0.1", 00:19:33.131 "trsvcid": "60168" 00:19:33.131 }, 00:19:33.131 "auth": { 00:19:33.131 "state": "completed", 00:19:33.131 "digest": "sha256", 00:19:33.131 "dhgroup": "ffdhe6144" 00:19:33.131 } 00:19:33.131 } 00:19:33.131 ]' 00:19:33.131 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:33.131 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:33.131 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:33.388 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:33.389 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:33.389 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.389 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.389 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.674 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmQzMmQyMzZlNGExOTRhZGYxNmM1YWY1N2Q3OWFiOTAxODUyYmM3YzFjMzUyMjQwM2VkNjYyYTNiYzgwYzhkNwO/ryk=: 00:19:33.674 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YmQzMmQyMzZlNGExOTRhZGYxNmM1YWY1N2Q3OWFiOTAxODUyYmM3YzFjMzUyMjQwM2VkNjYyYTNiYzgwYzhkNwO/ryk=: 00:19:34.632 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.632 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.632 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.632 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.632 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.632 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.632 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:34.632 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:34.632 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:34.633 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:34.891 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:19:34.891 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:34.891 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:34.891 
16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:34.891 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:34.891 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.891 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.891 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.891 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.891 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.891 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.891 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.891 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.823 00:19:35.823 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:35.823 16:27:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:35.823 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.080 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.080 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.080 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.080 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.080 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.080 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:36.080 { 00:19:36.080 "cntlid": 41, 00:19:36.080 "qid": 0, 00:19:36.080 "state": "enabled", 00:19:36.080 "thread": "nvmf_tgt_poll_group_000", 00:19:36.080 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:36.080 "listen_address": { 00:19:36.080 "trtype": "TCP", 00:19:36.080 "adrfam": "IPv4", 00:19:36.080 "traddr": "10.0.0.2", 00:19:36.080 "trsvcid": "4420" 00:19:36.080 }, 00:19:36.080 "peer_address": { 00:19:36.080 "trtype": "TCP", 00:19:36.080 "adrfam": "IPv4", 00:19:36.080 "traddr": "10.0.0.1", 00:19:36.080 "trsvcid": "60186" 00:19:36.080 }, 00:19:36.080 "auth": { 00:19:36.080 "state": "completed", 00:19:36.080 "digest": "sha256", 00:19:36.080 "dhgroup": "ffdhe8192" 00:19:36.080 } 00:19:36.080 } 00:19:36.080 ]' 00:19:36.080 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:36.080 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:19:36.081 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:36.337 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:36.337 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:36.337 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.337 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.337 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.595 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODE2MTc5YjJiODk0MThmYTVhMTM0ZmQyMTZiZDQ3YmFmNWJlMzBjMGI1MmEwMDQ1QODthw==: --dhchap-ctrl-secret DHHC-1:03:NWExMzdmODlhYzdhYjY2ZDcxY2MyMmVlYTY1M2Q4MDQ3MDM2MDFiMDMyODg3Y2ExZDRjNzQ1OTRkMzYyMGU4MD8roSo=: 00:19:36.595 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ODE2MTc5YjJiODk0MThmYTVhMTM0ZmQyMTZiZDQ3YmFmNWJlMzBjMGI1MmEwMDQ1QODthw==: --dhchap-ctrl-secret DHHC-1:03:NWExMzdmODlhYzdhYjY2ZDcxY2MyMmVlYTY1M2Q4MDQ3MDM2MDFiMDMyODg3Y2ExZDRjNzQ1OTRkMzYyMGU4MD8roSo=: 00:19:37.527 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.527 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:37.527 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.527 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.527 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.527 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:37.527 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:37.527 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:37.785 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:19:37.785 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:37.785 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:37.785 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:37.785 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:37.785 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.785 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:19:37.785 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.785 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.785 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.785 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.785 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.785 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.718 00:19:38.718 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:38.718 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:38.718 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.975 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.975 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.975 16:27:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.976 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.976 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.976 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:38.976 { 00:19:38.976 "cntlid": 43, 00:19:38.976 "qid": 0, 00:19:38.976 "state": "enabled", 00:19:38.976 "thread": "nvmf_tgt_poll_group_000", 00:19:38.976 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:38.976 "listen_address": { 00:19:38.976 "trtype": "TCP", 00:19:38.976 "adrfam": "IPv4", 00:19:38.976 "traddr": "10.0.0.2", 00:19:38.976 "trsvcid": "4420" 00:19:38.976 }, 00:19:38.976 "peer_address": { 00:19:38.976 "trtype": "TCP", 00:19:38.976 "adrfam": "IPv4", 00:19:38.976 "traddr": "10.0.0.1", 00:19:38.976 "trsvcid": "45514" 00:19:38.976 }, 00:19:38.976 "auth": { 00:19:38.976 "state": "completed", 00:19:38.976 "digest": "sha256", 00:19:38.976 "dhgroup": "ffdhe8192" 00:19:38.976 } 00:19:38.976 } 00:19:38.976 ]' 00:19:38.976 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:38.976 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:38.976 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:38.976 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:38.976 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:38.976 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.976 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.976 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.234 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTQxNGM4YzkyOTYyOGQxOTU5YjdkNDM1MzA5MDcxOWPUhFu+: --dhchap-ctrl-secret DHHC-1:02:Zjk2YjVjZTI4Y2E4OTcyZGJhNjZkNWEzZGIxMzBmMzVkNTQ3ZGRmNDQ2NzA3ZWQzUSvQ0g==: 00:19:39.234 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTQxNGM4YzkyOTYyOGQxOTU5YjdkNDM1MzA5MDcxOWPUhFu+: --dhchap-ctrl-secret DHHC-1:02:Zjk2YjVjZTI4Y2E4OTcyZGJhNjZkNWEzZGIxMzBmMzVkNTQ3ZGRmNDQ2NzA3ZWQzUSvQ0g==: 00:19:40.608 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.608 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:40.608 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.608 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.608 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.608 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:40.608 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:40.608 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:40.608 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:19:40.608 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:40.608 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:40.608 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:40.608 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:40.608 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.608 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.608 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.608 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.608 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.608 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.608 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.608 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.542 00:19:41.542 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:41.542 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:41.542 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.800 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.800 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.800 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.800 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.800 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.800 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:41.800 { 00:19:41.800 "cntlid": 45, 00:19:41.800 "qid": 0, 00:19:41.800 "state": "enabled", 00:19:41.800 "thread": "nvmf_tgt_poll_group_000", 00:19:41.800 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:41.800 
"listen_address": { 00:19:41.800 "trtype": "TCP", 00:19:41.800 "adrfam": "IPv4", 00:19:41.800 "traddr": "10.0.0.2", 00:19:41.800 "trsvcid": "4420" 00:19:41.800 }, 00:19:41.800 "peer_address": { 00:19:41.800 "trtype": "TCP", 00:19:41.800 "adrfam": "IPv4", 00:19:41.800 "traddr": "10.0.0.1", 00:19:41.800 "trsvcid": "45552" 00:19:41.800 }, 00:19:41.800 "auth": { 00:19:41.800 "state": "completed", 00:19:41.800 "digest": "sha256", 00:19:41.800 "dhgroup": "ffdhe8192" 00:19:41.800 } 00:19:41.800 } 00:19:41.800 ]' 00:19:41.800 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:41.800 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:41.800 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:41.800 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:41.800 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:42.058 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.058 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.058 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.317 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWQ1MzZiMzE5M2I5NWQ3NmM2NjFhYWVlYjJkMTRmNmRiNzk5NDM2MzA5ZmQ3YTNk3WThYw==: --dhchap-ctrl-secret DHHC-1:01:N2EzOGM3ODk5NDQzMDVmY2U0YTIxNTBkNjIyZTYwMza52AGC: 00:19:42.317 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ZWQ1MzZiMzE5M2I5NWQ3NmM2NjFhYWVlYjJkMTRmNmRiNzk5NDM2MzA5ZmQ3YTNk3WThYw==: --dhchap-ctrl-secret DHHC-1:01:N2EzOGM3ODk5NDQzMDVmY2U0YTIxNTBkNjIyZTYwMza52AGC: 00:19:43.249 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.249 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:43.249 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.249 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.249 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.249 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:43.249 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:43.249 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:43.506 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:19:43.507 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:43.507 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:19:43.507 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:43.507 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:43.507 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.507 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:43.507 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.507 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.507 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.507 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:43.507 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:43.507 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:44.440 00:19:44.440 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:44.440 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:19:44.440 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.698 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.698 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.698 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.698 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.698 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.698 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:44.698 { 00:19:44.698 "cntlid": 47, 00:19:44.698 "qid": 0, 00:19:44.698 "state": "enabled", 00:19:44.698 "thread": "nvmf_tgt_poll_group_000", 00:19:44.698 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:44.698 "listen_address": { 00:19:44.698 "trtype": "TCP", 00:19:44.698 "adrfam": "IPv4", 00:19:44.698 "traddr": "10.0.0.2", 00:19:44.698 "trsvcid": "4420" 00:19:44.698 }, 00:19:44.698 "peer_address": { 00:19:44.698 "trtype": "TCP", 00:19:44.698 "adrfam": "IPv4", 00:19:44.698 "traddr": "10.0.0.1", 00:19:44.698 "trsvcid": "45572" 00:19:44.698 }, 00:19:44.698 "auth": { 00:19:44.698 "state": "completed", 00:19:44.698 "digest": "sha256", 00:19:44.698 "dhgroup": "ffdhe8192" 00:19:44.698 } 00:19:44.698 } 00:19:44.698 ]' 00:19:44.698 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:44.698 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:44.698 16:27:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:44.956 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:19:44.956 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:44.956 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:44.956 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:44.956 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:45.214 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmQzMmQyMzZlNGExOTRhZGYxNmM1YWY1N2Q3OWFiOTAxODUyYmM3YzFjMzUyMjQwM2VkNjYyYTNiYzgwYzhkNwO/ryk=:
00:19:45.214 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YmQzMmQyMzZlNGExOTRhZGYxNmM1YWY1N2Q3OWFiOTAxODUyYmM3YzFjMzUyMjQwM2VkNjYyYTNiYzgwYzhkNwO/ryk=:
00:19:46.149 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:46.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:46.149 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:19:46.149 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:46.149 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:46.149 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:46.149 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:19:46.149 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:19:46.149 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:46.149 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:19:46.149 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:19:46.407 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0
00:19:46.407 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:46.407 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:46.407 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:19:46.407 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:19:46.407 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:46.407 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:46.407 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:46.407 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:46.407 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:46.407 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:46.407 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:46.407 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:46.665
00:19:46.665 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:46.665 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:46.665 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:46.924 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:46.924 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:46.924 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:46.924 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:46.924 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:46.924 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:46.924 {
00:19:46.924 "cntlid": 49,
00:19:46.924 "qid": 0,
00:19:46.924 "state": "enabled",
00:19:46.924 "thread": "nvmf_tgt_poll_group_000",
00:19:46.924 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:19:46.924 "listen_address": {
00:19:46.924 "trtype": "TCP",
00:19:46.924 "adrfam": "IPv4",
00:19:46.924 "traddr": "10.0.0.2",
00:19:46.924 "trsvcid": "4420"
00:19:46.924 },
00:19:46.924 "peer_address": {
00:19:46.924 "trtype": "TCP",
00:19:46.924 "adrfam": "IPv4",
00:19:46.924 "traddr": "10.0.0.1",
00:19:46.924 "trsvcid": "48886"
00:19:46.924 },
00:19:46.924 "auth": {
00:19:46.924 "state": "completed",
00:19:46.924 "digest": "sha384",
00:19:46.924 "dhgroup": "null"
00:19:46.924 }
00:19:46.924 }
00:19:46.924 ]'
00:19:46.924 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:47.182 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:47.182 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:47.182 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:19:47.182 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:47.182 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:47.182 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:47.182 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:47.440 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODE2MTc5YjJiODk0MThmYTVhMTM0ZmQyMTZiZDQ3YmFmNWJlMzBjMGI1MmEwMDQ1QODthw==: --dhchap-ctrl-secret DHHC-1:03:NWExMzdmODlhYzdhYjY2ZDcxY2MyMmVlYTY1M2Q4MDQ3MDM2MDFiMDMyODg3Y2ExZDRjNzQ1OTRkMzYyMGU4MD8roSo=:
00:19:47.440 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ODE2MTc5YjJiODk0MThmYTVhMTM0ZmQyMTZiZDQ3YmFmNWJlMzBjMGI1MmEwMDQ1QODthw==: --dhchap-ctrl-secret DHHC-1:03:NWExMzdmODlhYzdhYjY2ZDcxY2MyMmVlYTY1M2Q4MDQ3MDM2MDFiMDMyODg3Y2ExZDRjNzQ1OTRkMzYyMGU4MD8roSo=:
00:19:48.373 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:48.373 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:48.373 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:19:48.373 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:48.373 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:48.373 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:48.373 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:48.373 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:19:48.373 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:19:48.638 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1
00:19:48.638 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:48.638 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:48.638 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:19:48.638 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:19:48.638 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:48.638 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:48.638 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:48.638 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:48.896 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:48.896 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:48.896 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:48.896 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:49.155
00:19:49.155 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:49.155 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:49.155 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:49.413 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:49.413 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:49.413 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:49.413 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:49.413 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:49.413 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:49.413 {
00:19:49.413 "cntlid": 51,
00:19:49.413 "qid": 0,
00:19:49.413 "state": "enabled",
00:19:49.413 "thread": "nvmf_tgt_poll_group_000",
00:19:49.413 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:19:49.413 "listen_address": {
00:19:49.413 "trtype": "TCP",
00:19:49.413 "adrfam": "IPv4",
00:19:49.413 "traddr": "10.0.0.2",
00:19:49.413 "trsvcid": "4420"
00:19:49.413 },
00:19:49.413 "peer_address": {
00:19:49.413 "trtype": "TCP",
00:19:49.413 "adrfam": "IPv4",
00:19:49.413 "traddr": "10.0.0.1",
00:19:49.413 "trsvcid": "48920"
00:19:49.413 },
00:19:49.413 "auth": {
00:19:49.413 "state": "completed",
00:19:49.413 "digest": "sha384",
00:19:49.413 "dhgroup": "null"
00:19:49.413 }
00:19:49.413 }
00:19:49.413 ]'
00:19:49.413 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:49.413 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:49.413 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:49.413 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:19:49.413 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:49.413 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:49.413 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:49.413 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:49.672 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTQxNGM4YzkyOTYyOGQxOTU5YjdkNDM1MzA5MDcxOWPUhFu+: --dhchap-ctrl-secret DHHC-1:02:Zjk2YjVjZTI4Y2E4OTcyZGJhNjZkNWEzZGIxMzBmMzVkNTQ3ZGRmNDQ2NzA3ZWQzUSvQ0g==:
00:19:49.672 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTQxNGM4YzkyOTYyOGQxOTU5YjdkNDM1MzA5MDcxOWPUhFu+: --dhchap-ctrl-secret DHHC-1:02:Zjk2YjVjZTI4Y2E4OTcyZGJhNjZkNWEzZGIxMzBmMzVkNTQ3ZGRmNDQ2NzA3ZWQzUSvQ0g==:
00:19:50.606 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:50.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:50.864 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:19:50.864 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:50.864 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:50.864 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:50.864 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:50.864 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:19:50.864 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:19:51.122 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2
00:19:51.122 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:51.122 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:51.122 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:19:51.122 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:19:51.122 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:51.122 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:51.122 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:51.122 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:51.122 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:51.122 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:51.122 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:51.122 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:51.379
00:19:51.379 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:51.379 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:51.379 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:51.637 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:51.637 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:51.637 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:51.637 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:51.637 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:51.637 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:51.637 {
00:19:51.637 "cntlid": 53,
00:19:51.637 "qid": 0,
00:19:51.637 "state": "enabled",
00:19:51.637 "thread": "nvmf_tgt_poll_group_000",
00:19:51.637 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:19:51.637 "listen_address": {
00:19:51.637 "trtype": "TCP",
00:19:51.637 "adrfam": "IPv4",
00:19:51.637 "traddr": "10.0.0.2",
00:19:51.637 "trsvcid": "4420"
00:19:51.637 },
00:19:51.637 "peer_address": {
00:19:51.637 "trtype": "TCP",
00:19:51.637 "adrfam": "IPv4",
00:19:51.637 "traddr": "10.0.0.1",
00:19:51.637 "trsvcid": "48948"
00:19:51.637 },
00:19:51.637 "auth": {
00:19:51.637 "state": "completed",
00:19:51.637 "digest": "sha384",
00:19:51.637 "dhgroup": "null"
00:19:51.637 }
00:19:51.637 }
00:19:51.637 ]'
00:19:51.637 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:51.637 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:51.637 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:51.637 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:19:51.637 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:51.896 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:51.896 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:51.896 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:52.153 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWQ1MzZiMzE5M2I5NWQ3NmM2NjFhYWVlYjJkMTRmNmRiNzk5NDM2MzA5ZmQ3YTNk3WThYw==: --dhchap-ctrl-secret DHHC-1:01:N2EzOGM3ODk5NDQzMDVmY2U0YTIxNTBkNjIyZTYwMza52AGC:
00:19:52.154 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ZWQ1MzZiMzE5M2I5NWQ3NmM2NjFhYWVlYjJkMTRmNmRiNzk5NDM2MzA5ZmQ3YTNk3WThYw==: --dhchap-ctrl-secret DHHC-1:01:N2EzOGM3ODk5NDQzMDVmY2U0YTIxNTBkNjIyZTYwMza52AGC:
00:19:53.087 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:53.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:53.087 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:19:53.087 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:53.087 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:53.087 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:53.087 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:53.087 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:19:53.087 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:19:53.344 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3
00:19:53.344 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:53.344 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:53.344 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:19:53.344 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:19:53.344 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:53.344 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:19:53.344 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:53.344 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:53.344 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:53.344 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:19:53.344 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:53.344 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:53.601
00:19:53.601 16:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:53.601 16:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:53.601 16:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:53.858 16:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:53.858 16:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:53.858 16:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:53.858 16:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:53.858 16:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:53.858 16:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:53.858 {
00:19:53.858 "cntlid": 55,
00:19:53.858 "qid": 0,
00:19:53.858 "state": "enabled",
00:19:53.858 "thread": "nvmf_tgt_poll_group_000",
00:19:53.858 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:19:53.858 "listen_address": {
00:19:53.858 "trtype": "TCP",
00:19:53.858 "adrfam": "IPv4",
00:19:53.858 "traddr": "10.0.0.2",
00:19:53.858 "trsvcid": "4420"
00:19:53.858 },
00:19:53.858 "peer_address": {
00:19:53.858 "trtype": "TCP",
00:19:53.858 "adrfam": "IPv4",
00:19:53.858 "traddr": "10.0.0.1",
00:19:53.858 "trsvcid": "48996"
00:19:53.858 },
00:19:53.858 "auth": {
00:19:53.858 "state": "completed",
00:19:53.858 "digest": "sha384",
00:19:53.858 "dhgroup": "null"
00:19:53.858 }
00:19:53.858 }
00:19:53.858 ]'
00:19:53.858 16:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:54.115 16:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:54.115 16:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:54.115 16:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:19:54.115 16:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:54.115 16:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:54.115 16:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:54.115 16:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:54.373 16:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmQzMmQyMzZlNGExOTRhZGYxNmM1YWY1N2Q3OWFiOTAxODUyYmM3YzFjMzUyMjQwM2VkNjYyYTNiYzgwYzhkNwO/ryk=:
00:19:54.373 16:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YmQzMmQyMzZlNGExOTRhZGYxNmM1YWY1N2Q3OWFiOTAxODUyYmM3YzFjMzUyMjQwM2VkNjYyYTNiYzgwYzhkNwO/ryk=:
00:19:55.304 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:55.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:55.304 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:19:55.304 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:55.304 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:55.304 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:55.305 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:19:55.305 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:55.305 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:19:55.305 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:19:55.562 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0
00:19:55.562 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:55.562 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:55.562 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:19:55.562 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:19:55.562 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:55.562 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:55.562 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:55.562 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:55.562 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:55.562 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:55.562 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:55.562 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:55.819
00:19:55.819 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:55.819 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:55.819 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:56.382 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:56.382 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:56.382 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:56.382 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:56.382 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:56.382 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:56.382 {
00:19:56.382 "cntlid": 57,
00:19:56.382 "qid": 0,
00:19:56.382 "state": "enabled",
00:19:56.382 "thread": "nvmf_tgt_poll_group_000",
00:19:56.382 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:19:56.382 "listen_address": {
00:19:56.382 "trtype": "TCP",
00:19:56.382 "adrfam": "IPv4",
00:19:56.382 "traddr": "10.0.0.2",
00:19:56.382 "trsvcid": "4420"
00:19:56.382 },
00:19:56.382 "peer_address": {
00:19:56.382 "trtype": "TCP",
00:19:56.382 "adrfam": "IPv4",
00:19:56.382 "traddr": "10.0.0.1",
00:19:56.382 "trsvcid": "49008"
00:19:56.382 },
00:19:56.382 "auth": {
00:19:56.382 "state": "completed",
00:19:56.382 "digest": "sha384",
00:19:56.382 "dhgroup": "ffdhe2048"
00:19:56.382 }
00:19:56.382 }
00:19:56.382 ]'
00:19:56.382 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:56.382 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:56.383 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:56.383 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:19:56.383 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:56.383 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:56.383 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:56.383 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:56.641 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODE2MTc5YjJiODk0MThmYTVhMTM0ZmQyMTZiZDQ3YmFmNWJlMzBjMGI1MmEwMDQ1QODthw==: --dhchap-ctrl-secret DHHC-1:03:NWExMzdmODlhYzdhYjY2ZDcxY2MyMmVlYTY1M2Q4MDQ3MDM2MDFiMDMyODg3Y2ExZDRjNzQ1OTRkMzYyMGU4MD8roSo=:
00:19:56.641 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ODE2MTc5YjJiODk0MThmYTVhMTM0ZmQyMTZiZDQ3YmFmNWJlMzBjMGI1MmEwMDQ1QODthw==: --dhchap-ctrl-secret DHHC-1:03:NWExMzdmODlhYzdhYjY2ZDcxY2MyMmVlYTY1M2Q4MDQ3MDM2MDFiMDMyODg3Y2ExZDRjNzQ1OTRkMzYyMGU4MD8roSo=:
00:19:57.575 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:57.575 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:57.575 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:19:57.575 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:57.575 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:57.575 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:57.575 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:57.575 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:19:57.575 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:19:57.833 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1
00:19:57.833 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:57.833 16:27:58
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:57.833 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:57.833 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:57.833 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.833 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.833 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.833 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.833 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.833 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.833 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.833 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.400 00:19:58.400 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.400 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.400 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.659 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.659 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.659 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.659 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.659 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.659 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:58.659 { 00:19:58.659 "cntlid": 59, 00:19:58.659 "qid": 0, 00:19:58.659 "state": "enabled", 00:19:58.659 "thread": "nvmf_tgt_poll_group_000", 00:19:58.659 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:58.659 "listen_address": { 00:19:58.659 "trtype": "TCP", 00:19:58.659 "adrfam": "IPv4", 00:19:58.659 "traddr": "10.0.0.2", 00:19:58.659 "trsvcid": "4420" 00:19:58.659 }, 00:19:58.659 "peer_address": { 00:19:58.659 "trtype": "TCP", 00:19:58.659 "adrfam": "IPv4", 00:19:58.659 "traddr": "10.0.0.1", 00:19:58.659 "trsvcid": "54188" 00:19:58.659 }, 00:19:58.659 "auth": { 00:19:58.659 "state": "completed", 00:19:58.659 "digest": "sha384", 00:19:58.659 "dhgroup": "ffdhe2048" 00:19:58.659 } 00:19:58.659 } 00:19:58.659 ]' 00:19:58.659 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:58.659 16:27:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:58.659 16:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:58.659 16:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:58.660 16:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:58.660 16:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.660 16:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.660 16:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.917 16:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTQxNGM4YzkyOTYyOGQxOTU5YjdkNDM1MzA5MDcxOWPUhFu+: --dhchap-ctrl-secret DHHC-1:02:Zjk2YjVjZTI4Y2E4OTcyZGJhNjZkNWEzZGIxMzBmMzVkNTQ3ZGRmNDQ2NzA3ZWQzUSvQ0g==: 00:19:58.917 16:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTQxNGM4YzkyOTYyOGQxOTU5YjdkNDM1MzA5MDcxOWPUhFu+: --dhchap-ctrl-secret DHHC-1:02:Zjk2YjVjZTI4Y2E4OTcyZGJhNjZkNWEzZGIxMzBmMzVkNTQ3ZGRmNDQ2NzA3ZWQzUSvQ0g==: 00:19:59.851 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.851 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:59.851 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.851 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.851 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.851 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:59.851 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:59.851 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:00.109 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:00.109 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:00.109 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:00.109 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:00.109 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:00.109 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.109 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:00.109 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.109 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.109 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.109 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.109 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.109 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.676 00:20:00.676 16:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:00.676 16:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:00.676 16:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.934 16:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.934 16:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.934 16:28:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.934 16:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.934 16:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.934 16:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.934 { 00:20:00.934 "cntlid": 61, 00:20:00.934 "qid": 0, 00:20:00.934 "state": "enabled", 00:20:00.934 "thread": "nvmf_tgt_poll_group_000", 00:20:00.934 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:00.934 "listen_address": { 00:20:00.934 "trtype": "TCP", 00:20:00.934 "adrfam": "IPv4", 00:20:00.934 "traddr": "10.0.0.2", 00:20:00.934 "trsvcid": "4420" 00:20:00.934 }, 00:20:00.934 "peer_address": { 00:20:00.934 "trtype": "TCP", 00:20:00.934 "adrfam": "IPv4", 00:20:00.934 "traddr": "10.0.0.1", 00:20:00.934 "trsvcid": "54224" 00:20:00.934 }, 00:20:00.934 "auth": { 00:20:00.934 "state": "completed", 00:20:00.934 "digest": "sha384", 00:20:00.934 "dhgroup": "ffdhe2048" 00:20:00.934 } 00:20:00.934 } 00:20:00.934 ]' 00:20:00.934 16:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:00.934 16:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:00.934 16:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:00.934 16:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:00.934 16:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.934 16:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.934 16:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.934 16:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.193 16:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWQ1MzZiMzE5M2I5NWQ3NmM2NjFhYWVlYjJkMTRmNmRiNzk5NDM2MzA5ZmQ3YTNk3WThYw==: --dhchap-ctrl-secret DHHC-1:01:N2EzOGM3ODk5NDQzMDVmY2U0YTIxNTBkNjIyZTYwMza52AGC: 00:20:01.193 16:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ZWQ1MzZiMzE5M2I5NWQ3NmM2NjFhYWVlYjJkMTRmNmRiNzk5NDM2MzA5ZmQ3YTNk3WThYw==: --dhchap-ctrl-secret DHHC-1:01:N2EzOGM3ODk5NDQzMDVmY2U0YTIxNTBkNjIyZTYwMza52AGC: 00:20:02.126 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.126 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:02.126 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.126 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.126 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.126 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:02.126 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:02.126 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:02.692 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:02.692 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:02.692 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:02.692 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:02.692 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:02.692 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.692 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:02.692 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.692 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.692 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.692 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:02.692 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:02.692 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:02.950 00:20:02.950 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.950 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.950 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.209 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.209 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.209 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.209 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.209 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.209 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:03.209 { 00:20:03.209 "cntlid": 63, 00:20:03.209 "qid": 0, 00:20:03.209 "state": "enabled", 00:20:03.209 "thread": "nvmf_tgt_poll_group_000", 00:20:03.209 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:03.209 "listen_address": { 00:20:03.209 "trtype": "TCP", 00:20:03.209 "adrfam": 
"IPv4", 00:20:03.209 "traddr": "10.0.0.2", 00:20:03.209 "trsvcid": "4420" 00:20:03.209 }, 00:20:03.209 "peer_address": { 00:20:03.209 "trtype": "TCP", 00:20:03.209 "adrfam": "IPv4", 00:20:03.209 "traddr": "10.0.0.1", 00:20:03.209 "trsvcid": "54248" 00:20:03.209 }, 00:20:03.209 "auth": { 00:20:03.209 "state": "completed", 00:20:03.209 "digest": "sha384", 00:20:03.209 "dhgroup": "ffdhe2048" 00:20:03.209 } 00:20:03.209 } 00:20:03.209 ]' 00:20:03.209 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:03.209 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:03.209 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:03.209 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:03.209 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:03.209 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.209 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.209 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.468 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmQzMmQyMzZlNGExOTRhZGYxNmM1YWY1N2Q3OWFiOTAxODUyYmM3YzFjMzUyMjQwM2VkNjYyYTNiYzgwYzhkNwO/ryk=: 00:20:03.468 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YmQzMmQyMzZlNGExOTRhZGYxNmM1YWY1N2Q3OWFiOTAxODUyYmM3YzFjMzUyMjQwM2VkNjYyYTNiYzgwYzhkNwO/ryk=: 00:20:04.458 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.458 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:04.458 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.459 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.459 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.459 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:04.459 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:04.459 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:04.459 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:05.026 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:05.026 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:05.026 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:05.026 
16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:05.026 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:05.026 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.026 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.026 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.026 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.026 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.026 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.026 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.026 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.285 00:20:05.285 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:05.285 16:28:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:05.285 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.543 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.543 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.543 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.543 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.543 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.543 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:05.543 { 00:20:05.543 "cntlid": 65, 00:20:05.543 "qid": 0, 00:20:05.543 "state": "enabled", 00:20:05.543 "thread": "nvmf_tgt_poll_group_000", 00:20:05.543 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:05.543 "listen_address": { 00:20:05.543 "trtype": "TCP", 00:20:05.543 "adrfam": "IPv4", 00:20:05.543 "traddr": "10.0.0.2", 00:20:05.543 "trsvcid": "4420" 00:20:05.543 }, 00:20:05.543 "peer_address": { 00:20:05.543 "trtype": "TCP", 00:20:05.543 "adrfam": "IPv4", 00:20:05.543 "traddr": "10.0.0.1", 00:20:05.543 "trsvcid": "54290" 00:20:05.543 }, 00:20:05.543 "auth": { 00:20:05.543 "state": "completed", 00:20:05.543 "digest": "sha384", 00:20:05.543 "dhgroup": "ffdhe3072" 00:20:05.543 } 00:20:05.544 } 00:20:05.544 ]' 00:20:05.544 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:05.544 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:20:05.544 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:05.544 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:05.544 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:05.544 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.544 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.544 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.802 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODE2MTc5YjJiODk0MThmYTVhMTM0ZmQyMTZiZDQ3YmFmNWJlMzBjMGI1MmEwMDQ1QODthw==: --dhchap-ctrl-secret DHHC-1:03:NWExMzdmODlhYzdhYjY2ZDcxY2MyMmVlYTY1M2Q4MDQ3MDM2MDFiMDMyODg3Y2ExZDRjNzQ1OTRkMzYyMGU4MD8roSo=: 00:20:05.802 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ODE2MTc5YjJiODk0MThmYTVhMTM0ZmQyMTZiZDQ3YmFmNWJlMzBjMGI1MmEwMDQ1QODthw==: --dhchap-ctrl-secret DHHC-1:03:NWExMzdmODlhYzdhYjY2ZDcxY2MyMmVlYTY1M2Q4MDQ3MDM2MDFiMDMyODg3Y2ExZDRjNzQ1OTRkMzYyMGU4MD8roSo=: 00:20:07.176 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.176 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:07.176 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.176 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.176 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.176 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:07.176 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:07.176 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:07.176 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:07.176 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.176 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:07.176 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:07.176 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:07.176 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.176 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:20:07.176 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.176 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.176 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.176 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.176 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.176 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.743 00:20:07.743 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:07.743 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.743 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:07.743 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.743 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.743 16:28:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.743 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.743 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.743 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:07.743 { 00:20:07.743 "cntlid": 67, 00:20:07.743 "qid": 0, 00:20:07.743 "state": "enabled", 00:20:07.743 "thread": "nvmf_tgt_poll_group_000", 00:20:07.743 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:07.743 "listen_address": { 00:20:07.743 "trtype": "TCP", 00:20:07.743 "adrfam": "IPv4", 00:20:07.743 "traddr": "10.0.0.2", 00:20:07.743 "trsvcid": "4420" 00:20:07.743 }, 00:20:07.743 "peer_address": { 00:20:07.743 "trtype": "TCP", 00:20:07.743 "adrfam": "IPv4", 00:20:07.743 "traddr": "10.0.0.1", 00:20:07.743 "trsvcid": "44156" 00:20:07.743 }, 00:20:07.743 "auth": { 00:20:07.743 "state": "completed", 00:20:07.743 "digest": "sha384", 00:20:07.743 "dhgroup": "ffdhe3072" 00:20:07.743 } 00:20:07.743 } 00:20:07.744 ]' 00:20:07.744 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:08.002 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:08.002 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:08.002 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:08.002 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:08.002 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.002 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.002 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.260 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTQxNGM4YzkyOTYyOGQxOTU5YjdkNDM1MzA5MDcxOWPUhFu+: --dhchap-ctrl-secret DHHC-1:02:Zjk2YjVjZTI4Y2E4OTcyZGJhNjZkNWEzZGIxMzBmMzVkNTQ3ZGRmNDQ2NzA3ZWQzUSvQ0g==: 00:20:08.260 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTQxNGM4YzkyOTYyOGQxOTU5YjdkNDM1MzA5MDcxOWPUhFu+: --dhchap-ctrl-secret DHHC-1:02:Zjk2YjVjZTI4Y2E4OTcyZGJhNjZkNWEzZGIxMzBmMzVkNTQ3ZGRmNDQ2NzA3ZWQzUSvQ0g==: 00:20:09.193 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.193 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.193 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:09.193 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.193 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.193 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.193 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:09.193 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:09.193 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:09.451 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:09.451 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:09.451 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:09.451 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:09.451 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:09.451 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.451 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.451 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.451 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.451 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.451 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.451 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.451 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.018 00:20:10.018 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:10.018 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:10.018 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.276 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.276 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.276 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.276 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.276 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.276 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:10.276 { 00:20:10.276 "cntlid": 69, 00:20:10.276 "qid": 0, 00:20:10.276 "state": "enabled", 00:20:10.276 "thread": "nvmf_tgt_poll_group_000", 00:20:10.276 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:10.276 
"listen_address": { 00:20:10.276 "trtype": "TCP", 00:20:10.276 "adrfam": "IPv4", 00:20:10.276 "traddr": "10.0.0.2", 00:20:10.276 "trsvcid": "4420" 00:20:10.276 }, 00:20:10.276 "peer_address": { 00:20:10.276 "trtype": "TCP", 00:20:10.276 "adrfam": "IPv4", 00:20:10.276 "traddr": "10.0.0.1", 00:20:10.276 "trsvcid": "44194" 00:20:10.276 }, 00:20:10.276 "auth": { 00:20:10.276 "state": "completed", 00:20:10.276 "digest": "sha384", 00:20:10.276 "dhgroup": "ffdhe3072" 00:20:10.276 } 00:20:10.276 } 00:20:10.276 ]' 00:20:10.276 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:10.276 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:10.276 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:10.276 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:10.276 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:10.276 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.276 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.276 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.534 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWQ1MzZiMzE5M2I5NWQ3NmM2NjFhYWVlYjJkMTRmNmRiNzk5NDM2MzA5ZmQ3YTNk3WThYw==: --dhchap-ctrl-secret DHHC-1:01:N2EzOGM3ODk5NDQzMDVmY2U0YTIxNTBkNjIyZTYwMza52AGC: 00:20:10.534 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ZWQ1MzZiMzE5M2I5NWQ3NmM2NjFhYWVlYjJkMTRmNmRiNzk5NDM2MzA5ZmQ3YTNk3WThYw==: --dhchap-ctrl-secret DHHC-1:01:N2EzOGM3ODk5NDQzMDVmY2U0YTIxNTBkNjIyZTYwMza52AGC: 00:20:11.469 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.469 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.469 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:11.469 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.469 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.469 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.469 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:11.469 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:11.469 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:11.728 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:11.728 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:11.728 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:20:11.728 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:11.728 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:11.728 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.728 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:11.728 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.728 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.728 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.728 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:11.728 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:11.728 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:12.294 00:20:12.294 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.294 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:20:12.294 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.553 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.553 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.553 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.553 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.553 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.553 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:12.553 { 00:20:12.553 "cntlid": 71, 00:20:12.553 "qid": 0, 00:20:12.553 "state": "enabled", 00:20:12.553 "thread": "nvmf_tgt_poll_group_000", 00:20:12.553 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:12.553 "listen_address": { 00:20:12.553 "trtype": "TCP", 00:20:12.553 "adrfam": "IPv4", 00:20:12.553 "traddr": "10.0.0.2", 00:20:12.553 "trsvcid": "4420" 00:20:12.553 }, 00:20:12.553 "peer_address": { 00:20:12.553 "trtype": "TCP", 00:20:12.553 "adrfam": "IPv4", 00:20:12.553 "traddr": "10.0.0.1", 00:20:12.553 "trsvcid": "44230" 00:20:12.553 }, 00:20:12.553 "auth": { 00:20:12.553 "state": "completed", 00:20:12.553 "digest": "sha384", 00:20:12.553 "dhgroup": "ffdhe3072" 00:20:12.553 } 00:20:12.553 } 00:20:12.553 ]' 00:20:12.553 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:12.553 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:12.553 16:28:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:12.553 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:12.553 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:12.553 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.553 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.553 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.811 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmQzMmQyMzZlNGExOTRhZGYxNmM1YWY1N2Q3OWFiOTAxODUyYmM3YzFjMzUyMjQwM2VkNjYyYTNiYzgwYzhkNwO/ryk=: 00:20:12.811 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YmQzMmQyMzZlNGExOTRhZGYxNmM1YWY1N2Q3OWFiOTAxODUyYmM3YzFjMzUyMjQwM2VkNjYyYTNiYzgwYzhkNwO/ryk=: 00:20:14.182 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.182 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:14.182 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:20:14.182 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.182 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.182 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:14.182 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:14.182 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:14.182 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:14.182 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:14.182 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:14.182 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:14.182 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:14.182 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:14.182 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.182 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.182 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:14.182 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.182 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.182 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.182 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.182 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.747 00:20:14.747 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:14.747 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:14.747 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.005 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.005 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.005 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.005 16:28:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.005 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.005 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:15.005 { 00:20:15.005 "cntlid": 73, 00:20:15.005 "qid": 0, 00:20:15.005 "state": "enabled", 00:20:15.005 "thread": "nvmf_tgt_poll_group_000", 00:20:15.005 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:15.005 "listen_address": { 00:20:15.005 "trtype": "TCP", 00:20:15.005 "adrfam": "IPv4", 00:20:15.005 "traddr": "10.0.0.2", 00:20:15.005 "trsvcid": "4420" 00:20:15.005 }, 00:20:15.005 "peer_address": { 00:20:15.005 "trtype": "TCP", 00:20:15.005 "adrfam": "IPv4", 00:20:15.005 "traddr": "10.0.0.1", 00:20:15.005 "trsvcid": "44270" 00:20:15.005 }, 00:20:15.005 "auth": { 00:20:15.005 "state": "completed", 00:20:15.005 "digest": "sha384", 00:20:15.005 "dhgroup": "ffdhe4096" 00:20:15.005 } 00:20:15.005 } 00:20:15.005 ]' 00:20:15.005 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:15.005 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:15.005 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:15.005 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:15.005 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:15.005 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.005 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.005 16:28:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.263 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODE2MTc5YjJiODk0MThmYTVhMTM0ZmQyMTZiZDQ3YmFmNWJlMzBjMGI1MmEwMDQ1QODthw==: --dhchap-ctrl-secret DHHC-1:03:NWExMzdmODlhYzdhYjY2ZDcxY2MyMmVlYTY1M2Q4MDQ3MDM2MDFiMDMyODg3Y2ExZDRjNzQ1OTRkMzYyMGU4MD8roSo=: 00:20:15.263 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ODE2MTc5YjJiODk0MThmYTVhMTM0ZmQyMTZiZDQ3YmFmNWJlMzBjMGI1MmEwMDQ1QODthw==: --dhchap-ctrl-secret DHHC-1:03:NWExMzdmODlhYzdhYjY2ZDcxY2MyMmVlYTY1M2Q4MDQ3MDM2MDFiMDMyODg3Y2ExZDRjNzQ1OTRkMzYyMGU4MD8roSo=: 00:20:16.195 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.195 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.195 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:16.195 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.195 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.195 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.195 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:16.195 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:16.195 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:16.760 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:20:16.760 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:16.760 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:16.760 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:16.760 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:16.760 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.760 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.760 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.760 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.760 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.760 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.760 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.760 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.017 00:20:17.017 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:17.017 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:17.017 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.300 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.300 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.300 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.300 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.300 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.300 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:17.300 { 00:20:17.300 "cntlid": 75, 00:20:17.300 "qid": 0, 00:20:17.300 "state": "enabled", 00:20:17.300 "thread": "nvmf_tgt_poll_group_000", 00:20:17.300 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:17.300 
"listen_address": { 00:20:17.300 "trtype": "TCP", 00:20:17.300 "adrfam": "IPv4", 00:20:17.300 "traddr": "10.0.0.2", 00:20:17.300 "trsvcid": "4420" 00:20:17.300 }, 00:20:17.300 "peer_address": { 00:20:17.300 "trtype": "TCP", 00:20:17.300 "adrfam": "IPv4", 00:20:17.300 "traddr": "10.0.0.1", 00:20:17.300 "trsvcid": "36606" 00:20:17.300 }, 00:20:17.300 "auth": { 00:20:17.300 "state": "completed", 00:20:17.300 "digest": "sha384", 00:20:17.300 "dhgroup": "ffdhe4096" 00:20:17.300 } 00:20:17.300 } 00:20:17.300 ]' 00:20:17.300 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:17.300 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:17.300 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:17.300 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:17.300 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:17.558 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.559 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.559 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.817 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTQxNGM4YzkyOTYyOGQxOTU5YjdkNDM1MzA5MDcxOWPUhFu+: --dhchap-ctrl-secret DHHC-1:02:Zjk2YjVjZTI4Y2E4OTcyZGJhNjZkNWEzZGIxMzBmMzVkNTQ3ZGRmNDQ2NzA3ZWQzUSvQ0g==: 00:20:17.817 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTQxNGM4YzkyOTYyOGQxOTU5YjdkNDM1MzA5MDcxOWPUhFu+: --dhchap-ctrl-secret DHHC-1:02:Zjk2YjVjZTI4Y2E4OTcyZGJhNjZkNWEzZGIxMzBmMzVkNTQ3ZGRmNDQ2NzA3ZWQzUSvQ0g==: 00:20:18.749 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.749 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.749 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:18.749 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.749 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.749 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.749 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:18.749 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:18.749 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:19.007 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:20:19.007 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:19.007 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:20:19.007 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:19.007 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:19.007 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.007 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.007 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.007 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.007 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.007 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.007 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.007 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.573 00:20:19.573 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:20:19.573 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.573 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:19.831 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.831 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.831 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.831 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.831 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.831 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.831 { 00:20:19.831 "cntlid": 77, 00:20:19.831 "qid": 0, 00:20:19.831 "state": "enabled", 00:20:19.831 "thread": "nvmf_tgt_poll_group_000", 00:20:19.831 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:19.831 "listen_address": { 00:20:19.831 "trtype": "TCP", 00:20:19.831 "adrfam": "IPv4", 00:20:19.831 "traddr": "10.0.0.2", 00:20:19.831 "trsvcid": "4420" 00:20:19.831 }, 00:20:19.831 "peer_address": { 00:20:19.831 "trtype": "TCP", 00:20:19.831 "adrfam": "IPv4", 00:20:19.831 "traddr": "10.0.0.1", 00:20:19.831 "trsvcid": "36646" 00:20:19.831 }, 00:20:19.831 "auth": { 00:20:19.831 "state": "completed", 00:20:19.831 "digest": "sha384", 00:20:19.831 "dhgroup": "ffdhe4096" 00:20:19.831 } 00:20:19.831 } 00:20:19.831 ]' 00:20:19.831 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:19.831 16:28:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:19.831 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:19.831 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:19.831 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:19.831 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.831 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.831 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.089 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWQ1MzZiMzE5M2I5NWQ3NmM2NjFhYWVlYjJkMTRmNmRiNzk5NDM2MzA5ZmQ3YTNk3WThYw==: --dhchap-ctrl-secret DHHC-1:01:N2EzOGM3ODk5NDQzMDVmY2U0YTIxNTBkNjIyZTYwMza52AGC: 00:20:20.089 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ZWQ1MzZiMzE5M2I5NWQ3NmM2NjFhYWVlYjJkMTRmNmRiNzk5NDM2MzA5ZmQ3YTNk3WThYw==: --dhchap-ctrl-secret DHHC-1:01:N2EzOGM3ODk5NDQzMDVmY2U0YTIxNTBkNjIyZTYwMza52AGC: 00:20:21.023 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.023 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:21.023 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.023 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.023 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.023 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:21.023 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:21.023 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:21.281 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:21.281 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:21.281 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:21.281 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:21.281 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:21.281 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.281 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:21.281 16:28:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.281 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.281 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.281 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:21.281 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:21.281 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:21.847 00:20:21.847 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.847 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.847 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.105 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.105 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.105 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.105 16:28:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.105 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.105 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:22.105 { 00:20:22.105 "cntlid": 79, 00:20:22.105 "qid": 0, 00:20:22.105 "state": "enabled", 00:20:22.105 "thread": "nvmf_tgt_poll_group_000", 00:20:22.105 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:22.105 "listen_address": { 00:20:22.106 "trtype": "TCP", 00:20:22.106 "adrfam": "IPv4", 00:20:22.106 "traddr": "10.0.0.2", 00:20:22.106 "trsvcid": "4420" 00:20:22.106 }, 00:20:22.106 "peer_address": { 00:20:22.106 "trtype": "TCP", 00:20:22.106 "adrfam": "IPv4", 00:20:22.106 "traddr": "10.0.0.1", 00:20:22.106 "trsvcid": "36668" 00:20:22.106 }, 00:20:22.106 "auth": { 00:20:22.106 "state": "completed", 00:20:22.106 "digest": "sha384", 00:20:22.106 "dhgroup": "ffdhe4096" 00:20:22.106 } 00:20:22.106 } 00:20:22.106 ]' 00:20:22.106 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:22.106 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:22.106 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:22.106 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:22.106 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:22.106 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.106 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.106 16:28:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.364 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmQzMmQyMzZlNGExOTRhZGYxNmM1YWY1N2Q3OWFiOTAxODUyYmM3YzFjMzUyMjQwM2VkNjYyYTNiYzgwYzhkNwO/ryk=: 00:20:22.364 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YmQzMmQyMzZlNGExOTRhZGYxNmM1YWY1N2Q3OWFiOTAxODUyYmM3YzFjMzUyMjQwM2VkNjYyYTNiYzgwYzhkNwO/ryk=: 00:20:23.740 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.740 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.740 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:23.740 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.740 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.740 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.740 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:23.740 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:23.740 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:20:23.740 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:23.740 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:23.740 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:23.740 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:23.740 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:23.740 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:23.740 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.740 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.740 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.740 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.740 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.740 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.740 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.740 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.307 00:20:24.307 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.307 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.307 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.565 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.565 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.565 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.565 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.565 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.565 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.565 { 00:20:24.565 "cntlid": 81, 00:20:24.565 "qid": 0, 00:20:24.565 "state": "enabled", 00:20:24.565 "thread": "nvmf_tgt_poll_group_000", 00:20:24.565 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:24.565 "listen_address": { 
00:20:24.565 "trtype": "TCP", 00:20:24.565 "adrfam": "IPv4", 00:20:24.565 "traddr": "10.0.0.2", 00:20:24.565 "trsvcid": "4420" 00:20:24.565 }, 00:20:24.565 "peer_address": { 00:20:24.565 "trtype": "TCP", 00:20:24.565 "adrfam": "IPv4", 00:20:24.565 "traddr": "10.0.0.1", 00:20:24.565 "trsvcid": "36700" 00:20:24.565 }, 00:20:24.565 "auth": { 00:20:24.565 "state": "completed", 00:20:24.565 "digest": "sha384", 00:20:24.565 "dhgroup": "ffdhe6144" 00:20:24.565 } 00:20:24.565 } 00:20:24.565 ]' 00:20:24.565 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.823 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:24.823 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.823 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:24.823 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.823 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.823 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.823 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.081 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODE2MTc5YjJiODk0MThmYTVhMTM0ZmQyMTZiZDQ3YmFmNWJlMzBjMGI1MmEwMDQ1QODthw==: --dhchap-ctrl-secret DHHC-1:03:NWExMzdmODlhYzdhYjY2ZDcxY2MyMmVlYTY1M2Q4MDQ3MDM2MDFiMDMyODg3Y2ExZDRjNzQ1OTRkMzYyMGU4MD8roSo=: 00:20:25.081 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ODE2MTc5YjJiODk0MThmYTVhMTM0ZmQyMTZiZDQ3YmFmNWJlMzBjMGI1MmEwMDQ1QODthw==: --dhchap-ctrl-secret DHHC-1:03:NWExMzdmODlhYzdhYjY2ZDcxY2MyMmVlYTY1M2Q4MDQ3MDM2MDFiMDMyODg3Y2ExZDRjNzQ1OTRkMzYyMGU4MD8roSo=: 00:20:26.016 16:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.016 16:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:26.016 16:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.016 16:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.016 16:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.016 16:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:26.016 16:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:26.016 16:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:26.274 16:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:26.274 16:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:20:26.274 16:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:26.274 16:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:26.274 16:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:26.274 16:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.274 16:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.274 16:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.274 16:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.274 16:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.274 16:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.274 16:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.274 16:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.840 00:20:27.099 16:28:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:27.099 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:27.099 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.361 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.361 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.361 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.361 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.361 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.361 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:27.361 { 00:20:27.361 "cntlid": 83, 00:20:27.361 "qid": 0, 00:20:27.361 "state": "enabled", 00:20:27.361 "thread": "nvmf_tgt_poll_group_000", 00:20:27.361 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:27.361 "listen_address": { 00:20:27.361 "trtype": "TCP", 00:20:27.361 "adrfam": "IPv4", 00:20:27.361 "traddr": "10.0.0.2", 00:20:27.361 "trsvcid": "4420" 00:20:27.361 }, 00:20:27.361 "peer_address": { 00:20:27.361 "trtype": "TCP", 00:20:27.361 "adrfam": "IPv4", 00:20:27.361 "traddr": "10.0.0.1", 00:20:27.361 "trsvcid": "47140" 00:20:27.361 }, 00:20:27.361 "auth": { 00:20:27.361 "state": "completed", 00:20:27.361 "digest": "sha384", 00:20:27.361 "dhgroup": "ffdhe6144" 00:20:27.361 } 00:20:27.361 } 00:20:27.361 ]' 00:20:27.361 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:20:27.361 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:27.361 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:27.361 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:27.361 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:27.361 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.361 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.361 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.620 16:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTQxNGM4YzkyOTYyOGQxOTU5YjdkNDM1MzA5MDcxOWPUhFu+: --dhchap-ctrl-secret DHHC-1:02:Zjk2YjVjZTI4Y2E4OTcyZGJhNjZkNWEzZGIxMzBmMzVkNTQ3ZGRmNDQ2NzA3ZWQzUSvQ0g==: 00:20:27.620 16:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTQxNGM4YzkyOTYyOGQxOTU5YjdkNDM1MzA5MDcxOWPUhFu+: --dhchap-ctrl-secret DHHC-1:02:Zjk2YjVjZTI4Y2E4OTcyZGJhNjZkNWEzZGIxMzBmMzVkNTQ3ZGRmNDQ2NzA3ZWQzUSvQ0g==: 00:20:28.554 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.554 16:28:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:28.554 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.554 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.554 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.554 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.554 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:28.554 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:28.812 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:20:28.812 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:28.812 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:28.812 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:28.812 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:28.812 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.812 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.812 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.812 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.812 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.812 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.812 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.812 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.377 00:20:29.635 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:29.635 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.635 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:29.894 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.894 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.894 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.894 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.894 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.894 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:29.894 { 00:20:29.894 "cntlid": 85, 00:20:29.894 "qid": 0, 00:20:29.894 "state": "enabled", 00:20:29.894 "thread": "nvmf_tgt_poll_group_000", 00:20:29.894 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:29.894 "listen_address": { 00:20:29.894 "trtype": "TCP", 00:20:29.894 "adrfam": "IPv4", 00:20:29.894 "traddr": "10.0.0.2", 00:20:29.894 "trsvcid": "4420" 00:20:29.894 }, 00:20:29.894 "peer_address": { 00:20:29.894 "trtype": "TCP", 00:20:29.894 "adrfam": "IPv4", 00:20:29.894 "traddr": "10.0.0.1", 00:20:29.894 "trsvcid": "47160" 00:20:29.894 }, 00:20:29.894 "auth": { 00:20:29.894 "state": "completed", 00:20:29.894 "digest": "sha384", 00:20:29.894 "dhgroup": "ffdhe6144" 00:20:29.894 } 00:20:29.894 } 00:20:29.894 ]' 00:20:29.894 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:29.894 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:29.894 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:29.894 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:29.894 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.894 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:20:29.894 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.894 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.152 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWQ1MzZiMzE5M2I5NWQ3NmM2NjFhYWVlYjJkMTRmNmRiNzk5NDM2MzA5ZmQ3YTNk3WThYw==: --dhchap-ctrl-secret DHHC-1:01:N2EzOGM3ODk5NDQzMDVmY2U0YTIxNTBkNjIyZTYwMza52AGC: 00:20:30.152 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ZWQ1MzZiMzE5M2I5NWQ3NmM2NjFhYWVlYjJkMTRmNmRiNzk5NDM2MzA5ZmQ3YTNk3WThYw==: --dhchap-ctrl-secret DHHC-1:01:N2EzOGM3ODk5NDQzMDVmY2U0YTIxNTBkNjIyZTYwMza52AGC: 00:20:31.086 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.086 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:31.086 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.086 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.086 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.086 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:20:31.086 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:31.086 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:31.652 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:20:31.652 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:31.652 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:31.652 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:31.652 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:31.652 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.652 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:31.652 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.652 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.652 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.652 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:31.652 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:31.652 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:32.218 00:20:32.218 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.218 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.218 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.218 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.218 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.218 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.218 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.218 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.218 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.218 { 00:20:32.218 "cntlid": 87, 00:20:32.218 "qid": 0, 00:20:32.218 "state": "enabled", 00:20:32.218 "thread": "nvmf_tgt_poll_group_000", 00:20:32.218 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:32.218 "listen_address": { 00:20:32.218 "trtype": 
"TCP", 00:20:32.218 "adrfam": "IPv4", 00:20:32.218 "traddr": "10.0.0.2", 00:20:32.218 "trsvcid": "4420" 00:20:32.218 }, 00:20:32.218 "peer_address": { 00:20:32.218 "trtype": "TCP", 00:20:32.218 "adrfam": "IPv4", 00:20:32.218 "traddr": "10.0.0.1", 00:20:32.218 "trsvcid": "47198" 00:20:32.218 }, 00:20:32.218 "auth": { 00:20:32.218 "state": "completed", 00:20:32.218 "digest": "sha384", 00:20:32.218 "dhgroup": "ffdhe6144" 00:20:32.218 } 00:20:32.218 } 00:20:32.218 ]' 00:20:32.218 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.476 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:32.476 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.476 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:32.476 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.476 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.476 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.476 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.734 16:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmQzMmQyMzZlNGExOTRhZGYxNmM1YWY1N2Q3OWFiOTAxODUyYmM3YzFjMzUyMjQwM2VkNjYyYTNiYzgwYzhkNwO/ryk=: 00:20:32.734 16:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YmQzMmQyMzZlNGExOTRhZGYxNmM1YWY1N2Q3OWFiOTAxODUyYmM3YzFjMzUyMjQwM2VkNjYyYTNiYzgwYzhkNwO/ryk=: 00:20:33.668 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.668 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.668 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:33.668 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.668 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.668 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.668 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:33.668 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.668 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:33.669 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:33.927 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:20:33.927 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.927 16:28:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:33.927 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:33.927 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:33.927 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.927 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.927 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.927 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.927 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.927 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.927 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.927 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.912 00:20:34.912 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.912 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:34.912 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.191 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.191 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.191 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.191 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.191 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.191 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:35.191 { 00:20:35.191 "cntlid": 89, 00:20:35.191 "qid": 0, 00:20:35.191 "state": "enabled", 00:20:35.191 "thread": "nvmf_tgt_poll_group_000", 00:20:35.191 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:35.191 "listen_address": { 00:20:35.191 "trtype": "TCP", 00:20:35.191 "adrfam": "IPv4", 00:20:35.191 "traddr": "10.0.0.2", 00:20:35.191 "trsvcid": "4420" 00:20:35.191 }, 00:20:35.191 "peer_address": { 00:20:35.191 "trtype": "TCP", 00:20:35.191 "adrfam": "IPv4", 00:20:35.191 "traddr": "10.0.0.1", 00:20:35.191 "trsvcid": "47234" 00:20:35.191 }, 00:20:35.191 "auth": { 00:20:35.191 "state": "completed", 00:20:35.191 "digest": "sha384", 00:20:35.191 "dhgroup": "ffdhe8192" 00:20:35.191 } 00:20:35.191 } 00:20:35.191 ]' 00:20:35.191 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:35.191 16:28:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:35.191 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:35.191 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:35.191 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:35.448 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.448 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.448 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.706 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODE2MTc5YjJiODk0MThmYTVhMTM0ZmQyMTZiZDQ3YmFmNWJlMzBjMGI1MmEwMDQ1QODthw==: --dhchap-ctrl-secret DHHC-1:03:NWExMzdmODlhYzdhYjY2ZDcxY2MyMmVlYTY1M2Q4MDQ3MDM2MDFiMDMyODg3Y2ExZDRjNzQ1OTRkMzYyMGU4MD8roSo=: 00:20:35.706 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ODE2MTc5YjJiODk0MThmYTVhMTM0ZmQyMTZiZDQ3YmFmNWJlMzBjMGI1MmEwMDQ1QODthw==: --dhchap-ctrl-secret DHHC-1:03:NWExMzdmODlhYzdhYjY2ZDcxY2MyMmVlYTY1M2Q4MDQ3MDM2MDFiMDMyODg3Y2ExZDRjNzQ1OTRkMzYyMGU4MD8roSo=: 00:20:36.639 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:20:36.639 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:36.639 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.639 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.639 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.639 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.639 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:36.639 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:36.895 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:20:36.895 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:36.895 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:36.895 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:36.895 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:36.895 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.895 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.895 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.895 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.895 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.895 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.895 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.895 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.827 00:20:37.827 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:37.827 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:37.827 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.085 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.085 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.085 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.085 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.085 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.085 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:38.085 { 00:20:38.085 "cntlid": 91, 00:20:38.085 "qid": 0, 00:20:38.085 "state": "enabled", 00:20:38.085 "thread": "nvmf_tgt_poll_group_000", 00:20:38.085 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:38.085 "listen_address": { 00:20:38.085 "trtype": "TCP", 00:20:38.085 "adrfam": "IPv4", 00:20:38.085 "traddr": "10.0.0.2", 00:20:38.085 "trsvcid": "4420" 00:20:38.085 }, 00:20:38.085 "peer_address": { 00:20:38.085 "trtype": "TCP", 00:20:38.085 "adrfam": "IPv4", 00:20:38.085 "traddr": "10.0.0.1", 00:20:38.085 "trsvcid": "33982" 00:20:38.085 }, 00:20:38.085 "auth": { 00:20:38.085 "state": "completed", 00:20:38.085 "digest": "sha384", 00:20:38.085 "dhgroup": "ffdhe8192" 00:20:38.085 } 00:20:38.085 } 00:20:38.085 ]' 00:20:38.085 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:38.085 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:38.085 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:38.085 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:38.085 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:38.085 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:20:38.085 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.085 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.343 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTQxNGM4YzkyOTYyOGQxOTU5YjdkNDM1MzA5MDcxOWPUhFu+: --dhchap-ctrl-secret DHHC-1:02:Zjk2YjVjZTI4Y2E4OTcyZGJhNjZkNWEzZGIxMzBmMzVkNTQ3ZGRmNDQ2NzA3ZWQzUSvQ0g==: 00:20:38.343 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTQxNGM4YzkyOTYyOGQxOTU5YjdkNDM1MzA5MDcxOWPUhFu+: --dhchap-ctrl-secret DHHC-1:02:Zjk2YjVjZTI4Y2E4OTcyZGJhNjZkNWEzZGIxMzBmMzVkNTQ3ZGRmNDQ2NzA3ZWQzUSvQ0g==: 00:20:39.716 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.716 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:39.716 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.716 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.716 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.716 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:20:39.716 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:39.716 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:39.716 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:20:39.716 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:39.716 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:39.716 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:39.716 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:39.716 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.716 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.716 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.716 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.716 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.716 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.716 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.716 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.648 00:20:40.648 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.648 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.648 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.906 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.906 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.906 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.906 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.906 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.906 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:40.906 { 00:20:40.906 "cntlid": 93, 00:20:40.906 "qid": 0, 00:20:40.906 "state": "enabled", 00:20:40.906 "thread": "nvmf_tgt_poll_group_000", 00:20:40.906 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:40.906 "listen_address": { 00:20:40.906 "trtype": "TCP", 00:20:40.906 "adrfam": "IPv4", 00:20:40.906 "traddr": "10.0.0.2", 00:20:40.906 "trsvcid": "4420" 00:20:40.906 }, 00:20:40.906 "peer_address": { 00:20:40.906 "trtype": "TCP", 00:20:40.906 "adrfam": "IPv4", 00:20:40.906 "traddr": "10.0.0.1", 00:20:40.906 "trsvcid": "34016" 00:20:40.906 }, 00:20:40.906 "auth": { 00:20:40.906 "state": "completed", 00:20:40.906 "digest": "sha384", 00:20:40.906 "dhgroup": "ffdhe8192" 00:20:40.906 } 00:20:40.906 } 00:20:40.906 ]' 00:20:40.906 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:40.906 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:40.906 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:41.164 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:41.164 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:41.164 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.164 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.164 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.422 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWQ1MzZiMzE5M2I5NWQ3NmM2NjFhYWVlYjJkMTRmNmRiNzk5NDM2MzA5ZmQ3YTNk3WThYw==: --dhchap-ctrl-secret DHHC-1:01:N2EzOGM3ODk5NDQzMDVmY2U0YTIxNTBkNjIyZTYwMza52AGC: 00:20:41.422 16:28:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ZWQ1MzZiMzE5M2I5NWQ3NmM2NjFhYWVlYjJkMTRmNmRiNzk5NDM2MzA5ZmQ3YTNk3WThYw==: --dhchap-ctrl-secret DHHC-1:01:N2EzOGM3ODk5NDQzMDVmY2U0YTIxNTBkNjIyZTYwMza52AGC: 00:20:42.355 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.355 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:42.355 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.355 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.355 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.355 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:42.355 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:42.355 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:42.614 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:20:42.614 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:20:42.614 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:42.614 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:42.614 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:42.614 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.614 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:42.614 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.614 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.614 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.614 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:42.614 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:42.614 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:43.548 00:20:43.548 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:20:43.548 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.548 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:43.806 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.806 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.806 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.806 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.806 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.806 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:43.806 { 00:20:43.806 "cntlid": 95, 00:20:43.806 "qid": 0, 00:20:43.806 "state": "enabled", 00:20:43.806 "thread": "nvmf_tgt_poll_group_000", 00:20:43.806 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:43.806 "listen_address": { 00:20:43.806 "trtype": "TCP", 00:20:43.806 "adrfam": "IPv4", 00:20:43.806 "traddr": "10.0.0.2", 00:20:43.806 "trsvcid": "4420" 00:20:43.806 }, 00:20:43.806 "peer_address": { 00:20:43.806 "trtype": "TCP", 00:20:43.806 "adrfam": "IPv4", 00:20:43.806 "traddr": "10.0.0.1", 00:20:43.806 "trsvcid": "34054" 00:20:43.806 }, 00:20:43.806 "auth": { 00:20:43.806 "state": "completed", 00:20:43.806 "digest": "sha384", 00:20:43.806 "dhgroup": "ffdhe8192" 00:20:43.806 } 00:20:43.806 } 00:20:43.806 ]' 00:20:43.806 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:43.806 16:28:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:43.806 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.064 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:44.064 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.064 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.064 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.064 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.322 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmQzMmQyMzZlNGExOTRhZGYxNmM1YWY1N2Q3OWFiOTAxODUyYmM3YzFjMzUyMjQwM2VkNjYyYTNiYzgwYzhkNwO/ryk=: 00:20:44.322 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YmQzMmQyMzZlNGExOTRhZGYxNmM1YWY1N2Q3OWFiOTAxODUyYmM3YzFjMzUyMjQwM2VkNjYyYTNiYzgwYzhkNwO/ryk=: 00:20:45.256 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.256 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:45.256 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.256 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.256 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.256 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:45.256 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:45.256 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.256 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:45.256 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:45.514 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:20:45.514 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.514 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:45.514 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:45.514 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:45.514 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.514 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.514 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.514 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.514 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.514 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.514 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.514 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.080 00:20:46.080 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:46.080 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:46.080 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.338 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.338 16:28:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.338 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.338 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.338 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.338 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:46.338 { 00:20:46.338 "cntlid": 97, 00:20:46.338 "qid": 0, 00:20:46.338 "state": "enabled", 00:20:46.338 "thread": "nvmf_tgt_poll_group_000", 00:20:46.338 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:46.338 "listen_address": { 00:20:46.338 "trtype": "TCP", 00:20:46.338 "adrfam": "IPv4", 00:20:46.338 "traddr": "10.0.0.2", 00:20:46.338 "trsvcid": "4420" 00:20:46.338 }, 00:20:46.338 "peer_address": { 00:20:46.338 "trtype": "TCP", 00:20:46.338 "adrfam": "IPv4", 00:20:46.338 "traddr": "10.0.0.1", 00:20:46.338 "trsvcid": "34070" 00:20:46.338 }, 00:20:46.338 "auth": { 00:20:46.338 "state": "completed", 00:20:46.338 "digest": "sha512", 00:20:46.338 "dhgroup": "null" 00:20:46.338 } 00:20:46.338 } 00:20:46.338 ]' 00:20:46.338 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:46.338 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:46.338 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:46.338 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:46.338 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:46.338 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.338 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.338 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.596 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODE2MTc5YjJiODk0MThmYTVhMTM0ZmQyMTZiZDQ3YmFmNWJlMzBjMGI1MmEwMDQ1QODthw==: --dhchap-ctrl-secret DHHC-1:03:NWExMzdmODlhYzdhYjY2ZDcxY2MyMmVlYTY1M2Q4MDQ3MDM2MDFiMDMyODg3Y2ExZDRjNzQ1OTRkMzYyMGU4MD8roSo=: 00:20:46.596 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ODE2MTc5YjJiODk0MThmYTVhMTM0ZmQyMTZiZDQ3YmFmNWJlMzBjMGI1MmEwMDQ1QODthw==: --dhchap-ctrl-secret DHHC-1:03:NWExMzdmODlhYzdhYjY2ZDcxY2MyMmVlYTY1M2Q4MDQ3MDM2MDFiMDMyODg3Y2ExZDRjNzQ1OTRkMzYyMGU4MD8roSo=: 00:20:47.529 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.529 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:47.529 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.529 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.529 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.529 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:47.529 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:47.530 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:48.097 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:20:48.097 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:48.097 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:48.097 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:48.097 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:48.097 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.097 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.097 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.097 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.097 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.097 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.097 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.097 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.355 00:20:48.355 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:48.355 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:48.355 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.613 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.613 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.613 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.613 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.613 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.613 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:48.613 { 00:20:48.613 "cntlid": 99, 
00:20:48.613 "qid": 0, 00:20:48.613 "state": "enabled", 00:20:48.613 "thread": "nvmf_tgt_poll_group_000", 00:20:48.613 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:48.613 "listen_address": { 00:20:48.613 "trtype": "TCP", 00:20:48.613 "adrfam": "IPv4", 00:20:48.613 "traddr": "10.0.0.2", 00:20:48.613 "trsvcid": "4420" 00:20:48.613 }, 00:20:48.613 "peer_address": { 00:20:48.613 "trtype": "TCP", 00:20:48.613 "adrfam": "IPv4", 00:20:48.613 "traddr": "10.0.0.1", 00:20:48.613 "trsvcid": "38366" 00:20:48.613 }, 00:20:48.613 "auth": { 00:20:48.613 "state": "completed", 00:20:48.613 "digest": "sha512", 00:20:48.613 "dhgroup": "null" 00:20:48.613 } 00:20:48.613 } 00:20:48.613 ]' 00:20:48.613 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:48.613 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:48.613 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:48.613 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:48.613 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:48.613 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.613 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.613 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.178 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTQxNGM4YzkyOTYyOGQxOTU5YjdkNDM1MzA5MDcxOWPUhFu+: --dhchap-ctrl-secret 
DHHC-1:02:Zjk2YjVjZTI4Y2E4OTcyZGJhNjZkNWEzZGIxMzBmMzVkNTQ3ZGRmNDQ2NzA3ZWQzUSvQ0g==: 00:20:49.178 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTQxNGM4YzkyOTYyOGQxOTU5YjdkNDM1MzA5MDcxOWPUhFu+: --dhchap-ctrl-secret DHHC-1:02:Zjk2YjVjZTI4Y2E4OTcyZGJhNjZkNWEzZGIxMzBmMzVkNTQ3ZGRmNDQ2NzA3ZWQzUSvQ0g==: 00:20:50.112 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.112 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:50.112 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.112 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.112 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.112 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:50.112 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:50.112 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:50.371 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
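The `nvme connect` invocations above pass secrets of the form `DHHC-1:<hh>:<base64>:`. As a side note, those strings can be sanity-checked offline. The helper below is a hypothetical sketch, not part of the SPDK test suite; it assumes the layout used by `nvme gen-dhchap-key` (hash id `00`-`03`, base64 payload of a 32/48/64-byte key with a 4-byte CRC appended), so the expected decoded sizes are 36, 52, or 68 bytes.

```shell
#!/usr/bin/env bash
# Hypothetical helper: validate the shape of a DHHC-1 secret string like the
# ones passed via --dhchap-secret / --dhchap-ctrl-secret in the log above.
# Assumed layout: DHHC-1:<hh>:<base64(key || 4-byte CRC)>:
#   <hh> = 00 (raw), 01/02/03 (SHA-256/384/512 transform)
#   key  = 32, 48, or 64 bytes
check_dhchap_secret() {
    local secret=$1 b64 nbytes
    # Reject anything that does not match the DHHC-1 frame
    [[ $secret =~ ^DHHC-1:(0[0-3]):([A-Za-z0-9+/=]+):$ ]] || return 1
    b64=${BASH_REMATCH[2]}
    # Decode and count payload bytes (key + 4-byte CRC)
    nbytes=$(printf '%s' "$b64" | base64 -d 2>/dev/null | wc -c)
    nbytes=$((nbytes))
    case "$nbytes" in
        36|52|68) return 0 ;;
        *)        return 1 ;;
    esac
}
```

For example, `check_dhchap_secret 'DHHC-1:01:...:' && echo ok` would accept the 32-byte controller secret used in the sha384/ffdhe8192 pass above.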
00:20:50.371 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:50.371 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:50.371 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:50.371 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:50.371 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.371 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.371 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.371 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.371 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.371 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.371 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.371 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.629 00:20:50.629 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:50.629 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.629 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.888 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.888 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.888 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.888 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.888 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.888 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:50.888 { 00:20:50.888 "cntlid": 101, 00:20:50.888 "qid": 0, 00:20:50.888 "state": "enabled", 00:20:50.888 "thread": "nvmf_tgt_poll_group_000", 00:20:50.888 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:50.888 "listen_address": { 00:20:50.888 "trtype": "TCP", 00:20:50.888 "adrfam": "IPv4", 00:20:50.888 "traddr": "10.0.0.2", 00:20:50.888 "trsvcid": "4420" 00:20:50.888 }, 00:20:50.888 "peer_address": { 00:20:50.888 "trtype": "TCP", 00:20:50.888 "adrfam": "IPv4", 00:20:50.888 "traddr": "10.0.0.1", 00:20:50.888 "trsvcid": "38390" 00:20:50.888 }, 00:20:50.888 "auth": { 00:20:50.888 "state": "completed", 00:20:50.888 "digest": "sha512", 00:20:50.888 "dhgroup": "null" 00:20:50.888 } 00:20:50.888 } 
00:20:50.888 ]' 00:20:50.888 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:50.888 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:50.888 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:51.147 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:51.147 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:51.147 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.147 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.147 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.406 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWQ1MzZiMzE5M2I5NWQ3NmM2NjFhYWVlYjJkMTRmNmRiNzk5NDM2MzA5ZmQ3YTNk3WThYw==: --dhchap-ctrl-secret DHHC-1:01:N2EzOGM3ODk5NDQzMDVmY2U0YTIxNTBkNjIyZTYwMza52AGC: 00:20:51.406 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ZWQ1MzZiMzE5M2I5NWQ3NmM2NjFhYWVlYjJkMTRmNmRiNzk5NDM2MzA5ZmQ3YTNk3WThYw==: --dhchap-ctrl-secret DHHC-1:01:N2EzOGM3ODk5NDQzMDVmY2U0YTIxNTBkNjIyZTYwMza52AGC: 00:20:52.340 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.340 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.340 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:52.340 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.340 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.340 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.340 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:52.340 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:52.340 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:52.599 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:20:52.599 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:52.599 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:52.599 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:52.599 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:52.599 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.599 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:52.599 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.599 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.599 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.599 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:52.599 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:52.599 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:52.857 00:20:53.114 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:53.114 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:53.114 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.372 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.372 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:20:53.372 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.372 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.372 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.372 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:53.372 { 00:20:53.372 "cntlid": 103, 00:20:53.372 "qid": 0, 00:20:53.372 "state": "enabled", 00:20:53.372 "thread": "nvmf_tgt_poll_group_000", 00:20:53.372 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:53.372 "listen_address": { 00:20:53.372 "trtype": "TCP", 00:20:53.372 "adrfam": "IPv4", 00:20:53.372 "traddr": "10.0.0.2", 00:20:53.372 "trsvcid": "4420" 00:20:53.372 }, 00:20:53.372 "peer_address": { 00:20:53.372 "trtype": "TCP", 00:20:53.372 "adrfam": "IPv4", 00:20:53.372 "traddr": "10.0.0.1", 00:20:53.372 "trsvcid": "38430" 00:20:53.372 }, 00:20:53.372 "auth": { 00:20:53.372 "state": "completed", 00:20:53.372 "digest": "sha512", 00:20:53.372 "dhgroup": "null" 00:20:53.372 } 00:20:53.372 } 00:20:53.372 ]' 00:20:53.372 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:53.372 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:53.372 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:53.372 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:53.372 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:53.372 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.372 16:28:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.372 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.631 16:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmQzMmQyMzZlNGExOTRhZGYxNmM1YWY1N2Q3OWFiOTAxODUyYmM3YzFjMzUyMjQwM2VkNjYyYTNiYzgwYzhkNwO/ryk=: 00:20:53.631 16:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YmQzMmQyMzZlNGExOTRhZGYxNmM1YWY1N2Q3OWFiOTAxODUyYmM3YzFjMzUyMjQwM2VkNjYyYTNiYzgwYzhkNwO/ryk=: 00:20:55.005 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.005 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.005 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:55.005 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.005 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.005 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.005 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:55.005 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:55.005 16:28:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:55.005 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:55.005 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:20:55.005 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:55.005 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:55.005 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:55.005 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:55.005 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.005 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.005 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.005 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.005 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.005 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.005 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.005 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.263 00:20:55.521 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:55.521 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:55.521 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.779 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.779 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.779 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.779 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.779 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.779 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:55.779 { 00:20:55.779 "cntlid": 105, 00:20:55.779 "qid": 0, 00:20:55.779 "state": "enabled", 00:20:55.779 "thread": "nvmf_tgt_poll_group_000", 00:20:55.779 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:55.779 "listen_address": { 00:20:55.779 "trtype": "TCP", 00:20:55.779 "adrfam": "IPv4", 00:20:55.779 "traddr": "10.0.0.2", 00:20:55.779 "trsvcid": "4420" 00:20:55.779 }, 00:20:55.779 "peer_address": { 00:20:55.779 "trtype": "TCP", 00:20:55.779 "adrfam": "IPv4", 00:20:55.779 "traddr": "10.0.0.1", 00:20:55.779 "trsvcid": "38472" 00:20:55.779 }, 00:20:55.779 "auth": { 00:20:55.779 "state": "completed", 00:20:55.779 "digest": "sha512", 00:20:55.779 "dhgroup": "ffdhe2048" 00:20:55.779 } 00:20:55.779 } 00:20:55.779 ]' 00:20:55.779 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:55.779 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:55.779 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:55.779 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:55.779 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:55.779 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.779 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.779 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.037 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODE2MTc5YjJiODk0MThmYTVhMTM0ZmQyMTZiZDQ3YmFmNWJlMzBjMGI1MmEwMDQ1QODthw==: --dhchap-ctrl-secret 
DHHC-1:03:NWExMzdmODlhYzdhYjY2ZDcxY2MyMmVlYTY1M2Q4MDQ3MDM2MDFiMDMyODg3Y2ExZDRjNzQ1OTRkMzYyMGU4MD8roSo=: 00:20:56.037 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ODE2MTc5YjJiODk0MThmYTVhMTM0ZmQyMTZiZDQ3YmFmNWJlMzBjMGI1MmEwMDQ1QODthw==: --dhchap-ctrl-secret DHHC-1:03:NWExMzdmODlhYzdhYjY2ZDcxY2MyMmVlYTY1M2Q4MDQ3MDM2MDFiMDMyODg3Y2ExZDRjNzQ1OTRkMzYyMGU4MD8roSo=: 00:20:56.968 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.226 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:57.226 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.226 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.226 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.226 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:57.226 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:57.226 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:57.484 16:28:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:20:57.484 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.484 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:57.484 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:57.484 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:57.484 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.484 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.484 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.484 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.484 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.484 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.484 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.484 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.742 00:20:57.742 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:57.742 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.742 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:58.001 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.001 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.001 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.001 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.001 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.001 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:58.001 { 00:20:58.001 "cntlid": 107, 00:20:58.001 "qid": 0, 00:20:58.001 "state": "enabled", 00:20:58.001 "thread": "nvmf_tgt_poll_group_000", 00:20:58.001 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:58.001 "listen_address": { 00:20:58.001 "trtype": "TCP", 00:20:58.001 "adrfam": "IPv4", 00:20:58.001 "traddr": "10.0.0.2", 00:20:58.001 "trsvcid": "4420" 00:20:58.001 }, 00:20:58.001 "peer_address": { 00:20:58.001 "trtype": "TCP", 00:20:58.001 "adrfam": "IPv4", 00:20:58.001 "traddr": "10.0.0.1", 00:20:58.001 "trsvcid": "40714" 00:20:58.001 }, 00:20:58.001 "auth": { 00:20:58.001 "state": 
"completed", 00:20:58.001 "digest": "sha512", 00:20:58.001 "dhgroup": "ffdhe2048" 00:20:58.001 } 00:20:58.001 } 00:20:58.001 ]' 00:20:58.001 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:58.001 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:58.001 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:58.261 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:58.261 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:58.261 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.261 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.261 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.519 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTQxNGM4YzkyOTYyOGQxOTU5YjdkNDM1MzA5MDcxOWPUhFu+: --dhchap-ctrl-secret DHHC-1:02:Zjk2YjVjZTI4Y2E4OTcyZGJhNjZkNWEzZGIxMzBmMzVkNTQ3ZGRmNDQ2NzA3ZWQzUSvQ0g==: 00:20:58.519 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTQxNGM4YzkyOTYyOGQxOTU5YjdkNDM1MzA5MDcxOWPUhFu+: --dhchap-ctrl-secret DHHC-1:02:Zjk2YjVjZTI4Y2E4OTcyZGJhNjZkNWEzZGIxMzBmMzVkNTQ3ZGRmNDQ2NzA3ZWQzUSvQ0g==: 00:20:59.453 16:28:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.453 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:59.453 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.453 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.453 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.453 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:59.453 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:59.453 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:59.711 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:20:59.711 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:59.711 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:59.711 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:59.711 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:59.711 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.711 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.711 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.711 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.711 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.711 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.711 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.711 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.968 00:21:00.226 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:00.226 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:00.226 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.484 
16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.484 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.484 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.484 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.484 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.484 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:00.484 { 00:21:00.484 "cntlid": 109, 00:21:00.484 "qid": 0, 00:21:00.484 "state": "enabled", 00:21:00.484 "thread": "nvmf_tgt_poll_group_000", 00:21:00.484 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:00.484 "listen_address": { 00:21:00.484 "trtype": "TCP", 00:21:00.484 "adrfam": "IPv4", 00:21:00.484 "traddr": "10.0.0.2", 00:21:00.484 "trsvcid": "4420" 00:21:00.484 }, 00:21:00.484 "peer_address": { 00:21:00.484 "trtype": "TCP", 00:21:00.484 "adrfam": "IPv4", 00:21:00.484 "traddr": "10.0.0.1", 00:21:00.484 "trsvcid": "40738" 00:21:00.484 }, 00:21:00.484 "auth": { 00:21:00.484 "state": "completed", 00:21:00.484 "digest": "sha512", 00:21:00.484 "dhgroup": "ffdhe2048" 00:21:00.484 } 00:21:00.484 } 00:21:00.484 ]' 00:21:00.484 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:00.484 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:00.484 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:00.484 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:00.484 16:29:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.484 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.484 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.484 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.742 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWQ1MzZiMzE5M2I5NWQ3NmM2NjFhYWVlYjJkMTRmNmRiNzk5NDM2MzA5ZmQ3YTNk3WThYw==: --dhchap-ctrl-secret DHHC-1:01:N2EzOGM3ODk5NDQzMDVmY2U0YTIxNTBkNjIyZTYwMza52AGC: 00:21:00.742 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ZWQ1MzZiMzE5M2I5NWQ3NmM2NjFhYWVlYjJkMTRmNmRiNzk5NDM2MzA5ZmQ3YTNk3WThYw==: --dhchap-ctrl-secret DHHC-1:01:N2EzOGM3ODk5NDQzMDVmY2U0YTIxNTBkNjIyZTYwMza52AGC: 00:21:01.676 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.676 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:01.676 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.676 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.676 
16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.676 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:01.676 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:01.676 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:02.241 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:02.241 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:02.241 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:02.241 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:02.241 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:02.241 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.241 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:02.241 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.241 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.241 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.241 16:29:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:02.241 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:02.241 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:02.500 00:21:02.500 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.500 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:02.500 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.758 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.758 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.758 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.758 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.758 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.758 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:02.758 { 00:21:02.758 "cntlid": 111, 
00:21:02.758 "qid": 0, 00:21:02.758 "state": "enabled", 00:21:02.758 "thread": "nvmf_tgt_poll_group_000", 00:21:02.758 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:02.758 "listen_address": { 00:21:02.758 "trtype": "TCP", 00:21:02.758 "adrfam": "IPv4", 00:21:02.758 "traddr": "10.0.0.2", 00:21:02.758 "trsvcid": "4420" 00:21:02.758 }, 00:21:02.758 "peer_address": { 00:21:02.758 "trtype": "TCP", 00:21:02.758 "adrfam": "IPv4", 00:21:02.758 "traddr": "10.0.0.1", 00:21:02.758 "trsvcid": "40764" 00:21:02.758 }, 00:21:02.758 "auth": { 00:21:02.758 "state": "completed", 00:21:02.758 "digest": "sha512", 00:21:02.758 "dhgroup": "ffdhe2048" 00:21:02.758 } 00:21:02.758 } 00:21:02.758 ]' 00:21:02.758 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:02.758 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:02.758 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:02.758 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:02.758 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:02.758 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.758 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.758 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.016 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YmQzMmQyMzZlNGExOTRhZGYxNmM1YWY1N2Q3OWFiOTAxODUyYmM3YzFjMzUyMjQwM2VkNjYyYTNiYzgwYzhkNwO/ryk=: 00:21:03.016 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YmQzMmQyMzZlNGExOTRhZGYxNmM1YWY1N2Q3OWFiOTAxODUyYmM3YzFjMzUyMjQwM2VkNjYyYTNiYzgwYzhkNwO/ryk=: 00:21:04.389 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.389 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.389 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:04.389 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.389 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.389 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.389 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:04.389 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:04.389 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:04.389 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:04.389 16:29:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:04.389 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:04.389 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:04.389 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:04.389 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:04.389 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.389 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.389 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.389 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.389 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.389 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.389 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.389 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.953 00:21:04.953 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:04.953 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.953 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:04.953 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.953 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.953 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.953 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.953 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.953 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.953 { 00:21:04.953 "cntlid": 113, 00:21:04.953 "qid": 0, 00:21:04.953 "state": "enabled", 00:21:04.953 "thread": "nvmf_tgt_poll_group_000", 00:21:04.953 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:04.953 "listen_address": { 00:21:04.953 "trtype": "TCP", 00:21:04.953 "adrfam": "IPv4", 00:21:04.953 "traddr": "10.0.0.2", 00:21:04.953 "trsvcid": "4420" 00:21:04.953 }, 00:21:04.953 "peer_address": { 00:21:04.953 "trtype": "TCP", 00:21:04.953 "adrfam": "IPv4", 00:21:04.953 "traddr": "10.0.0.1", 00:21:04.953 "trsvcid": "40786" 00:21:04.953 }, 00:21:04.953 "auth": { 00:21:04.953 "state": 
"completed", 00:21:04.953 "digest": "sha512", 00:21:04.953 "dhgroup": "ffdhe3072" 00:21:04.953 } 00:21:04.953 } 00:21:04.953 ]' 00:21:04.953 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:05.212 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:05.212 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:05.212 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:05.212 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:05.212 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.212 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.212 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.501 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODE2MTc5YjJiODk0MThmYTVhMTM0ZmQyMTZiZDQ3YmFmNWJlMzBjMGI1MmEwMDQ1QODthw==: --dhchap-ctrl-secret DHHC-1:03:NWExMzdmODlhYzdhYjY2ZDcxY2MyMmVlYTY1M2Q4MDQ3MDM2MDFiMDMyODg3Y2ExZDRjNzQ1OTRkMzYyMGU4MD8roSo=: 00:21:05.501 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ODE2MTc5YjJiODk0MThmYTVhMTM0ZmQyMTZiZDQ3YmFmNWJlMzBjMGI1MmEwMDQ1QODthw==: --dhchap-ctrl-secret 
DHHC-1:03:NWExMzdmODlhYzdhYjY2ZDcxY2MyMmVlYTY1M2Q4MDQ3MDM2MDFiMDMyODg3Y2ExZDRjNzQ1OTRkMzYyMGU4MD8roSo=: 00:21:06.458 16:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.458 16:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:06.458 16:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.458 16:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.458 16:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.458 16:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:06.458 16:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:06.458 16:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:06.716 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:06.716 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.716 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:06.716 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:06.716 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:21:06.716 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.716 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.716 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.716 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.716 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.716 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.716 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.716 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.283 00:21:07.283 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:07.283 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:07.283 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.540 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.540 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.540 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.540 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.540 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.541 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:07.541 { 00:21:07.541 "cntlid": 115, 00:21:07.541 "qid": 0, 00:21:07.541 "state": "enabled", 00:21:07.541 "thread": "nvmf_tgt_poll_group_000", 00:21:07.541 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:07.541 "listen_address": { 00:21:07.541 "trtype": "TCP", 00:21:07.541 "adrfam": "IPv4", 00:21:07.541 "traddr": "10.0.0.2", 00:21:07.541 "trsvcid": "4420" 00:21:07.541 }, 00:21:07.541 "peer_address": { 00:21:07.541 "trtype": "TCP", 00:21:07.541 "adrfam": "IPv4", 00:21:07.541 "traddr": "10.0.0.1", 00:21:07.541 "trsvcid": "57336" 00:21:07.541 }, 00:21:07.541 "auth": { 00:21:07.541 "state": "completed", 00:21:07.541 "digest": "sha512", 00:21:07.541 "dhgroup": "ffdhe3072" 00:21:07.541 } 00:21:07.541 } 00:21:07.541 ]' 00:21:07.541 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:07.541 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:07.541 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:07.541 16:29:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:07.541 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:07.541 16:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.541 16:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.541 16:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.798 16:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTQxNGM4YzkyOTYyOGQxOTU5YjdkNDM1MzA5MDcxOWPUhFu+: --dhchap-ctrl-secret DHHC-1:02:Zjk2YjVjZTI4Y2E4OTcyZGJhNjZkNWEzZGIxMzBmMzVkNTQ3ZGRmNDQ2NzA3ZWQzUSvQ0g==: 00:21:07.798 16:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTQxNGM4YzkyOTYyOGQxOTU5YjdkNDM1MzA5MDcxOWPUhFu+: --dhchap-ctrl-secret DHHC-1:02:Zjk2YjVjZTI4Y2E4OTcyZGJhNjZkNWEzZGIxMzBmMzVkNTQ3ZGRmNDQ2NzA3ZWQzUSvQ0g==: 00:21:08.730 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.730 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.730 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:08.730 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.730 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.730 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:08.730 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:08.730 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:08.988 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:21:08.988 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:08.988 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:08.988 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:08.988 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:08.988 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.988 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.988 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.988 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:09.247 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.247 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.247 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.247 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.505 00:21:09.505 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.505 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.505 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.763 16:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.763 16:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.763 16:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.763 16:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.763 16:29:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.763 16:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.763 { 00:21:09.763 "cntlid": 117, 00:21:09.763 "qid": 0, 00:21:09.763 "state": "enabled", 00:21:09.763 "thread": "nvmf_tgt_poll_group_000", 00:21:09.763 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:09.763 "listen_address": { 00:21:09.763 "trtype": "TCP", 00:21:09.763 "adrfam": "IPv4", 00:21:09.763 "traddr": "10.0.0.2", 00:21:09.763 "trsvcid": "4420" 00:21:09.763 }, 00:21:09.763 "peer_address": { 00:21:09.763 "trtype": "TCP", 00:21:09.763 "adrfam": "IPv4", 00:21:09.763 "traddr": "10.0.0.1", 00:21:09.763 "trsvcid": "57350" 00:21:09.763 }, 00:21:09.763 "auth": { 00:21:09.763 "state": "completed", 00:21:09.763 "digest": "sha512", 00:21:09.763 "dhgroup": "ffdhe3072" 00:21:09.763 } 00:21:09.763 } 00:21:09.763 ]' 00:21:09.763 16:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.763 16:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:09.763 16:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:09.763 16:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:09.763 16:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.022 16:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.022 16:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.022 16:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.280 16:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWQ1MzZiMzE5M2I5NWQ3NmM2NjFhYWVlYjJkMTRmNmRiNzk5NDM2MzA5ZmQ3YTNk3WThYw==: --dhchap-ctrl-secret DHHC-1:01:N2EzOGM3ODk5NDQzMDVmY2U0YTIxNTBkNjIyZTYwMza52AGC: 00:21:10.280 16:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ZWQ1MzZiMzE5M2I5NWQ3NmM2NjFhYWVlYjJkMTRmNmRiNzk5NDM2MzA5ZmQ3YTNk3WThYw==: --dhchap-ctrl-secret DHHC-1:01:N2EzOGM3ODk5NDQzMDVmY2U0YTIxNTBkNjIyZTYwMza52AGC: 00:21:11.214 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.214 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:11.214 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.214 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.214 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.214 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.214 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:11.214 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:11.473 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:11.473 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.473 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:11.473 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:11.473 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:11.473 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.473 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:11.473 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.473 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.473 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.473 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:11.473 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:11.473 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:12.040 00:21:12.040 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:12.040 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:12.040 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.297 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.297 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.297 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.297 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.297 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.297 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:12.297 { 00:21:12.297 "cntlid": 119, 00:21:12.298 "qid": 0, 00:21:12.298 "state": "enabled", 00:21:12.298 "thread": "nvmf_tgt_poll_group_000", 00:21:12.298 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:12.298 "listen_address": { 00:21:12.298 "trtype": "TCP", 00:21:12.298 "adrfam": "IPv4", 00:21:12.298 "traddr": "10.0.0.2", 00:21:12.298 "trsvcid": "4420" 00:21:12.298 }, 00:21:12.298 "peer_address": { 00:21:12.298 "trtype": "TCP", 00:21:12.298 "adrfam": "IPv4", 00:21:12.298 "traddr": "10.0.0.1", 
00:21:12.298 "trsvcid": "57366" 00:21:12.298 }, 00:21:12.298 "auth": { 00:21:12.298 "state": "completed", 00:21:12.298 "digest": "sha512", 00:21:12.298 "dhgroup": "ffdhe3072" 00:21:12.298 } 00:21:12.298 } 00:21:12.298 ]' 00:21:12.298 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:12.298 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:12.298 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:12.298 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:12.298 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:12.298 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.298 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.298 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.556 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmQzMmQyMzZlNGExOTRhZGYxNmM1YWY1N2Q3OWFiOTAxODUyYmM3YzFjMzUyMjQwM2VkNjYyYTNiYzgwYzhkNwO/ryk=: 00:21:12.556 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YmQzMmQyMzZlNGExOTRhZGYxNmM1YWY1N2Q3OWFiOTAxODUyYmM3YzFjMzUyMjQwM2VkNjYyYTNiYzgwYzhkNwO/ryk=: 00:21:13.930 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.930 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:13.930 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.930 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.930 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.930 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:13.930 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:13.930 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:13.930 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:13.930 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:13.930 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:13.930 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:13.930 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:13.930 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:13.930 16:29:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.931 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.931 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.931 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.931 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.931 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.931 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.931 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.497 00:21:14.497 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:14.497 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.497 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.497 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.497 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.497 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.497 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.755 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.755 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:14.755 { 00:21:14.755 "cntlid": 121, 00:21:14.755 "qid": 0, 00:21:14.755 "state": "enabled", 00:21:14.755 "thread": "nvmf_tgt_poll_group_000", 00:21:14.755 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:14.755 "listen_address": { 00:21:14.755 "trtype": "TCP", 00:21:14.755 "adrfam": "IPv4", 00:21:14.755 "traddr": "10.0.0.2", 00:21:14.755 "trsvcid": "4420" 00:21:14.755 }, 00:21:14.755 "peer_address": { 00:21:14.755 "trtype": "TCP", 00:21:14.755 "adrfam": "IPv4", 00:21:14.755 "traddr": "10.0.0.1", 00:21:14.755 "trsvcid": "57390" 00:21:14.755 }, 00:21:14.755 "auth": { 00:21:14.755 "state": "completed", 00:21:14.755 "digest": "sha512", 00:21:14.755 "dhgroup": "ffdhe4096" 00:21:14.755 } 00:21:14.755 } 00:21:14.755 ]' 00:21:14.755 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:14.755 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:14.755 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:14.755 16:29:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:14.755 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:14.755 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.755 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.755 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.013 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODE2MTc5YjJiODk0MThmYTVhMTM0ZmQyMTZiZDQ3YmFmNWJlMzBjMGI1MmEwMDQ1QODthw==: --dhchap-ctrl-secret DHHC-1:03:NWExMzdmODlhYzdhYjY2ZDcxY2MyMmVlYTY1M2Q4MDQ3MDM2MDFiMDMyODg3Y2ExZDRjNzQ1OTRkMzYyMGU4MD8roSo=: 00:21:15.013 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ODE2MTc5YjJiODk0MThmYTVhMTM0ZmQyMTZiZDQ3YmFmNWJlMzBjMGI1MmEwMDQ1QODthw==: --dhchap-ctrl-secret DHHC-1:03:NWExMzdmODlhYzdhYjY2ZDcxY2MyMmVlYTY1M2Q4MDQ3MDM2MDFiMDMyODg3Y2ExZDRjNzQ1OTRkMzYyMGU4MD8roSo=: 00:21:15.946 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.946 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.946 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:15.946 16:29:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.946 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.946 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.946 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:15.946 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:15.946 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:16.204 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:16.204 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:16.204 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:16.204 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:16.204 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:16.204 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.204 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.204 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.204 16:29:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.204 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.204 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.204 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.204 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.770 00:21:16.770 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.770 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.770 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.028 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.028 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.028 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.028 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:17.028 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.028 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:17.028 { 00:21:17.028 "cntlid": 123, 00:21:17.028 "qid": 0, 00:21:17.028 "state": "enabled", 00:21:17.028 "thread": "nvmf_tgt_poll_group_000", 00:21:17.028 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:17.028 "listen_address": { 00:21:17.028 "trtype": "TCP", 00:21:17.028 "adrfam": "IPv4", 00:21:17.028 "traddr": "10.0.0.2", 00:21:17.028 "trsvcid": "4420" 00:21:17.028 }, 00:21:17.028 "peer_address": { 00:21:17.028 "trtype": "TCP", 00:21:17.028 "adrfam": "IPv4", 00:21:17.028 "traddr": "10.0.0.1", 00:21:17.028 "trsvcid": "41566" 00:21:17.028 }, 00:21:17.028 "auth": { 00:21:17.028 "state": "completed", 00:21:17.028 "digest": "sha512", 00:21:17.028 "dhgroup": "ffdhe4096" 00:21:17.028 } 00:21:17.028 } 00:21:17.028 ]' 00:21:17.028 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:17.028 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:17.028 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:17.028 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:17.028 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:17.028 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.028 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.028 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.286 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTQxNGM4YzkyOTYyOGQxOTU5YjdkNDM1MzA5MDcxOWPUhFu+: --dhchap-ctrl-secret DHHC-1:02:Zjk2YjVjZTI4Y2E4OTcyZGJhNjZkNWEzZGIxMzBmMzVkNTQ3ZGRmNDQ2NzA3ZWQzUSvQ0g==: 00:21:17.286 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTQxNGM4YzkyOTYyOGQxOTU5YjdkNDM1MzA5MDcxOWPUhFu+: --dhchap-ctrl-secret DHHC-1:02:Zjk2YjVjZTI4Y2E4OTcyZGJhNjZkNWEzZGIxMzBmMzVkNTQ3ZGRmNDQ2NzA3ZWQzUSvQ0g==: 00:21:18.221 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.480 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:18.480 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.480 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.480 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.480 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:18.480 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:18.480 16:29:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:18.737 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:18.737 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:18.737 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:18.737 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:18.737 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:18.737 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.737 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.737 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.737 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.737 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.737 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.737 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.737 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.995 00:21:18.995 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:18.995 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:18.995 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.253 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.253 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.253 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.253 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.253 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.253 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.253 { 00:21:19.253 "cntlid": 125, 00:21:19.253 "qid": 0, 00:21:19.253 "state": "enabled", 00:21:19.253 "thread": "nvmf_tgt_poll_group_000", 00:21:19.253 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:19.253 "listen_address": { 00:21:19.253 "trtype": "TCP", 00:21:19.253 "adrfam": "IPv4", 00:21:19.253 "traddr": "10.0.0.2", 00:21:19.253 
"trsvcid": "4420" 00:21:19.253 }, 00:21:19.253 "peer_address": { 00:21:19.253 "trtype": "TCP", 00:21:19.253 "adrfam": "IPv4", 00:21:19.253 "traddr": "10.0.0.1", 00:21:19.253 "trsvcid": "41604" 00:21:19.253 }, 00:21:19.253 "auth": { 00:21:19.253 "state": "completed", 00:21:19.253 "digest": "sha512", 00:21:19.253 "dhgroup": "ffdhe4096" 00:21:19.253 } 00:21:19.253 } 00:21:19.253 ]' 00:21:19.253 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.511 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:19.511 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:19.511 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:19.511 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.511 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.511 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.511 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.769 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWQ1MzZiMzE5M2I5NWQ3NmM2NjFhYWVlYjJkMTRmNmRiNzk5NDM2MzA5ZmQ3YTNk3WThYw==: --dhchap-ctrl-secret DHHC-1:01:N2EzOGM3ODk5NDQzMDVmY2U0YTIxNTBkNjIyZTYwMza52AGC: 00:21:19.769 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ZWQ1MzZiMzE5M2I5NWQ3NmM2NjFhYWVlYjJkMTRmNmRiNzk5NDM2MzA5ZmQ3YTNk3WThYw==: --dhchap-ctrl-secret DHHC-1:01:N2EzOGM3ODk5NDQzMDVmY2U0YTIxNTBkNjIyZTYwMza52AGC: 00:21:20.703 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.703 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:20.703 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.703 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.703 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.703 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:20.703 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:20.703 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:20.962 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:20.962 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:20.962 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:20.962 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:20.962 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:20.962 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.962 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:20.962 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.962 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.962 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.962 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:20.962 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:20.962 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:21.530 00:21:21.530 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:21.530 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:21.530 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.788 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.788 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.788 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.788 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.788 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.788 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.788 { 00:21:21.788 "cntlid": 127, 00:21:21.788 "qid": 0, 00:21:21.788 "state": "enabled", 00:21:21.788 "thread": "nvmf_tgt_poll_group_000", 00:21:21.788 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:21.788 "listen_address": { 00:21:21.788 "trtype": "TCP", 00:21:21.788 "adrfam": "IPv4", 00:21:21.788 "traddr": "10.0.0.2", 00:21:21.788 "trsvcid": "4420" 00:21:21.788 }, 00:21:21.788 "peer_address": { 00:21:21.788 "trtype": "TCP", 00:21:21.788 "adrfam": "IPv4", 00:21:21.788 "traddr": "10.0.0.1", 00:21:21.788 "trsvcid": "41626" 00:21:21.788 }, 00:21:21.788 "auth": { 00:21:21.788 "state": "completed", 00:21:21.788 "digest": "sha512", 00:21:21.788 "dhgroup": "ffdhe4096" 00:21:21.788 } 00:21:21.788 } 00:21:21.788 ]' 00:21:21.788 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:21.788 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:21.788 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.788 16:29:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:21.788 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.788 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.788 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.788 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.355 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmQzMmQyMzZlNGExOTRhZGYxNmM1YWY1N2Q3OWFiOTAxODUyYmM3YzFjMzUyMjQwM2VkNjYyYTNiYzgwYzhkNwO/ryk=: 00:21:22.355 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YmQzMmQyMzZlNGExOTRhZGYxNmM1YWY1N2Q3OWFiOTAxODUyYmM3YzFjMzUyMjQwM2VkNjYyYTNiYzgwYzhkNwO/ryk=: 00:21:23.290 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.290 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:23.290 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.290 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:23.290 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.290 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:23.290 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.290 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:23.290 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:23.548 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:23.548 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:23.548 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:23.548 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:23.548 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:23.548 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.548 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.548 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.548 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:23.548 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.548 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.548 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.548 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.115 00:21:24.115 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.115 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.115 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:24.373 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.373 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.373 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.373 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.373 16:29:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.373 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.373 { 00:21:24.373 "cntlid": 129, 00:21:24.373 "qid": 0, 00:21:24.373 "state": "enabled", 00:21:24.373 "thread": "nvmf_tgt_poll_group_000", 00:21:24.373 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:24.373 "listen_address": { 00:21:24.373 "trtype": "TCP", 00:21:24.373 "adrfam": "IPv4", 00:21:24.373 "traddr": "10.0.0.2", 00:21:24.373 "trsvcid": "4420" 00:21:24.373 }, 00:21:24.373 "peer_address": { 00:21:24.373 "trtype": "TCP", 00:21:24.373 "adrfam": "IPv4", 00:21:24.373 "traddr": "10.0.0.1", 00:21:24.373 "trsvcid": "41662" 00:21:24.373 }, 00:21:24.373 "auth": { 00:21:24.373 "state": "completed", 00:21:24.373 "digest": "sha512", 00:21:24.373 "dhgroup": "ffdhe6144" 00:21:24.373 } 00:21:24.373 } 00:21:24.373 ]' 00:21:24.373 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.373 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:24.373 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.374 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:24.374 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.631 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.632 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.632 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.890 16:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODE2MTc5YjJiODk0MThmYTVhMTM0ZmQyMTZiZDQ3YmFmNWJlMzBjMGI1MmEwMDQ1QODthw==: --dhchap-ctrl-secret DHHC-1:03:NWExMzdmODlhYzdhYjY2ZDcxY2MyMmVlYTY1M2Q4MDQ3MDM2MDFiMDMyODg3Y2ExZDRjNzQ1OTRkMzYyMGU4MD8roSo=: 00:21:24.890 16:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ODE2MTc5YjJiODk0MThmYTVhMTM0ZmQyMTZiZDQ3YmFmNWJlMzBjMGI1MmEwMDQ1QODthw==: --dhchap-ctrl-secret DHHC-1:03:NWExMzdmODlhYzdhYjY2ZDcxY2MyMmVlYTY1M2Q4MDQ3MDM2MDFiMDMyODg3Y2ExZDRjNzQ1OTRkMzYyMGU4MD8roSo=: 00:21:25.826 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.826 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:25.826 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.826 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.826 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.826 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:25.826 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:25.826 16:29:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:26.085 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:26.085 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:26.085 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:26.085 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:26.085 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:26.085 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.085 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.085 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.085 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.085 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.085 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.085 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.085 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.651 00:21:26.651 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:26.651 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:26.651 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.909 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.909 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.909 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.909 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.909 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.909 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:26.909 { 00:21:26.909 "cntlid": 131, 00:21:26.909 "qid": 0, 00:21:26.909 "state": "enabled", 00:21:26.909 "thread": "nvmf_tgt_poll_group_000", 00:21:26.909 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:26.909 "listen_address": { 00:21:26.909 "trtype": "TCP", 00:21:26.909 "adrfam": "IPv4", 00:21:26.909 "traddr": "10.0.0.2", 00:21:26.909 
"trsvcid": "4420" 00:21:26.909 }, 00:21:26.909 "peer_address": { 00:21:26.909 "trtype": "TCP", 00:21:26.909 "adrfam": "IPv4", 00:21:26.909 "traddr": "10.0.0.1", 00:21:26.909 "trsvcid": "54620" 00:21:26.909 }, 00:21:26.909 "auth": { 00:21:26.909 "state": "completed", 00:21:26.909 "digest": "sha512", 00:21:26.909 "dhgroup": "ffdhe6144" 00:21:26.909 } 00:21:26.909 } 00:21:26.909 ]' 00:21:26.909 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:26.909 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:26.909 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:26.909 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:26.909 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:27.167 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.167 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.167 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.424 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTQxNGM4YzkyOTYyOGQxOTU5YjdkNDM1MzA5MDcxOWPUhFu+: --dhchap-ctrl-secret DHHC-1:02:Zjk2YjVjZTI4Y2E4OTcyZGJhNjZkNWEzZGIxMzBmMzVkNTQ3ZGRmNDQ2NzA3ZWQzUSvQ0g==: 00:21:27.424 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTQxNGM4YzkyOTYyOGQxOTU5YjdkNDM1MzA5MDcxOWPUhFu+: --dhchap-ctrl-secret DHHC-1:02:Zjk2YjVjZTI4Y2E4OTcyZGJhNjZkNWEzZGIxMzBmMzVkNTQ3ZGRmNDQ2NzA3ZWQzUSvQ0g==: 00:21:28.356 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.356 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:28.356 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.356 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.356 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.356 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:28.356 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:28.356 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:28.613 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:28.613 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:28.613 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:28.613 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:28.613 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:28.613 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.613 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.613 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.613 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.614 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.614 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.614 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.614 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:29.178 00:21:29.178 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:29.178 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.178 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:29.436 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.436 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.436 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.436 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.436 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.436 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:29.436 { 00:21:29.436 "cntlid": 133, 00:21:29.436 "qid": 0, 00:21:29.436 "state": "enabled", 00:21:29.436 "thread": "nvmf_tgt_poll_group_000", 00:21:29.436 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:29.436 "listen_address": { 00:21:29.436 "trtype": "TCP", 00:21:29.436 "adrfam": "IPv4", 00:21:29.436 "traddr": "10.0.0.2", 00:21:29.436 "trsvcid": "4420" 00:21:29.436 }, 00:21:29.436 "peer_address": { 00:21:29.436 "trtype": "TCP", 00:21:29.436 "adrfam": "IPv4", 00:21:29.436 "traddr": "10.0.0.1", 00:21:29.436 "trsvcid": "54636" 00:21:29.436 }, 00:21:29.436 "auth": { 00:21:29.436 "state": "completed", 00:21:29.436 "digest": "sha512", 00:21:29.436 "dhgroup": "ffdhe6144" 00:21:29.436 } 00:21:29.436 } 00:21:29.436 ]' 00:21:29.436 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:29.436 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:29.436 16:29:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:29.695 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:29.695 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:29.695 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.695 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.695 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.953 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWQ1MzZiMzE5M2I5NWQ3NmM2NjFhYWVlYjJkMTRmNmRiNzk5NDM2MzA5ZmQ3YTNk3WThYw==: --dhchap-ctrl-secret DHHC-1:01:N2EzOGM3ODk5NDQzMDVmY2U0YTIxNTBkNjIyZTYwMza52AGC: 00:21:29.953 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ZWQ1MzZiMzE5M2I5NWQ3NmM2NjFhYWVlYjJkMTRmNmRiNzk5NDM2MzA5ZmQ3YTNk3WThYw==: --dhchap-ctrl-secret DHHC-1:01:N2EzOGM3ODk5NDQzMDVmY2U0YTIxNTBkNjIyZTYwMza52AGC: 00:21:30.886 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.886 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:30.886 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.886 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.886 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.886 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:30.886 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:30.886 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:31.144 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:31.144 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:31.144 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:31.144 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:31.144 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:31.144 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.144 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:31.144 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.144 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.144 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.144 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:31.144 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:31.144 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:31.709 00:21:31.709 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:31.709 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.709 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.967 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.967 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.967 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.967 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:31.967 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.967 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.967 { 00:21:31.967 "cntlid": 135, 00:21:31.967 "qid": 0, 00:21:31.967 "state": "enabled", 00:21:31.967 "thread": "nvmf_tgt_poll_group_000", 00:21:31.967 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:31.967 "listen_address": { 00:21:31.967 "trtype": "TCP", 00:21:31.967 "adrfam": "IPv4", 00:21:31.967 "traddr": "10.0.0.2", 00:21:31.967 "trsvcid": "4420" 00:21:31.967 }, 00:21:31.967 "peer_address": { 00:21:31.967 "trtype": "TCP", 00:21:31.967 "adrfam": "IPv4", 00:21:31.967 "traddr": "10.0.0.1", 00:21:31.967 "trsvcid": "54664" 00:21:31.967 }, 00:21:31.967 "auth": { 00:21:31.967 "state": "completed", 00:21:31.967 "digest": "sha512", 00:21:31.967 "dhgroup": "ffdhe6144" 00:21:31.967 } 00:21:31.967 } 00:21:31.967 ]' 00:21:31.967 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.967 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:31.967 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:32.225 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:32.225 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:32.225 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.225 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.225 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.483 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmQzMmQyMzZlNGExOTRhZGYxNmM1YWY1N2Q3OWFiOTAxODUyYmM3YzFjMzUyMjQwM2VkNjYyYTNiYzgwYzhkNwO/ryk=: 00:21:32.483 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YmQzMmQyMzZlNGExOTRhZGYxNmM1YWY1N2Q3OWFiOTAxODUyYmM3YzFjMzUyMjQwM2VkNjYyYTNiYzgwYzhkNwO/ryk=: 00:21:33.437 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.437 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.437 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:33.437 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.437 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.437 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.437 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:33.437 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:33.437 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:33.437 16:29:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:33.695 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:33.695 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:33.695 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:33.695 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:33.695 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:33.695 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.695 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.695 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.695 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.695 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.695 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.695 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.695 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.637 00:21:34.637 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:34.637 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.637 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:34.894 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.894 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.894 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.894 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.894 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.894 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:34.894 { 00:21:34.894 "cntlid": 137, 00:21:34.894 "qid": 0, 00:21:34.894 "state": "enabled", 00:21:34.894 "thread": "nvmf_tgt_poll_group_000", 00:21:34.894 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:34.894 "listen_address": { 00:21:34.894 "trtype": "TCP", 00:21:34.894 "adrfam": "IPv4", 00:21:34.894 "traddr": "10.0.0.2", 00:21:34.894 
"trsvcid": "4420" 00:21:34.894 }, 00:21:34.894 "peer_address": { 00:21:34.894 "trtype": "TCP", 00:21:34.894 "adrfam": "IPv4", 00:21:34.894 "traddr": "10.0.0.1", 00:21:34.894 "trsvcid": "54688" 00:21:34.894 }, 00:21:34.894 "auth": { 00:21:34.894 "state": "completed", 00:21:34.894 "digest": "sha512", 00:21:34.894 "dhgroup": "ffdhe8192" 00:21:34.894 } 00:21:34.894 } 00:21:34.894 ]' 00:21:34.894 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:34.894 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.894 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.151 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:35.151 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.151 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.151 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.151 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.407 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODE2MTc5YjJiODk0MThmYTVhMTM0ZmQyMTZiZDQ3YmFmNWJlMzBjMGI1MmEwMDQ1QODthw==: --dhchap-ctrl-secret DHHC-1:03:NWExMzdmODlhYzdhYjY2ZDcxY2MyMmVlYTY1M2Q4MDQ3MDM2MDFiMDMyODg3Y2ExZDRjNzQ1OTRkMzYyMGU4MD8roSo=: 00:21:35.408 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ODE2MTc5YjJiODk0MThmYTVhMTM0ZmQyMTZiZDQ3YmFmNWJlMzBjMGI1MmEwMDQ1QODthw==: --dhchap-ctrl-secret DHHC-1:03:NWExMzdmODlhYzdhYjY2ZDcxY2MyMmVlYTY1M2Q4MDQ3MDM2MDFiMDMyODg3Y2ExZDRjNzQ1OTRkMzYyMGU4MD8roSo=: 00:21:36.386 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.386 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:36.386 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.386 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.386 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.386 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:36.386 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:36.386 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:36.642 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:36.642 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:36.642 16:29:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:36.642 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:36.643 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:36.643 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.643 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.643 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.643 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.643 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.643 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.643 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.643 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.575 00:21:37.575 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:37.575 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:37.575 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.834 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.834 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.834 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.834 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.834 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.834 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:37.834 { 00:21:37.834 "cntlid": 139, 00:21:37.834 "qid": 0, 00:21:37.834 "state": "enabled", 00:21:37.834 "thread": "nvmf_tgt_poll_group_000", 00:21:37.834 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:37.834 "listen_address": { 00:21:37.834 "trtype": "TCP", 00:21:37.834 "adrfam": "IPv4", 00:21:37.834 "traddr": "10.0.0.2", 00:21:37.834 "trsvcid": "4420" 00:21:37.834 }, 00:21:37.834 "peer_address": { 00:21:37.834 "trtype": "TCP", 00:21:37.834 "adrfam": "IPv4", 00:21:37.834 "traddr": "10.0.0.1", 00:21:37.834 "trsvcid": "53532" 00:21:37.834 }, 00:21:37.834 "auth": { 00:21:37.834 "state": "completed", 00:21:37.834 "digest": "sha512", 00:21:37.834 "dhgroup": "ffdhe8192" 00:21:37.834 } 00:21:37.834 } 00:21:37.834 ]' 00:21:37.834 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:37.834 16:29:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:37.834 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:38.091 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:38.091 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:38.091 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.091 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.091 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.349 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTQxNGM4YzkyOTYyOGQxOTU5YjdkNDM1MzA5MDcxOWPUhFu+: --dhchap-ctrl-secret DHHC-1:02:Zjk2YjVjZTI4Y2E4OTcyZGJhNjZkNWEzZGIxMzBmMzVkNTQ3ZGRmNDQ2NzA3ZWQzUSvQ0g==: 00:21:38.349 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:OTQxNGM4YzkyOTYyOGQxOTU5YjdkNDM1MzA5MDcxOWPUhFu+: --dhchap-ctrl-secret DHHC-1:02:Zjk2YjVjZTI4Y2E4OTcyZGJhNjZkNWEzZGIxMzBmMzVkNTQ3ZGRmNDQ2NzA3ZWQzUSvQ0g==: 00:21:39.281 16:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.281 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.281 16:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:39.281 16:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.281 16:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.281 16:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.281 16:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:39.281 16:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:39.281 16:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:39.539 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:39.539 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:39.539 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:39.539 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:39.539 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:39.539 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.539 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:21:39.539 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.539 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.796 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.796 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.797 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.797 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.730 00:21:40.730 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:40.730 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:40.730 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.730 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.730 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.730 16:29:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.730 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.988 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.989 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:40.989 { 00:21:40.989 "cntlid": 141, 00:21:40.989 "qid": 0, 00:21:40.989 "state": "enabled", 00:21:40.989 "thread": "nvmf_tgt_poll_group_000", 00:21:40.989 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:40.989 "listen_address": { 00:21:40.989 "trtype": "TCP", 00:21:40.989 "adrfam": "IPv4", 00:21:40.989 "traddr": "10.0.0.2", 00:21:40.989 "trsvcid": "4420" 00:21:40.989 }, 00:21:40.989 "peer_address": { 00:21:40.989 "trtype": "TCP", 00:21:40.989 "adrfam": "IPv4", 00:21:40.989 "traddr": "10.0.0.1", 00:21:40.989 "trsvcid": "53558" 00:21:40.989 }, 00:21:40.989 "auth": { 00:21:40.989 "state": "completed", 00:21:40.989 "digest": "sha512", 00:21:40.989 "dhgroup": "ffdhe8192" 00:21:40.989 } 00:21:40.989 } 00:21:40.989 ]' 00:21:40.989 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:40.989 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:40.989 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:40.989 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:40.989 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:40.989 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.989 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.989 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.247 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWQ1MzZiMzE5M2I5NWQ3NmM2NjFhYWVlYjJkMTRmNmRiNzk5NDM2MzA5ZmQ3YTNk3WThYw==: --dhchap-ctrl-secret DHHC-1:01:N2EzOGM3ODk5NDQzMDVmY2U0YTIxNTBkNjIyZTYwMza52AGC: 00:21:41.247 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ZWQ1MzZiMzE5M2I5NWQ3NmM2NjFhYWVlYjJkMTRmNmRiNzk5NDM2MzA5ZmQ3YTNk3WThYw==: --dhchap-ctrl-secret DHHC-1:01:N2EzOGM3ODk5NDQzMDVmY2U0YTIxNTBkNjIyZTYwMza52AGC: 00:21:42.180 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.180 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:42.180 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.180 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.180 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.180 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:42.180 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:42.180 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:42.438 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:42.438 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:42.438 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:42.438 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:42.438 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:42.438 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.438 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:42.438 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.438 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.438 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.438 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:42.438 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:42.438 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:43.373 00:21:43.373 16:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:43.373 16:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:43.373 16:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.631 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.631 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.631 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.631 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.631 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.631 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:43.631 { 00:21:43.631 "cntlid": 143, 00:21:43.631 "qid": 0, 00:21:43.631 "state": "enabled", 00:21:43.631 "thread": "nvmf_tgt_poll_group_000", 00:21:43.631 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:43.631 "listen_address": { 00:21:43.631 "trtype": "TCP", 00:21:43.631 "adrfam": 
"IPv4", 00:21:43.631 "traddr": "10.0.0.2", 00:21:43.631 "trsvcid": "4420" 00:21:43.631 }, 00:21:43.631 "peer_address": { 00:21:43.631 "trtype": "TCP", 00:21:43.631 "adrfam": "IPv4", 00:21:43.631 "traddr": "10.0.0.1", 00:21:43.631 "trsvcid": "53574" 00:21:43.631 }, 00:21:43.631 "auth": { 00:21:43.631 "state": "completed", 00:21:43.631 "digest": "sha512", 00:21:43.631 "dhgroup": "ffdhe8192" 00:21:43.631 } 00:21:43.631 } 00:21:43.631 ]' 00:21:43.631 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:43.631 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:43.631 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:43.631 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:43.631 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:43.889 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.889 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.889 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.147 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmQzMmQyMzZlNGExOTRhZGYxNmM1YWY1N2Q3OWFiOTAxODUyYmM3YzFjMzUyMjQwM2VkNjYyYTNiYzgwYzhkNwO/ryk=: 00:21:44.147 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YmQzMmQyMzZlNGExOTRhZGYxNmM1YWY1N2Q3OWFiOTAxODUyYmM3YzFjMzUyMjQwM2VkNjYyYTNiYzgwYzhkNwO/ryk=: 00:21:45.083 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.083 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.083 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:45.083 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.083 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.083 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.083 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:45.083 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:21:45.083 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:45.083 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:45.083 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:45.083 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:45.341 16:29:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:21:45.341 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:45.341 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:45.341 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:45.341 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:45.341 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.341 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.341 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.341 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.341 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.341 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.341 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.341 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:46.274 00:21:46.274 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:46.274 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:46.274 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.532 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.532 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.532 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.532 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.532 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.532 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:46.532 { 00:21:46.532 "cntlid": 145, 00:21:46.532 "qid": 0, 00:21:46.532 "state": "enabled", 00:21:46.532 "thread": "nvmf_tgt_poll_group_000", 00:21:46.532 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:46.532 "listen_address": { 00:21:46.532 "trtype": "TCP", 00:21:46.532 "adrfam": "IPv4", 00:21:46.532 "traddr": "10.0.0.2", 00:21:46.532 "trsvcid": "4420" 00:21:46.532 }, 00:21:46.532 "peer_address": { 00:21:46.532 "trtype": "TCP", 00:21:46.532 "adrfam": "IPv4", 00:21:46.532 "traddr": "10.0.0.1", 00:21:46.532 "trsvcid": "53606" 00:21:46.532 }, 00:21:46.532 "auth": { 00:21:46.532 "state": 
"completed", 00:21:46.532 "digest": "sha512", 00:21:46.532 "dhgroup": "ffdhe8192" 00:21:46.532 } 00:21:46.532 } 00:21:46.532 ]' 00:21:46.532 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:46.532 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:46.532 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:46.790 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:46.790 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:46.790 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.790 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.790 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.049 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODE2MTc5YjJiODk0MThmYTVhMTM0ZmQyMTZiZDQ3YmFmNWJlMzBjMGI1MmEwMDQ1QODthw==: --dhchap-ctrl-secret DHHC-1:03:NWExMzdmODlhYzdhYjY2ZDcxY2MyMmVlYTY1M2Q4MDQ3MDM2MDFiMDMyODg3Y2ExZDRjNzQ1OTRkMzYyMGU4MD8roSo=: 00:21:47.049 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ODE2MTc5YjJiODk0MThmYTVhMTM0ZmQyMTZiZDQ3YmFmNWJlMzBjMGI1MmEwMDQ1QODthw==: --dhchap-ctrl-secret 
DHHC-1:03:NWExMzdmODlhYzdhYjY2ZDcxY2MyMmVlYTY1M2Q4MDQ3MDM2MDFiMDMyODg3Y2ExZDRjNzQ1OTRkMzYyMGU4MD8roSo=: 00:21:47.982 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.982 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:47.982 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.982 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.982 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.982 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:21:47.982 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.982 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.982 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.982 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:21:47.982 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:47.982 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:21:47.982 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local 
arg=bdev_connect 00:21:47.982 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:47.982 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:47.982 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:47.982 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:21:47.982 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:47.982 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:48.916 request: 00:21:48.916 { 00:21:48.916 "name": "nvme0", 00:21:48.916 "trtype": "tcp", 00:21:48.916 "traddr": "10.0.0.2", 00:21:48.916 "adrfam": "ipv4", 00:21:48.916 "trsvcid": "4420", 00:21:48.916 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:48.916 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:48.916 "prchk_reftag": false, 00:21:48.917 "prchk_guard": false, 00:21:48.917 "hdgst": false, 00:21:48.917 "ddgst": false, 00:21:48.917 "dhchap_key": "key2", 00:21:48.917 "allow_unrecognized_csi": false, 00:21:48.917 "method": "bdev_nvme_attach_controller", 00:21:48.917 "req_id": 1 00:21:48.917 } 00:21:48.917 Got JSON-RPC error response 00:21:48.917 response: 00:21:48.917 { 00:21:48.917 "code": -5, 00:21:48.917 "message": 
"Input/output error" 00:21:48.917 } 00:21:48.917 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:48.917 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:48.917 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:48.917 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:48.917 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:48.917 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.917 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.917 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.917 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.917 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.917 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.917 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.917 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:48.917 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:48.917 16:29:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:48.917 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:48.917 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:48.917 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:48.917 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:48.917 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:48.917 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:48.917 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:49.850 request: 00:21:49.850 { 00:21:49.850 "name": "nvme0", 00:21:49.850 "trtype": "tcp", 00:21:49.850 "traddr": "10.0.0.2", 00:21:49.850 "adrfam": "ipv4", 00:21:49.850 "trsvcid": "4420", 00:21:49.850 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:49.850 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:49.850 "prchk_reftag": false, 00:21:49.850 "prchk_guard": false, 00:21:49.850 "hdgst": 
false, 00:21:49.850 "ddgst": false, 00:21:49.850 "dhchap_key": "key1", 00:21:49.850 "dhchap_ctrlr_key": "ckey2", 00:21:49.850 "allow_unrecognized_csi": false, 00:21:49.850 "method": "bdev_nvme_attach_controller", 00:21:49.850 "req_id": 1 00:21:49.850 } 00:21:49.850 Got JSON-RPC error response 00:21:49.850 response: 00:21:49.850 { 00:21:49.850 "code": -5, 00:21:49.850 "message": "Input/output error" 00:21:49.850 } 00:21:49.850 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:49.850 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:49.850 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:49.850 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:49.850 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:49.850 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.850 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.850 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.850 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:21:49.850 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.850 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.850 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:49.850 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:49.850 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:21:49.850 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:49.850 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect
00:21:49.850 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:21:49.850 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect
00:21:49.850 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:21:49.850 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:49.850 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:49.851 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:50.785 request:
00:21:50.785 {
00:21:50.785 "name": "nvme0",
00:21:50.785 "trtype": "tcp",
00:21:50.785 "traddr": "10.0.0.2",
00:21:50.785 "adrfam": "ipv4",
00:21:50.785 "trsvcid": "4420",
00:21:50.785 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:21:50.785 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:21:50.785 "prchk_reftag": false,
00:21:50.785 "prchk_guard": false,
00:21:50.785 "hdgst": false,
00:21:50.785 "ddgst": false,
00:21:50.785 "dhchap_key": "key1",
00:21:50.785 "dhchap_ctrlr_key": "ckey1",
00:21:50.785 "allow_unrecognized_csi": false,
00:21:50.785 "method": "bdev_nvme_attach_controller",
00:21:50.785 "req_id": 1
00:21:50.785 }
00:21:50.785 Got JSON-RPC error response
00:21:50.785 response:
00:21:50.785 {
00:21:50.785 "code": -5,
00:21:50.785 "message": "Input/output error"
00:21:50.785 }
00:21:50.785 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:21:50.785 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:21:50.785 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:21:50.785 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:21:50.785 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:50.785 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:50.785 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:50.785 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:50.785 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3151917
00:21:50.785 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3151917 ']'
00:21:50.785 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3151917
00:21:50.785 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname
00:21:50.785 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:21:50.785 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3151917
00:21:50.785 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:21:50.785 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:21:50.785 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3151917'
killing process with pid 3151917
00:21:50.785 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3151917
00:21:50.785 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3151917
00:21:52.157 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth
00:21:52.157 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:21:52.157 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable
00:21:52.157 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:52.157 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=3175503
00:21:52.157 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth
00:21:52.157 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 3175503
00:21:52.157 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3175503 ']'
00:21:52.157 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:52.157 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100
00:21:52.157 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:52.157 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable
00:21:52.157 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:53.088 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:21:53.088 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0
00:21:53.088 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt
00:21:53.088 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable
00:21:53.088 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:53.088 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:53.088 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT
00:21:53.088 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3175503
00:21:53.088 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3175503 ']'
00:21:53.088 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:53.088 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100
00:21:53.088 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:53.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:53.088 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable
00:21:53.088 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:53.346 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:21:53.346 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0
00:21:53.346 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd
00:21:53.346 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:53.346 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:53.912 null0
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.dHI
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.ddv ]]
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ddv
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.dh6
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.y0V ]]
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.y0V
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.bxT
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.L5D ]]
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.L5D
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.L7P
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]]
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:53.912 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:55.282 nvme0n1
00:21:55.282 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:55.282 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:55.282 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:55.846 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:55.846 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:55.846 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:55.846 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:55.846 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:55.846 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:55.846 {
00:21:55.846 "cntlid": 1,
00:21:55.846 "qid": 0,
00:21:55.846 "state": "enabled",
00:21:55.846 "thread": "nvmf_tgt_poll_group_000",
00:21:55.846 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:21:55.846 "listen_address": {
00:21:55.846 "trtype": "TCP",
00:21:55.846 "adrfam": "IPv4",
00:21:55.846 "traddr": "10.0.0.2",
00:21:55.846 "trsvcid": "4420"
00:21:55.846 },
00:21:55.846 "peer_address": {
00:21:55.846 "trtype": "TCP",
00:21:55.846 "adrfam": "IPv4",
00:21:55.846 "traddr": "10.0.0.1",
00:21:55.846 "trsvcid": "43030"
00:21:55.846 },
00:21:55.846 "auth": {
00:21:55.846 "state": "completed",
00:21:55.846 "digest": "sha512",
00:21:55.846 "dhgroup": "ffdhe8192"
00:21:55.846 }
00:21:55.846 }
00:21:55.846 ]'
00:21:55.846 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:55.846 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:55.846 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:55.846 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:21:55.846 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:55.846 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:55.846 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:55.846 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:56.104 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmQzMmQyMzZlNGExOTRhZGYxNmM1YWY1N2Q3OWFiOTAxODUyYmM3YzFjMzUyMjQwM2VkNjYyYTNiYzgwYzhkNwO/ryk=:
00:21:56.104 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YmQzMmQyMzZlNGExOTRhZGYxNmM1YWY1N2Q3OWFiOTAxODUyYmM3YzFjMzUyMjQwM2VkNjYyYTNiYzgwYzhkNwO/ryk=:
00:21:57.035 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:57.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:57.035 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:57.035 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:57.035 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:57.035 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:57.035 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:21:57.035 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:57.035 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:57.035 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:57.035 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256
00:21:57.035 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
00:21:57.292 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3
00:21:57.292 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:21:57.292 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3
00:21:57.292 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect
00:21:57.292 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:21:57.292 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect
00:21:57.292 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:21:57.292 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:57.292 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:57.292 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:57.550 request:
00:21:57.550 {
00:21:57.550 "name": "nvme0",
00:21:57.550 "trtype": "tcp",
00:21:57.550 "traddr": "10.0.0.2",
00:21:57.550 "adrfam": "ipv4",
00:21:57.550 "trsvcid": "4420",
00:21:57.550 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:21:57.550 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:21:57.550 "prchk_reftag": false,
00:21:57.550 "prchk_guard": false,
00:21:57.550 "hdgst": false,
00:21:57.550 "ddgst": false,
00:21:57.550 "dhchap_key": "key3",
00:21:57.550 "allow_unrecognized_csi": false,
00:21:57.550 "method": "bdev_nvme_attach_controller",
00:21:57.550 "req_id": 1
00:21:57.550 }
00:21:57.550 Got JSON-RPC error response
00:21:57.550 response:
00:21:57.550 {
00:21:57.550 "code": -5,
00:21:57.550 "message": "Input/output error"
00:21:57.550 }
00:21:57.550 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:21:57.550 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:21:57.550 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:21:57.550 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:21:57.550 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=,
00:21:57.550 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512
00:21:57.550 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:21:57.550 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:21:57.808 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3
00:21:57.808 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:21:57.808 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3
00:21:57.808 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect
00:21:57.808 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:21:57.808 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect
00:21:57.808 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:21:57.808 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:57.808 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:57.808 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:58.374 request:
00:21:58.374 {
00:21:58.374 "name": "nvme0",
00:21:58.374 "trtype": "tcp",
00:21:58.374 "traddr": "10.0.0.2",
00:21:58.374 "adrfam": "ipv4",
00:21:58.374 "trsvcid": "4420",
00:21:58.374 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:21:58.374 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:21:58.374 "prchk_reftag": false,
00:21:58.374 "prchk_guard": false,
00:21:58.374 "hdgst": false,
00:21:58.374 "ddgst": false,
00:21:58.374 "dhchap_key": "key3",
00:21:58.374 "allow_unrecognized_csi": false,
00:21:58.374 "method": "bdev_nvme_attach_controller",
00:21:58.374 "req_id": 1
00:21:58.374 }
00:21:58.374 Got JSON-RPC error response
00:21:58.374 response:
00:21:58.374 {
00:21:58.374 "code": -5,
00:21:58.374 "message": "Input/output error"
00:21:58.374 }
00:21:58.374 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:21:58.374 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:21:58.374 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:21:58.374 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:21:58.374 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:21:58.374 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512
00:21:58.374 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:21:58.374 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:21:58.374 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:21:58.374 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:21:58.374 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:58.374 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:58.374 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:58.631 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:58.631 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:58.631 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:58.631 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:58.631 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:58.631 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:21:58.631 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:21:58.631 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:21:58.631 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect
00:21:58.631 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:21:58.631 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect
00:21:58.631 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:21:58.631 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:21:58.631 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:21:58.631 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:21:59.195 request:
00:21:59.195 {
00:21:59.195 "name": "nvme0",
00:21:59.195 "trtype": "tcp",
00:21:59.195 "traddr": "10.0.0.2",
00:21:59.195 "adrfam": "ipv4",
00:21:59.195 "trsvcid": "4420",
00:21:59.195 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:21:59.195 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:21:59.195 "prchk_reftag": false,
00:21:59.195 "prchk_guard": false,
00:21:59.195 "hdgst": false,
00:21:59.195 "ddgst": false,
00:21:59.195 "dhchap_key": "key0",
00:21:59.195 "dhchap_ctrlr_key": "key1",
00:21:59.195 "allow_unrecognized_csi": false,
00:21:59.195 "method": "bdev_nvme_attach_controller",
00:21:59.195 "req_id": 1
00:21:59.195 }
00:21:59.195 Got JSON-RPC error response
00:21:59.195 response:
00:21:59.195 {
00:21:59.195 "code": -5,
00:21:59.195 "message": "Input/output error"
00:21:59.195 }
00:21:59.195 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:21:59.195 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:21:59.195 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:21:59.195 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:21:59.195 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0
00:21:59.195 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:21:59.195 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:21:59.453 nvme0n1
00:21:59.453 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers
00:21:59.453 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name'
00:21:59.453 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:59.711 16:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:59.711 16:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:59.711 16:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:59.969 16:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1
00:21:59.969 16:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:59.969 16:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:59.969 16:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:59.969 16:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1
00:21:59.969 16:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:21:59.969 16:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:22:01.870 nvme0n1
00:22:01.870 16:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers
00:22:01.870 16:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name'
00:22:01.870 16:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:01.870 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:01.870 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3
00:22:01.870 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:01.870 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:01.870 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:01.870 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers
00:22:01.870 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name'
00:22:01.870 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:02.128 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:02.128 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWQ1MzZiMzE5M2I5NWQ3NmM2NjFhYWVlYjJkMTRmNmRiNzk5NDM2MzA5ZmQ3YTNk3WThYw==: --dhchap-ctrl-secret DHHC-1:03:YmQzMmQyMzZlNGExOTRhZGYxNmM1YWY1N2Q3OWFiOTAxODUyYmM3YzFjMzUyMjQwM2VkNjYyYTNiYzgwYzhkNwO/ryk=:
00:22:02.128 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ZWQ1MzZiMzE5M2I5NWQ3NmM2NjFhYWVlYjJkMTRmNmRiNzk5NDM2MzA5ZmQ3YTNk3WThYw==: --dhchap-ctrl-secret DHHC-1:03:YmQzMmQyMzZlNGExOTRhZGYxNmM1YWY1N2Q3OWFiOTAxODUyYmM3YzFjMzUyMjQwM2VkNjYyYTNiYzgwYzhkNwO/ryk=:
00:22:03.108 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr
00:22:03.108 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev
00:22:03.108 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme*
00:22:03.108 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]]
00:22:03.108 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0
00:22:03.108 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break
00:22:03.108 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0
00:22:03.108 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:03.108 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:03.365 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1
00:22:03.365 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:22:03.365 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1
00:22:03.365 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect
00:22:03.365 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:03.365 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect
00:22:03.365 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:03.365 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1
00:22:03.365 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:03.366 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:04.300 request: 00:22:04.300 { 00:22:04.300 "name": "nvme0", 00:22:04.300 "trtype": "tcp", 00:22:04.300 "traddr": "10.0.0.2", 00:22:04.300 "adrfam": "ipv4", 00:22:04.300 "trsvcid": "4420", 00:22:04.300 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:04.300 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:04.300 "prchk_reftag": false, 00:22:04.300 "prchk_guard": false, 00:22:04.300 "hdgst": false, 00:22:04.300 "ddgst": false, 00:22:04.300 "dhchap_key": "key1", 00:22:04.300 "allow_unrecognized_csi": false, 00:22:04.300 "method": "bdev_nvme_attach_controller", 00:22:04.300 "req_id": 1 00:22:04.300 } 00:22:04.300 Got JSON-RPC error response 00:22:04.300 response: 00:22:04.300 { 00:22:04.300 "code": -5, 00:22:04.300 "message": "Input/output error" 00:22:04.300 } 00:22:04.300 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:04.300 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:04.300 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:04.300 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:04.300 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:04.300 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:04.300 16:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:05.672 nvme0n1 00:22:05.672 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:22:05.672 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:05.672 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.929 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.929 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:05.929 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.496 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:06.496 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.496 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:06.496 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.496 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:06.496 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:06.496 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:06.753 nvme0n1 00:22:06.753 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:06.753 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:06.753 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.011 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.011 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.011 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:07.269 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:07.269 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.269 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.269 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.269 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:OTQxNGM4YzkyOTYyOGQxOTU5YjdkNDM1MzA5MDcxOWPUhFu+: '' 2s 00:22:07.269 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:07.269 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:07.269 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:OTQxNGM4YzkyOTYyOGQxOTU5YjdkNDM1MzA5MDcxOWPUhFu+: 00:22:07.269 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:07.269 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:07.269 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:07.269 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:OTQxNGM4YzkyOTYyOGQxOTU5YjdkNDM1MzA5MDcxOWPUhFu+: ]] 00:22:07.269 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:OTQxNGM4YzkyOTYyOGQxOTU5YjdkNDM1MzA5MDcxOWPUhFu+: 00:22:07.269 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:07.269 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:07.269 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:09.225 
16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:09.225 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:22:09.225 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:22:09.225 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:22:09.225 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:22:09.225 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:22:09.225 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:22:09.225 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:09.225 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.225 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.225 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.225 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZWQ1MzZiMzE5M2I5NWQ3NmM2NjFhYWVlYjJkMTRmNmRiNzk5NDM2MzA5ZmQ3YTNk3WThYw==: 2s 00:22:09.225 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:09.225 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:09.225 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:09.225 16:30:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZWQ1MzZiMzE5M2I5NWQ3NmM2NjFhYWVlYjJkMTRmNmRiNzk5NDM2MzA5ZmQ3YTNk3WThYw==: 00:22:09.225 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:09.225 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:09.225 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:09.225 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZWQ1MzZiMzE5M2I5NWQ3NmM2NjFhYWVlYjJkMTRmNmRiNzk5NDM2MzA5ZmQ3YTNk3WThYw==: ]] 00:22:09.225 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZWQ1MzZiMzE5M2I5NWQ3NmM2NjFhYWVlYjJkMTRmNmRiNzk5NDM2MzA5ZmQ3YTNk3WThYw==: 00:22:09.225 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:09.225 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:11.756 16:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:11.756 16:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:22:11.756 16:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:22:11.756 16:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:22:11.756 16:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:22:11.756 16:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:22:11.756 16:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:22:11.756 16:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:11.756 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:11.756 16:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:11.756 16:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.756 16:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.756 16:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.756 16:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:11.756 16:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:11.756 16:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:13.130 nvme0n1 00:22:13.130 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:22:13.130 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.130 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.130 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.130 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:13.130 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:13.694 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:22:13.694 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:22:13.694 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.952 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.952 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:13.952 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.952 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.952 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.952 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:22:13.952 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:22:14.518 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:22:14.518 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:22:14.518 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.776 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.776 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:14.776 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.776 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.776 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.776 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:14.776 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:14.776 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:14.776 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:14.776 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:14.776 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:14.776 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:14.776 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:14.776 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:15.709 request: 00:22:15.709 { 00:22:15.709 "name": "nvme0", 00:22:15.709 "dhchap_key": "key1", 00:22:15.709 "dhchap_ctrlr_key": "key3", 00:22:15.709 "method": "bdev_nvme_set_keys", 00:22:15.709 "req_id": 1 00:22:15.709 } 00:22:15.709 Got JSON-RPC error response 00:22:15.709 response: 00:22:15.709 { 00:22:15.709 "code": -13, 00:22:15.709 "message": "Permission denied" 00:22:15.709 } 00:22:15.709 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:15.709 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:15.709 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:15.709 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:15.709 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:15.709 16:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:15.709 16:30:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.968 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:22:15.968 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:16.901 16:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:16.901 16:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:16.901 16:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.158 16:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:22:17.158 16:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:17.158 16:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.158 16:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.158 16:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.159 16:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:17.159 16:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:17.159 16:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:18.531 nvme0n1 00:22:18.531 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:18.531 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.531 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.531 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.531 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:18.531 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:18.531 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:18.531 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:18.531 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:18.531 16:30:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:18.531 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:18.531 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:18.531 16:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:19.466 request: 00:22:19.466 { 00:22:19.466 "name": "nvme0", 00:22:19.466 "dhchap_key": "key2", 00:22:19.466 "dhchap_ctrlr_key": "key0", 00:22:19.466 "method": "bdev_nvme_set_keys", 00:22:19.466 "req_id": 1 00:22:19.466 } 00:22:19.466 Got JSON-RPC error response 00:22:19.466 response: 00:22:19.466 { 00:22:19.466 "code": -13, 00:22:19.466 "message": "Permission denied" 00:22:19.466 } 00:22:19.466 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:19.466 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:19.466 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:19.466 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:19.466 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:19.466 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.466 16:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:20.030 16:30:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:20.030 16:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:20.963 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:20.963 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:20.963 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.220 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:21.221 16:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:22.154 16:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:22.154 16:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:22.154 16:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:22.413 16:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:22:22.413 16:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:22:22.413 16:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:22:22.413 16:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3152098 00:22:22.413 16:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3152098 ']' 00:22:22.413 16:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3152098 00:22:22.413 16:30:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:22.413 16:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:22.413 16:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3152098 00:22:22.413 16:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:22.413 16:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:22.413 16:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3152098' 00:22:22.413 killing process with pid 3152098 00:22:22.413 16:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3152098 00:22:22.413 16:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3152098 00:22:24.943 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:24.943 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:22:24.943 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:22:24.943 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:24.943 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:22:24.943 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:24.943 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:24.943 rmmod nvme_tcp 00:22:24.943 rmmod nvme_fabrics 00:22:24.943 rmmod nvme_keyring 00:22:24.943 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:22:24.943 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:22:24.943 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:22:24.943 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@513 -- # '[' -n 3175503 ']' 00:22:24.943 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # killprocess 3175503 00:22:24.943 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3175503 ']' 00:22:24.943 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3175503 00:22:24.943 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:24.943 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:24.943 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3175503 00:22:24.943 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:24.943 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:24.943 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3175503' 00:22:24.943 killing process with pid 3175503 00:22:24.943 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3175503 00:22:24.943 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3175503 00:22:26.318 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:22:26.318 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:22:26.318 16:30:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:22:26.318 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:22:26.318 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-save 00:22:26.318 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:22:26.318 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-restore 00:22:26.318 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:26.318 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:26.318 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.318 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:26.318 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.220 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:28.221 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.dHI /tmp/spdk.key-sha256.dh6 /tmp/spdk.key-sha384.bxT /tmp/spdk.key-sha512.L7P /tmp/spdk.key-sha512.ddv /tmp/spdk.key-sha384.y0V /tmp/spdk.key-sha256.L5D '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:28.221 00:22:28.221 real 3m48.039s 00:22:28.221 user 8m48.131s 00:22:28.221 sys 0m27.457s 00:22:28.221 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:28.221 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:28.221 ************************************ 00:22:28.221 END TEST nvmf_auth_target 00:22:28.221 ************************************ 00:22:28.221 16:30:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:28.221 16:30:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:28.221 16:30:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:22:28.221 16:30:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:28.221 16:30:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:28.221 ************************************ 00:22:28.221 START TEST nvmf_bdevio_no_huge 00:22:28.221 ************************************ 00:22:28.221 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:28.221 * Looking for test storage... 
00:22:28.221 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:28.221 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:28.221 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:22:28.221 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:28.480 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:28.480 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:28.480 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:28.480 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:28.480 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:28.480 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:28.480 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:28.480 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:28.480 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:22:28.480 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:22:28.480 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:28.480 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:28.480 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:28.480 16:30:28 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:28.480 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:28.480 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:28.480 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:28.480 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:28.480 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:28.481 16:30:28 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:28.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.481 --rc genhtml_branch_coverage=1 00:22:28.481 --rc genhtml_function_coverage=1 00:22:28.481 --rc genhtml_legend=1 00:22:28.481 --rc geninfo_all_blocks=1 00:22:28.481 --rc geninfo_unexecuted_blocks=1 00:22:28.481 00:22:28.481 ' 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:28.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.481 --rc genhtml_branch_coverage=1 00:22:28.481 --rc genhtml_function_coverage=1 00:22:28.481 --rc genhtml_legend=1 00:22:28.481 --rc geninfo_all_blocks=1 00:22:28.481 --rc geninfo_unexecuted_blocks=1 00:22:28.481 00:22:28.481 ' 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:28.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.481 --rc genhtml_branch_coverage=1 00:22:28.481 --rc genhtml_function_coverage=1 00:22:28.481 --rc genhtml_legend=1 00:22:28.481 --rc geninfo_all_blocks=1 00:22:28.481 --rc geninfo_unexecuted_blocks=1 00:22:28.481 00:22:28.481 ' 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:28.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.481 --rc genhtml_branch_coverage=1 00:22:28.481 --rc genhtml_function_coverage=1 00:22:28.481 --rc genhtml_legend=1 00:22:28.481 --rc geninfo_all_blocks=1 00:22:28.481 --rc geninfo_unexecuted_blocks=1 00:22:28.481 00:22:28.481 ' 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:28.481 
16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:28.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # prepare_net_devs 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@434 -- # local -g is_hw=no 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # remove_spdk_ns 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:22:28.481 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:30.391 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:30.391 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:30.391 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:22:30.391 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:30.391 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:30.391 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:30.391 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:30.391 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:30.391 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:30.391 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:30.391 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:30.391 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:30.392 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:30.392 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@407 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:30.392 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: 
cvl_0_1' 00:22:30.392 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # is_hw=yes 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:30.392 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:30.651 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:30.651 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:30.651 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:30.651 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:30.651 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:30.651 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:30.651 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:30.651 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:30.651 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:30.651 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:22:30.651 00:22:30.651 --- 10.0.0.2 ping statistics --- 00:22:30.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.651 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:22:30.651 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:30.651 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:30.651 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:22:30.651 00:22:30.651 --- 10.0.0.1 ping statistics --- 00:22:30.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.651 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:22:30.651 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:30.651 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # return 0 00:22:30.651 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:22:30.651 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:30.651 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:22:30.651 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:22:30.651 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:30.651 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:22:30.651 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:22:30.651 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:30.651 16:30:31 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:30.651 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:30.651 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:30.651 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # nvmfpid=3182036 00:22:30.651 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:30.651 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # waitforlisten 3182036 00:22:30.651 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 3182036 ']' 00:22:30.651 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.651 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:30.651 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:30.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:30.651 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:30.651 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:30.651 [2024-09-29 16:30:31.145040] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:22:30.651 [2024-09-29 16:30:31.145189] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:30.909 [2024-09-29 16:30:31.304306] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:31.167 [2024-09-29 16:30:31.591627] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:31.167 [2024-09-29 16:30:31.591732] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:31.167 [2024-09-29 16:30:31.591759] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:31.167 [2024-09-29 16:30:31.591784] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:31.167 [2024-09-29 16:30:31.591805] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:31.167 [2024-09-29 16:30:31.591953] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:22:31.167 [2024-09-29 16:30:31.592014] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:22:31.167 [2024-09-29 16:30:31.592080] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:22:31.167 [2024-09-29 16:30:31.592087] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:22:31.733 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:31.733 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:22:31.733 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:31.733 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:31.733 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:31.733 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:31.733 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:31.733 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.733 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:31.733 [2024-09-29 16:30:32.153944] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:31.733 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.733 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:31.733 16:30:32 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.733 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:31.733 Malloc0 00:22:31.733 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.733 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:31.733 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.733 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:31.733 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.733 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:31.733 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.733 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:31.733 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.733 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:31.733 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.733 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:31.733 [2024-09-29 16:30:32.244255] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:31.733 16:30:32 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.733 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:31.733 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:31.733 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # config=() 00:22:31.733 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # local subsystem config 00:22:31.733 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:31.733 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:31.733 { 00:22:31.733 "params": { 00:22:31.733 "name": "Nvme$subsystem", 00:22:31.733 "trtype": "$TEST_TRANSPORT", 00:22:31.733 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.733 "adrfam": "ipv4", 00:22:31.733 "trsvcid": "$NVMF_PORT", 00:22:31.733 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.733 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.733 "hdgst": ${hdgst:-false}, 00:22:31.733 "ddgst": ${ddgst:-false} 00:22:31.733 }, 00:22:31.733 "method": "bdev_nvme_attach_controller" 00:22:31.733 } 00:22:31.733 EOF 00:22:31.733 )") 00:22:31.733 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # cat 00:22:31.733 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # jq . 
00:22:31.733 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@581 -- # IFS=, 00:22:31.733 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:22:31.733 "params": { 00:22:31.733 "name": "Nvme1", 00:22:31.733 "trtype": "tcp", 00:22:31.733 "traddr": "10.0.0.2", 00:22:31.733 "adrfam": "ipv4", 00:22:31.733 "trsvcid": "4420", 00:22:31.733 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.733 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:31.733 "hdgst": false, 00:22:31.733 "ddgst": false 00:22:31.733 }, 00:22:31.733 "method": "bdev_nvme_attach_controller" 00:22:31.733 }' 00:22:31.991 [2024-09-29 16:30:32.330380] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:22:31.992 [2024-09-29 16:30:32.330530] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3182192 ] 00:22:31.992 [2024-09-29 16:30:32.480823] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:32.250 [2024-09-29 16:30:32.739538] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:32.250 [2024-09-29 16:30:32.739579] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.250 [2024-09-29 16:30:32.739588] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:32.815 I/O targets: 00:22:32.815 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:32.815 00:22:32.815 00:22:32.815 CUnit - A unit testing framework for C - Version 2.1-3 00:22:32.815 http://cunit.sourceforge.net/ 00:22:32.815 00:22:32.815 00:22:32.815 Suite: bdevio tests on: Nvme1n1 00:22:32.815 Test: blockdev write read block ...passed 00:22:32.815 Test: blockdev write zeroes read block ...passed 00:22:32.815 Test: blockdev write zeroes read no split ...passed 00:22:32.815 Test: blockdev write zeroes 
read split ...passed 00:22:33.073 Test: blockdev write zeroes read split partial ...passed 00:22:33.073 Test: blockdev reset ...[2024-09-29 16:30:33.401918] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:33.073 [2024-09-29 16:30:33.402121] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f1100 (9): Bad file descriptor 00:22:33.073 [2024-09-29 16:30:33.418550] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:33.073 passed 00:22:33.073 Test: blockdev write read 8 blocks ...passed 00:22:33.073 Test: blockdev write read size > 128k ...passed 00:22:33.073 Test: blockdev write read invalid size ...passed 00:22:33.073 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:33.073 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:33.073 Test: blockdev write read max offset ...passed 00:22:33.073 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:33.073 Test: blockdev writev readv 8 blocks ...passed 00:22:33.073 Test: blockdev writev readv 30 x 1block ...passed 00:22:33.073 Test: blockdev writev readv block ...passed 00:22:33.073 Test: blockdev writev readv size > 128k ...passed 00:22:33.073 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:33.073 Test: blockdev comparev and writev ...[2024-09-29 16:30:33.635195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:33.073 [2024-09-29 16:30:33.635263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.073 [2024-09-29 16:30:33.635310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:33.073 [2024-09-29 16:30:33.635337] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:33.073 [2024-09-29 16:30:33.635840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:33.073 [2024-09-29 16:30:33.635881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:33.073 [2024-09-29 16:30:33.635917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:33.073 [2024-09-29 16:30:33.635942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:33.331 [2024-09-29 16:30:33.636413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:33.331 [2024-09-29 16:30:33.636445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:33.331 [2024-09-29 16:30:33.636480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:33.331 [2024-09-29 16:30:33.636504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:33.331 [2024-09-29 16:30:33.636954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:33.331 [2024-09-29 16:30:33.636987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:33.331 [2024-09-29 16:30:33.637021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x200 00:22:33.331 [2024-09-29 16:30:33.637046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:33.331 passed 00:22:33.331 Test: blockdev nvme passthru rw ...passed 00:22:33.331 Test: blockdev nvme passthru vendor specific ...[2024-09-29 16:30:33.719140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:33.331 [2024-09-29 16:30:33.719204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:33.331 [2024-09-29 16:30:33.719459] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:33.331 [2024-09-29 16:30:33.719491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:33.331 [2024-09-29 16:30:33.719718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:33.331 [2024-09-29 16:30:33.719751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:33.331 [2024-09-29 16:30:33.719981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:33.331 [2024-09-29 16:30:33.720013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:33.331 passed 00:22:33.331 Test: blockdev nvme admin passthru ...passed 00:22:33.331 Test: blockdev copy ...passed 00:22:33.331 00:22:33.331 Run Summary: Type Total Ran Passed Failed Inactive 00:22:33.331 suites 1 1 n/a 0 0 00:22:33.331 tests 23 23 23 0 0 00:22:33.331 asserts 152 152 152 0 n/a 00:22:33.331 00:22:33.331 Elapsed time = 1.066 seconds 00:22:34.266 16:30:34 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:34.266 16:30:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.266 16:30:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:34.266 16:30:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.266 16:30:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:34.266 16:30:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:34.266 16:30:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # nvmfcleanup 00:22:34.266 16:30:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:34.266 16:30:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:34.266 16:30:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:34.266 16:30:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:34.266 16:30:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:34.266 rmmod nvme_tcp 00:22:34.266 rmmod nvme_fabrics 00:22:34.266 rmmod nvme_keyring 00:22:34.266 16:30:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:34.266 16:30:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:22:34.266 16:30:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:34.266 16:30:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@513 -- # '[' -n 3182036 ']' 00:22:34.266 16:30:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@514 -- # killprocess 3182036 00:22:34.266 16:30:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 3182036 ']' 00:22:34.266 16:30:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 3182036 00:22:34.266 16:30:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:22:34.266 16:30:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:34.266 16:30:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3182036 00:22:34.266 16:30:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:22:34.266 16:30:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:22:34.266 16:30:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3182036' 00:22:34.266 killing process with pid 3182036 00:22:34.266 16:30:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 3182036 00:22:34.266 16:30:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 3182036 00:22:35.201 16:30:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:22:35.201 16:30:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:22:35.201 16:30:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:22:35.201 16:30:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:35.201 16:30:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-save 00:22:35.201 16:30:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:22:35.201 16:30:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-restore 00:22:35.201 16:30:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:35.201 16:30:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:35.201 16:30:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.201 16:30:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:35.201 16:30:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.103 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:37.103 00:22:37.103 real 0m8.948s 00:22:37.103 user 0m20.112s 00:22:37.103 sys 0m2.971s 00:22:37.103 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:37.103 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:37.103 ************************************ 00:22:37.103 END TEST nvmf_bdevio_no_huge 00:22:37.103 ************************************ 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:37.362 ************************************ 00:22:37.362 START TEST nvmf_tls 
00:22:37.362 ************************************ 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:37.362 * Looking for test storage... 00:22:37.362 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:37.362 16:30:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 
'LCOV_OPTS= 00:22:37.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.362 --rc genhtml_branch_coverage=1 00:22:37.362 --rc genhtml_function_coverage=1 00:22:37.362 --rc genhtml_legend=1 00:22:37.362 --rc geninfo_all_blocks=1 00:22:37.362 --rc geninfo_unexecuted_blocks=1 00:22:37.362 00:22:37.362 ' 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:37.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.362 --rc genhtml_branch_coverage=1 00:22:37.362 --rc genhtml_function_coverage=1 00:22:37.362 --rc genhtml_legend=1 00:22:37.362 --rc geninfo_all_blocks=1 00:22:37.362 --rc geninfo_unexecuted_blocks=1 00:22:37.362 00:22:37.362 ' 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:37.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.362 --rc genhtml_branch_coverage=1 00:22:37.362 --rc genhtml_function_coverage=1 00:22:37.362 --rc genhtml_legend=1 00:22:37.362 --rc geninfo_all_blocks=1 00:22:37.362 --rc geninfo_unexecuted_blocks=1 00:22:37.362 00:22:37.362 ' 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:37.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.362 --rc genhtml_branch_coverage=1 00:22:37.362 --rc genhtml_function_coverage=1 00:22:37.362 --rc genhtml_legend=1 00:22:37.362 --rc geninfo_all_blocks=1 00:22:37.362 --rc geninfo_unexecuted_blocks=1 00:22:37.362 00:22:37.362 ' 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:37.362 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.363 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.363 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.363 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:37.363 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.363 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:37.363 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:37.363 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:37.363 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:37.363 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:37.363 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:37.363 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:37.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:37.363 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:37.363 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:37.363 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:37.363 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:37.363 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:37.363 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:22:37.363 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:37.363 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # prepare_net_devs 00:22:37.363 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@434 -- # local -g is_hw=no 00:22:37.363 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # remove_spdk_ns 00:22:37.363 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.363 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:37.363 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.363 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:22:37.363 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:22:37.363 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:22:37.363 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.888 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:39.888 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:22:39.888 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:39.888 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:39.889 Found 0000:0a:00.0 (0x8086 - 0x159b) 
00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:39.889 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:39.889 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:39.889 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # (( 2 
== 0 )) 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # is_hw=yes 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip 
netns add cvl_0_0_ns_spdk 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:39.889 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:39.889 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:39.889 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:39.889 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:39.889 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:39.889 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:39.889 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:22:39.889 00:22:39.889 --- 10.0.0.2 ping statistics --- 00:22:39.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.889 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:22:39.889 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:39.889 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:39.889 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:22:39.889 00:22:39.889 --- 10.0.0.1 ping statistics --- 00:22:39.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.889 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:22:39.889 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:39.889 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # return 0 00:22:39.889 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:22:39.889 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:39.889 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:22:39.889 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:22:39.889 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:39.889 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:22:39.889 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:22:39.889 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:39.889 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:39.890 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:39.890 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.890 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=3184528 00:22:39.890 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 --wait-for-rpc 00:22:39.890 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3184528 00:22:39.890 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3184528 ']' 00:22:39.890 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.890 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:39.890 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:39.890 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:39.890 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.890 [2024-09-29 16:30:40.160173] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:22:39.890 [2024-09-29 16:30:40.160328] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:39.890 [2024-09-29 16:30:40.301539] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.148 [2024-09-29 16:30:40.560776] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:40.148 [2024-09-29 16:30:40.560863] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:40.148 [2024-09-29 16:30:40.560889] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:40.148 [2024-09-29 16:30:40.560913] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:40.148 [2024-09-29 16:30:40.560932] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:40.148 [2024-09-29 16:30:40.560987] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.713 16:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:40.713 16:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:40.713 16:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:40.713 16:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:40.713 16:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:40.713 16:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:40.713 16:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:22:40.713 16:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:40.970 true 00:22:40.970 16:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:40.970 16:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:22:41.228 16:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:22:41.228 16:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:22:41.228 
16:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:41.486 16:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:41.486 16:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:22:41.743 16:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:22:41.743 16:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:22:41.743 16:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:42.001 16:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:42.001 16:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:22:42.259 16:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:22:42.259 16:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:22:42.259 16:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:42.259 16:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:22:42.885 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:22:42.885 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:22:42.885 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:22:43.188 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:43.188 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:43.188 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:22:43.188 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:22:43.188 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:43.445 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:43.445 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:43.703 16:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:22:43.703 16:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:22:43.703 16:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:43.703 16:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:43.703 16:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:22:43.703 16:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:22:43.703 16:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:22:43.703 16:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:22:43.703 16:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:22:43.960 16:30:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:43.960 16:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:43.960 16:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:43.960 16:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:22:43.960 16:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:22:43.960 16:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=ffeeddccbbaa99887766554433221100 00:22:43.960 16:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:22:43.960 16:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:22:43.960 16:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:43.960 16:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:43.960 16:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.IxBN9ielQ6 00:22:43.960 16:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:22:43.960 16:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.e1XkjMW6O6 00:22:43.960 16:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:43.960 16:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:43.960 16:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.IxBN9ielQ6 00:22:43.960 16:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.e1XkjMW6O6 00:22:43.960 16:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:44.218 16:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:44.783 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.IxBN9ielQ6 00:22:44.783 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.IxBN9ielQ6 00:22:44.783 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:45.042 [2024-09-29 16:30:45.491385] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:45.042 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:45.299 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:45.557 [2024-09-29 16:30:46.028940] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:45.557 [2024-09-29 16:30:46.029317] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:45.557 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:45.814 malloc0 00:22:45.814 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:46.071 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.IxBN9ielQ6 00:22:46.636 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:46.636 16:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.IxBN9ielQ6 00:22:58.832 Initializing NVMe Controllers 00:22:58.832 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:58.832 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:58.832 Initialization complete. Launching workers. 
00:22:58.832 ======================================================== 00:22:58.832 Latency(us) 00:22:58.832 Device Information : IOPS MiB/s Average min max 00:22:58.832 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5433.48 21.22 11784.18 1989.79 13146.15 00:22:58.832 ======================================================== 00:22:58.832 Total : 5433.48 21.22 11784.18 1989.79 13146.15 00:22:58.832 00:22:58.832 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IxBN9ielQ6 00:22:58.832 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:58.832 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:58.832 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:58.832 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.IxBN9ielQ6 00:22:58.832 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:58.832 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3186563 00:22:58.832 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:58.832 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:58.832 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3186563 /var/tmp/bdevperf.sock 00:22:58.832 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3186563 ']' 00:22:58.832 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:22:58.832 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:58.832 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:58.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:58.832 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:58.832 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:58.832 [2024-09-29 16:30:57.509402] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:22:58.832 [2024-09-29 16:30:57.509547] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3186563 ] 00:22:58.832 [2024-09-29 16:30:57.635088] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.832 [2024-09-29 16:30:57.857571] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:58.832 16:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:58.832 16:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:58.832 16:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.IxBN9ielQ6 00:22:58.832 16:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:58.832 [2024-09-29 16:30:59.023746] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:58.832 TLSTESTn1 00:22:58.832 16:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:58.832 Running I/O for 10 seconds... 00:23:09.042 2286.00 IOPS, 8.93 MiB/s 2374.50 IOPS, 9.28 MiB/s 2378.33 IOPS, 9.29 MiB/s 2359.75 IOPS, 9.22 MiB/s 2365.20 IOPS, 9.24 MiB/s 2375.50 IOPS, 9.28 MiB/s 2384.43 IOPS, 9.31 MiB/s 2398.62 IOPS, 9.37 MiB/s 2391.78 IOPS, 9.34 MiB/s 2398.80 IOPS, 9.37 MiB/s 00:23:09.042 Latency(us) 00:23:09.042 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.042 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:09.042 Verification LBA range: start 0x0 length 0x2000 00:23:09.042 TLSTESTn1 : 10.03 2403.53 9.39 0.00 0.00 53143.30 7524.50 82332.63 00:23:09.042 =================================================================================================================== 00:23:09.042 Total : 2403.53 9.39 0.00 0.00 53143.30 7524.50 82332.63 00:23:09.042 { 00:23:09.042 "results": [ 00:23:09.042 { 00:23:09.042 "job": "TLSTESTn1", 00:23:09.042 "core_mask": "0x4", 00:23:09.042 "workload": "verify", 00:23:09.042 "status": "finished", 00:23:09.042 "verify_range": { 00:23:09.042 "start": 0, 00:23:09.042 "length": 8192 00:23:09.042 }, 00:23:09.042 "queue_depth": 128, 00:23:09.042 "io_size": 4096, 00:23:09.042 "runtime": 10.032725, 00:23:09.042 "iops": 2403.534433566155, 00:23:09.042 "mibps": 9.388806381117792, 00:23:09.042 "io_failed": 0, 00:23:09.042 "io_timeout": 0, 00:23:09.042 "avg_latency_us": 53143.29638611656, 00:23:09.042 "min_latency_us": 7524.503703703704, 00:23:09.042 "max_latency_us": 82332.63407407407 00:23:09.042 } 00:23:09.042 
], 00:23:09.042 "core_count": 1 00:23:09.042 } 00:23:09.042 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:09.042 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3186563 00:23:09.042 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3186563 ']' 00:23:09.042 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3186563 00:23:09.042 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:09.042 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:09.042 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3186563 00:23:09.042 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:09.042 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:09.042 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3186563' 00:23:09.042 killing process with pid 3186563 00:23:09.042 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3186563 00:23:09.042 Received shutdown signal, test time was about 10.000000 seconds 00:23:09.042 00:23:09.042 Latency(us) 00:23:09.042 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.042 =================================================================================================================== 00:23:09.042 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:09.042 16:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3186563 00:23:09.977 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.e1XkjMW6O6 00:23:09.977 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:09.977 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.e1XkjMW6O6 00:23:09.977 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:09.977 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:09.977 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:09.977 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:09.977 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.e1XkjMW6O6 00:23:09.977 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:09.977 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:09.977 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:09.977 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.e1XkjMW6O6 00:23:09.977 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:09.977 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3188134 00:23:09.977 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:09.977 16:31:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:09.977 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3188134 /var/tmp/bdevperf.sock 00:23:09.977 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3188134 ']' 00:23:09.977 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:09.977 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:09.977 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:09.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:09.977 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:09.977 16:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.977 [2024-09-29 16:31:10.399614] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:23:09.977 [2024-09-29 16:31:10.399825] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3188134 ] 00:23:10.235 [2024-09-29 16:31:10.546282] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.235 [2024-09-29 16:31:10.772561] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:11.169 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:11.169 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:11.169 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.e1XkjMW6O6 00:23:11.169 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:11.427 [2024-09-29 16:31:11.940428] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:11.427 [2024-09-29 16:31:11.950440] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:11.427 [2024-09-29 16:31:11.951333] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:23:11.427 [2024-09-29 16:31:11.952307] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:23:11.427 
[2024-09-29 16:31:11.953301] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:11.427 [2024-09-29 16:31:11.953350] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:11.427 [2024-09-29 16:31:11.953374] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:11.427 [2024-09-29 16:31:11.953407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:11.427 request: 00:23:11.427 { 00:23:11.427 "name": "TLSTEST", 00:23:11.427 "trtype": "tcp", 00:23:11.427 "traddr": "10.0.0.2", 00:23:11.427 "adrfam": "ipv4", 00:23:11.427 "trsvcid": "4420", 00:23:11.427 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.427 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:11.427 "prchk_reftag": false, 00:23:11.427 "prchk_guard": false, 00:23:11.427 "hdgst": false, 00:23:11.427 "ddgst": false, 00:23:11.427 "psk": "key0", 00:23:11.427 "allow_unrecognized_csi": false, 00:23:11.427 "method": "bdev_nvme_attach_controller", 00:23:11.427 "req_id": 1 00:23:11.427 } 00:23:11.427 Got JSON-RPC error response 00:23:11.427 response: 00:23:11.427 { 00:23:11.427 "code": -5, 00:23:11.427 "message": "Input/output error" 00:23:11.427 } 00:23:11.427 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3188134 00:23:11.427 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3188134 ']' 00:23:11.428 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3188134 00:23:11.428 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:11.428 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:11.428 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3188134 00:23:11.686 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:11.686 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:11.686 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3188134' 00:23:11.686 killing process with pid 3188134 00:23:11.686 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3188134 00:23:11.686 Received shutdown signal, test time was about 10.000000 seconds 00:23:11.686 00:23:11.686 Latency(us) 00:23:11.686 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:11.686 =================================================================================================================== 00:23:11.686 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:11.686 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3188134 00:23:12.620 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:12.620 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:12.620 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:12.620 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:12.620 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:12.620 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.IxBN9ielQ6 00:23:12.620 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:12.620 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.IxBN9ielQ6 00:23:12.620 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:12.620 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:12.620 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:12.620 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:12.620 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.IxBN9ielQ6 00:23:12.620 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:12.620 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:12.620 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:12.620 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.IxBN9ielQ6 00:23:12.620 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:12.620 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3188411 00:23:12.620 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:12.620 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:12.620 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3188411 /var/tmp/bdevperf.sock 00:23:12.620 16:31:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3188411 ']' 00:23:12.620 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:12.620 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:12.620 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:12.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:12.620 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:12.620 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:12.620 [2024-09-29 16:31:13.074079] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:23:12.620 [2024-09-29 16:31:13.074212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3188411 ] 00:23:12.878 [2024-09-29 16:31:13.199715] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.879 [2024-09-29 16:31:13.428037] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:13.813 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:13.813 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:13.813 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.IxBN9ielQ6 00:23:14.071 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:14.329 [2024-09-29 16:31:14.695963] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:14.329 [2024-09-29 16:31:14.705945] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:14.329 [2024-09-29 16:31:14.706006] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:14.329 [2024-09-29 16:31:14.706085] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:14.329 [2024-09-29 16:31:14.706131] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:23:14.329 [2024-09-29 16:31:14.707106] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:23:14.329 [2024-09-29 16:31:14.708107] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:14.329 [2024-09-29 16:31:14.708135] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:14.329 [2024-09-29 16:31:14.708176] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:14.329 [2024-09-29 16:31:14.708203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:14.329 request: 00:23:14.329 { 00:23:14.329 "name": "TLSTEST", 00:23:14.329 "trtype": "tcp", 00:23:14.329 "traddr": "10.0.0.2", 00:23:14.329 "adrfam": "ipv4", 00:23:14.329 "trsvcid": "4420", 00:23:14.329 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.329 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:14.329 "prchk_reftag": false, 00:23:14.329 "prchk_guard": false, 00:23:14.329 "hdgst": false, 00:23:14.329 "ddgst": false, 00:23:14.329 "psk": "key0", 00:23:14.329 "allow_unrecognized_csi": false, 00:23:14.329 "method": "bdev_nvme_attach_controller", 00:23:14.329 "req_id": 1 00:23:14.329 } 00:23:14.329 Got JSON-RPC error response 00:23:14.329 response: 00:23:14.329 { 00:23:14.329 "code": -5, 00:23:14.329 "message": "Input/output error" 00:23:14.329 } 00:23:14.329 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3188411 00:23:14.329 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3188411 ']' 00:23:14.329 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3188411 00:23:14.329 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:14.329 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:14.329 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3188411 00:23:14.329 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:14.329 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:14.329 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3188411' 00:23:14.329 killing process with pid 3188411 00:23:14.329 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3188411 00:23:14.329 Received 
shutdown signal, test time was about 10.000000 seconds 00:23:14.329 00:23:14.329 Latency(us) 00:23:14.329 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.329 =================================================================================================================== 00:23:14.329 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:14.329 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3188411 00:23:15.262 16:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:15.262 16:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:15.262 16:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:15.262 16:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:15.262 16:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:15.262 16:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.IxBN9ielQ6 00:23:15.262 16:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:15.262 16:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.IxBN9ielQ6 00:23:15.262 16:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:15.262 16:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:15.262 16:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:15.262 16:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:15.262 16:31:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.IxBN9ielQ6 00:23:15.262 16:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:15.262 16:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:15.262 16:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:15.262 16:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.IxBN9ielQ6 00:23:15.262 16:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:15.262 16:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3188772 00:23:15.262 16:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:15.262 16:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:15.262 16:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3188772 /var/tmp/bdevperf.sock 00:23:15.262 16:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3188772 ']' 00:23:15.262 16:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:15.262 16:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:15.262 16:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:15.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:15.262 16:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:15.262 16:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.521 [2024-09-29 16:31:15.836539] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:23:15.521 [2024-09-29 16:31:15.836709] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3188772 ] 00:23:15.521 [2024-09-29 16:31:15.964315] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.779 [2024-09-29 16:31:16.185465] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:16.345 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:16.345 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:16.345 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.IxBN9ielQ6 00:23:16.604 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:16.862 [2024-09-29 16:31:17.368060] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:16.862 [2024-09-29 16:31:17.381304] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:16.862 [2024-09-29 16:31:17.381339] posix.c: 
574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:16.862 [2024-09-29 16:31:17.381425] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:16.862 [2024-09-29 16:31:17.381582] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:23:16.862 [2024-09-29 16:31:17.382548] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:23:16.862 [2024-09-29 16:31:17.383550] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:16.862 [2024-09-29 16:31:17.383578] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:16.862 [2024-09-29 16:31:17.383618] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:16.862 [2024-09-29 16:31:17.383660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:23:16.862 request: 00:23:16.862 { 00:23:16.862 "name": "TLSTEST", 00:23:16.862 "trtype": "tcp", 00:23:16.862 "traddr": "10.0.0.2", 00:23:16.862 "adrfam": "ipv4", 00:23:16.862 "trsvcid": "4420", 00:23:16.862 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:16.862 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:16.862 "prchk_reftag": false, 00:23:16.862 "prchk_guard": false, 00:23:16.863 "hdgst": false, 00:23:16.863 "ddgst": false, 00:23:16.863 "psk": "key0", 00:23:16.863 "allow_unrecognized_csi": false, 00:23:16.863 "method": "bdev_nvme_attach_controller", 00:23:16.863 "req_id": 1 00:23:16.863 } 00:23:16.863 Got JSON-RPC error response 00:23:16.863 response: 00:23:16.863 { 00:23:16.863 "code": -5, 00:23:16.863 "message": "Input/output error" 00:23:16.863 } 00:23:16.863 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3188772 00:23:16.863 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3188772 ']' 00:23:16.863 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3188772 00:23:16.863 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:16.863 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:16.863 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3188772 00:23:17.121 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:17.121 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:17.121 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3188772' 00:23:17.121 killing process with pid 3188772 00:23:17.121 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3188772 00:23:17.121 Received 
shutdown signal, test time was about 10.000000 seconds 00:23:17.121 00:23:17.121 Latency(us) 00:23:17.121 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.121 =================================================================================================================== 00:23:17.121 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:17.121 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3188772 00:23:18.055 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:18.055 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:18.055 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:18.055 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:18.055 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:18.055 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:18.055 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:18.055 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:18.055 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:18.055 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:18.055 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:18.055 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:18.055 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:18.055 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:18.055 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:18.055 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:18.055 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:18.055 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:18.055 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3189088 00:23:18.055 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:18.055 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:18.055 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3189088 /var/tmp/bdevperf.sock 00:23:18.055 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3189088 ']' 00:23:18.055 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:18.055 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:18.055 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:18.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:18.055 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:18.055 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:18.055 [2024-09-29 16:31:18.456610] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:23:18.055 [2024-09-29 16:31:18.456760] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3189088 ] 00:23:18.055 [2024-09-29 16:31:18.581807] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.314 [2024-09-29 16:31:18.800549] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:18.880 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:18.880 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:18.880 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:19.137 [2024-09-29 16:31:19.658051] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:19.137 [2024-09-29 16:31:19.658105] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:19.137 request: 00:23:19.137 { 00:23:19.137 "name": "key0", 00:23:19.137 "path": "", 00:23:19.137 "method": "keyring_file_add_key", 00:23:19.137 "req_id": 1 00:23:19.137 } 00:23:19.137 Got JSON-RPC error response 00:23:19.137 response: 00:23:19.137 { 00:23:19.137 "code": -1, 00:23:19.137 "message": "Operation not permitted" 00:23:19.137 } 00:23:19.138 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:19.396 [2024-09-29 16:31:19.918925] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:19.396 [2024-09-29 16:31:19.919007] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:19.396 request: 00:23:19.396 { 00:23:19.396 "name": "TLSTEST", 00:23:19.396 "trtype": "tcp", 00:23:19.396 "traddr": "10.0.0.2", 00:23:19.396 "adrfam": "ipv4", 00:23:19.396 "trsvcid": "4420", 00:23:19.396 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:19.396 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:19.396 "prchk_reftag": false, 00:23:19.396 "prchk_guard": false, 00:23:19.396 "hdgst": false, 00:23:19.396 "ddgst": false, 00:23:19.396 "psk": "key0", 00:23:19.396 "allow_unrecognized_csi": false, 00:23:19.396 "method": "bdev_nvme_attach_controller", 00:23:19.396 "req_id": 1 00:23:19.396 } 00:23:19.396 Got JSON-RPC error response 00:23:19.396 response: 00:23:19.396 { 00:23:19.396 "code": -126, 00:23:19.396 "message": "Required key not available" 00:23:19.396 } 00:23:19.396 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3189088 00:23:19.396 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3189088 ']' 00:23:19.396 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3189088 00:23:19.396 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:19.396 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:19.396 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3189088 00:23:19.654 16:31:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:19.654 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:19.654 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3189088' 00:23:19.654 killing process with pid 3189088 00:23:19.654 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3189088 00:23:19.654 Received shutdown signal, test time was about 10.000000 seconds 00:23:19.654 00:23:19.654 Latency(us) 00:23:19.654 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:19.654 =================================================================================================================== 00:23:19.654 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:19.654 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3189088 00:23:20.589 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:20.589 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:20.589 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:20.589 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:20.589 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:20.589 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3184528 00:23:20.589 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3184528 ']' 00:23:20.589 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3184528 00:23:20.589 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:20.589 16:31:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:20.589 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3184528 00:23:20.589 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:20.589 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:20.589 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3184528' 00:23:20.589 killing process with pid 3184528 00:23:20.589 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3184528 00:23:20.589 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3184528 00:23:21.965 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:21.965 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:21.965 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:23:21.965 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:23:21.965 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:21.965 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=2 00:23:21.965 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:23:22.250 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:22.250 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 
00:23:22.250 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.lZx38bCGaF 00:23:22.250 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:22.250 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.lZx38bCGaF 00:23:22.250 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:22.250 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:22.250 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:22.250 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.250 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=3189589 00:23:22.250 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:22.250 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3189589 00:23:22.250 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3189589 ']' 00:23:22.250 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:22.250 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:22.250 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:22.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
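The `format_interchange_psk` step above emits an NVMe TLS PSK interchange string: the fixed prefix `NVMeTLSkey-1`, a two-digit hash indicator (`02` here, which the NVMe/TCP transport spec assigns to SHA-384), and a base64 payload that is the configured key bytes followed by a 4-byte CRC32 trailer. A minimal Python re-derivation of that encoding follows; this is a sketch of the format, not SPDK's shipped `format_key` helper from `nvmf/common.sh`, and the use of zlib-style CRC32 is an assumption based on the interchange format:

```python
import base64
import struct
import zlib

def format_interchange_psk(key: str, hmac_id: int) -> str:
    """Sketch of the PSK interchange encoding seen in the log:
    base64(key bytes + little-endian CRC32 trailer), framed as
    'NVMeTLSkey-1:<hmac>:<base64>:'. CRC32 variant is an assumption."""
    data = key.encode("ascii")
    data += struct.pack("<I", zlib.crc32(data))  # append 4-byte CRC trailer
    return f"NVMeTLSkey-1:{hmac_id:02}:{base64.b64encode(data).decode()}:"

# The key material used by target/tls.sh@160 in the run above
key = "00112233445566778899aabbccddeeff0011223344556677"
print(format_interchange_psk(key, 2))
```

The resulting string is what the test then writes to the `mktemp` path and `chmod 0600`s before registering it with `keyring_file_add_key`.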
00:23:22.250 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:22.250 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.250 [2024-09-29 16:31:22.643056] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:23:22.250 [2024-09-29 16:31:22.643196] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:22.250 [2024-09-29 16:31:22.784179] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.508 [2024-09-29 16:31:23.036797] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:22.508 [2024-09-29 16:31:23.036886] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:22.508 [2024-09-29 16:31:23.036911] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:22.508 [2024-09-29 16:31:23.036936] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:22.508 [2024-09-29 16:31:23.036961] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:22.508 [2024-09-29 16:31:23.037012] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:23.494 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:23.494 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:23.494 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:23.494 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:23.494 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:23.494 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:23.494 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.lZx38bCGaF 00:23:23.494 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.lZx38bCGaF 00:23:23.494 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:23.494 [2024-09-29 16:31:23.978090] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:23.494 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:23.784 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:24.042 [2024-09-29 16:31:24.531575] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:24.042 [2024-09-29 16:31:24.531933] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:24.042 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:24.300 malloc0 00:23:24.557 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:24.814 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.lZx38bCGaF 00:23:25.071 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:25.330 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lZx38bCGaF 00:23:25.330 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:25.330 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:25.330 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:25.330 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.lZx38bCGaF 00:23:25.330 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:25.330 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3189931 00:23:25.330 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:25.330 16:31:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:25.330 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3189931 /var/tmp/bdevperf.sock 00:23:25.330 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3189931 ']' 00:23:25.330 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:25.330 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:25.330 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:25.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:25.330 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:25.330 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.330 [2024-09-29 16:31:25.741732] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:23:25.330 [2024-09-29 16:31:25.741864] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3189931 ] 00:23:25.330 [2024-09-29 16:31:25.863305] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.588 [2024-09-29 16:31:26.093148] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:26.155 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:26.155 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:26.155 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lZx38bCGaF 00:23:26.721 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:26.721 [2024-09-29 16:31:27.244013] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:26.981 TLSTESTn1 00:23:26.981 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:26.981 Running I/O for 10 seconds... 
00:23:37.190 2638.00 IOPS, 10.30 MiB/s 2664.00 IOPS, 10.41 MiB/s 2676.67 IOPS, 10.46 MiB/s 2689.50 IOPS, 10.51 MiB/s 2692.60 IOPS, 10.52 MiB/s 2697.17 IOPS, 10.54 MiB/s 2703.14 IOPS, 10.56 MiB/s 2706.62 IOPS, 10.57 MiB/s 2702.67 IOPS, 10.56 MiB/s 2706.60 IOPS, 10.57 MiB/s 00:23:37.190 Latency(us) 00:23:37.190 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:37.190 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:37.190 Verification LBA range: start 0x0 length 0x2000 00:23:37.190 TLSTESTn1 : 10.03 2710.80 10.59 0.00 0.00 47122.28 8107.05 45438.29 00:23:37.190 =================================================================================================================== 00:23:37.190 Total : 2710.80 10.59 0.00 0.00 47122.28 8107.05 45438.29 00:23:37.190 { 00:23:37.190 "results": [ 00:23:37.190 { 00:23:37.190 "job": "TLSTESTn1", 00:23:37.190 "core_mask": "0x4", 00:23:37.190 "workload": "verify", 00:23:37.190 "status": "finished", 00:23:37.190 "verify_range": { 00:23:37.190 "start": 0, 00:23:37.190 "length": 8192 00:23:37.190 }, 00:23:37.190 "queue_depth": 128, 00:23:37.190 "io_size": 4096, 00:23:37.190 "runtime": 10.03134, 00:23:37.190 "iops": 2710.804339200944, 00:23:37.190 "mibps": 10.589079450003688, 00:23:37.190 "io_failed": 0, 00:23:37.190 "io_timeout": 0, 00:23:37.190 "avg_latency_us": 47122.282658908676, 00:23:37.190 "min_latency_us": 8107.045925925926, 00:23:37.190 "max_latency_us": 45438.293333333335 00:23:37.190 } 00:23:37.190 ], 00:23:37.190 "core_count": 1 00:23:37.190 } 00:23:37.190 16:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:37.190 16:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3189931 00:23:37.190 16:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3189931 ']' 00:23:37.190 16:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
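The MiB/s column in the bdevperf table above is derived directly from IOPS and the 4 KiB I/O size configured with `-o 4096`. A quick re-check of the reported figures (values taken from the JSON result in the log):

```python
def iops_to_mibps(iops: float, io_size_bytes: int = 4096) -> float:
    # MiB/s = IOPS * bytes per I/O / 2^20
    return iops * io_size_bytes / (1024 * 1024)

# "iops" as reported by the TLSTESTn1 job above
iops = 2710.804339200944
print(iops_to_mibps(iops))  # ~10.589, matching the reported "mibps" field
```

With 4096-byte I/Os this reduces to IOPS / 256, which is why 2710.80 IOPS shows up as 10.59 MiB/s.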
common/autotest_common.sh@954 -- # kill -0 3189931 00:23:37.190 16:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:37.190 16:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:37.190 16:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3189931 00:23:37.190 16:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:37.190 16:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:37.190 16:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3189931' 00:23:37.190 killing process with pid 3189931 00:23:37.190 16:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3189931 00:23:37.190 Received shutdown signal, test time was about 10.000000 seconds 00:23:37.190 00:23:37.190 Latency(us) 00:23:37.190 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:37.190 =================================================================================================================== 00:23:37.190 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:37.190 16:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3189931 00:23:38.124 16:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.lZx38bCGaF 00:23:38.124 16:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lZx38bCGaF 00:23:38.124 16:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:38.124 16:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
/tmp/tmp.lZx38bCGaF 00:23:38.124 16:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:38.124 16:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:38.124 16:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:38.124 16:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:38.124 16:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lZx38bCGaF 00:23:38.124 16:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:38.124 16:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:38.124 16:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:38.124 16:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.lZx38bCGaF 00:23:38.124 16:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:38.125 16:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3191459 00:23:38.125 16:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:38.125 16:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:38.125 16:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3191459 /var/tmp/bdevperf.sock 00:23:38.125 16:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3191459 ']' 00:23:38.125 16:31:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:38.125 16:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:38.125 16:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:38.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:38.125 16:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:38.125 16:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.125 [2024-09-29 16:31:38.620593] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:23:38.125 [2024-09-29 16:31:38.620748] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3191459 ] 00:23:38.383 [2024-09-29 16:31:38.746503] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.641 [2024-09-29 16:31:38.970989] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:39.206 16:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:39.206 16:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:39.206 16:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lZx38bCGaF 00:23:39.464 [2024-09-29 16:31:39.893147] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.lZx38bCGaF': 0100666 00:23:39.464 [2024-09-29 16:31:39.893209] 
keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:39.464 request: 00:23:39.464 { 00:23:39.464 "name": "key0", 00:23:39.464 "path": "/tmp/tmp.lZx38bCGaF", 00:23:39.464 "method": "keyring_file_add_key", 00:23:39.464 "req_id": 1 00:23:39.464 } 00:23:39.464 Got JSON-RPC error response 00:23:39.464 response: 00:23:39.464 { 00:23:39.464 "code": -1, 00:23:39.464 "message": "Operation not permitted" 00:23:39.464 } 00:23:39.464 16:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:39.723 [2024-09-29 16:31:40.166110] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:39.723 [2024-09-29 16:31:40.166210] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:39.723 request: 00:23:39.723 { 00:23:39.723 "name": "TLSTEST", 00:23:39.723 "trtype": "tcp", 00:23:39.723 "traddr": "10.0.0.2", 00:23:39.723 "adrfam": "ipv4", 00:23:39.723 "trsvcid": "4420", 00:23:39.723 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.723 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:39.723 "prchk_reftag": false, 00:23:39.723 "prchk_guard": false, 00:23:39.723 "hdgst": false, 00:23:39.723 "ddgst": false, 00:23:39.723 "psk": "key0", 00:23:39.723 "allow_unrecognized_csi": false, 00:23:39.723 "method": "bdev_nvme_attach_controller", 00:23:39.723 "req_id": 1 00:23:39.723 } 00:23:39.723 Got JSON-RPC error response 00:23:39.723 response: 00:23:39.723 { 00:23:39.723 "code": -126, 00:23:39.723 "message": "Required key not available" 00:23:39.723 } 00:23:39.723 16:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3191459 00:23:39.723 16:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 
-- # '[' -z 3191459 ']' 00:23:39.723 16:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3191459 00:23:39.723 16:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:39.723 16:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:39.723 16:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3191459 00:23:39.723 16:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:39.723 16:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:39.723 16:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3191459' 00:23:39.723 killing process with pid 3191459 00:23:39.723 16:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3191459 00:23:39.723 Received shutdown signal, test time was about 10.000000 seconds 00:23:39.723 00:23:39.723 Latency(us) 00:23:39.723 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:39.723 =================================================================================================================== 00:23:39.723 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:39.723 16:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3191459 00:23:40.658 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:40.658 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:40.658 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:40.658 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:40.658 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:40.658 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3189589 00:23:40.658 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3189589 ']' 00:23:40.658 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3189589 00:23:40.658 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:40.658 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:40.658 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3189589 00:23:40.916 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:40.916 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:40.916 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3189589' 00:23:40.916 killing process with pid 3189589 00:23:40.916 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3189589 00:23:40.916 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3189589 00:23:42.291 16:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:42.291 16:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:42.291 16:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:42.291 16:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:42.291 16:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=3191926 00:23:42.291 16:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:42.291 16:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3191926 00:23:42.291 16:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3191926 ']' 00:23:42.291 16:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:42.291 16:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:42.291 16:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:42.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:42.291 16:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:42.291 16:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:42.291 [2024-09-29 16:31:42.824546] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:23:42.291 [2024-09-29 16:31:42.824708] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:42.549 [2024-09-29 16:31:42.968313] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.807 [2024-09-29 16:31:43.196168] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:42.807 [2024-09-29 16:31:43.196266] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:42.807 [2024-09-29 16:31:43.196289] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:42.807 [2024-09-29 16:31:43.196310] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:42.807 [2024-09-29 16:31:43.196327] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:42.807 [2024-09-29 16:31:43.196373] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:43.373 16:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:43.373 16:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:43.373 16:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:43.373 16:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:43.373 16:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:43.373 16:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:43.373 16:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.lZx38bCGaF 00:23:43.373 16:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:43.373 16:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.lZx38bCGaF 00:23:43.373 16:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:23:43.373 16:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:43.373 16:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:23:43.373 16:31:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:43.373 16:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.lZx38bCGaF 00:23:43.373 16:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.lZx38bCGaF 00:23:43.373 16:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:43.631 [2024-09-29 16:31:44.117413] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:43.631 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:44.196 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:44.196 [2024-09-29 16:31:44.711071] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:44.196 [2024-09-29 16:31:44.711418] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:44.196 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:44.762 malloc0 00:23:44.762 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:45.020 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.lZx38bCGaF 00:23:45.278 [2024-09-29 16:31:45.701179] 
keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.lZx38bCGaF': 0100666 00:23:45.278 [2024-09-29 16:31:45.701239] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:45.278 request: 00:23:45.278 { 00:23:45.278 "name": "key0", 00:23:45.278 "path": "/tmp/tmp.lZx38bCGaF", 00:23:45.278 "method": "keyring_file_add_key", 00:23:45.278 "req_id": 1 00:23:45.278 } 00:23:45.278 Got JSON-RPC error response 00:23:45.278 response: 00:23:45.278 { 00:23:45.278 "code": -1, 00:23:45.278 "message": "Operation not permitted" 00:23:45.278 } 00:23:45.278 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:45.536 [2024-09-29 16:31:45.965969] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:45.536 [2024-09-29 16:31:45.966041] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:45.536 request: 00:23:45.536 { 00:23:45.536 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:45.536 "host": "nqn.2016-06.io.spdk:host1", 00:23:45.536 "psk": "key0", 00:23:45.536 "method": "nvmf_subsystem_add_host", 00:23:45.536 "req_id": 1 00:23:45.536 } 00:23:45.536 Got JSON-RPC error response 00:23:45.536 response: 00:23:45.536 { 00:23:45.536 "code": -32603, 00:23:45.536 "message": "Internal error" 00:23:45.536 } 00:23:45.536 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:45.536 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:45.536 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:45.536 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:45.536 16:31:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3191926 00:23:45.536 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3191926 ']' 00:23:45.536 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3191926 00:23:45.536 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:45.536 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:45.536 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3191926 00:23:45.536 16:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:45.536 16:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:45.536 16:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3191926' 00:23:45.536 killing process with pid 3191926 00:23:45.536 16:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3191926 00:23:45.536 16:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3191926 00:23:47.437 16:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.lZx38bCGaF 00:23:47.437 16:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:47.437 16:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:47.437 16:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:47.437 16:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:47.437 16:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=3192486 00:23:47.437 16:31:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:47.437 16:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3192486 00:23:47.437 16:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3192486 ']' 00:23:47.437 16:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:47.437 16:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:47.437 16:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:47.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:47.437 16:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:47.437 16:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:47.437 [2024-09-29 16:31:47.572847] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:23:47.437 [2024-09-29 16:31:47.573011] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:47.438 [2024-09-29 16:31:47.710276] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:47.438 [2024-09-29 16:31:47.960251] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:47.438 [2024-09-29 16:31:47.960346] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:47.438 [2024-09-29 16:31:47.960373] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:47.438 [2024-09-29 16:31:47.960398] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:47.438 [2024-09-29 16:31:47.960419] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:47.438 [2024-09-29 16:31:47.960479] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:48.372 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:48.372 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:48.372 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:48.372 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:48.372 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:48.372 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:48.372 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.lZx38bCGaF 00:23:48.372 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.lZx38bCGaF 00:23:48.372 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:48.372 [2024-09-29 16:31:48.870846] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:48.372 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:48.629 16:31:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:48.886 [2024-09-29 16:31:49.412342] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:48.886 [2024-09-29 16:31:49.412709] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:48.886 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:49.451 malloc0 00:23:49.451 16:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:49.708 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.lZx38bCGaF 00:23:49.966 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:50.223 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3192904 00:23:50.223 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:50.223 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:50.223 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3192904 /var/tmp/bdevperf.sock 00:23:50.223 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' 
-z 3192904 ']' 00:23:50.223 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:50.223 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:50.223 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:50.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:50.223 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:50.223 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:50.223 [2024-09-29 16:31:50.665821] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:23:50.223 [2024-09-29 16:31:50.665962] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3192904 ] 00:23:50.481 [2024-09-29 16:31:50.792106] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.481 [2024-09-29 16:31:51.015535] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:51.413 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:51.413 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:51.413 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lZx38bCGaF 00:23:51.414 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:51.670 [2024-09-29 16:31:52.139076] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:51.670 TLSTESTn1 00:23:51.927 16:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:52.184 16:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:23:52.184 "subsystems": [ 00:23:52.184 { 00:23:52.184 "subsystem": "keyring", 00:23:52.184 "config": [ 00:23:52.184 { 00:23:52.184 "method": "keyring_file_add_key", 00:23:52.184 "params": { 00:23:52.184 "name": "key0", 00:23:52.184 "path": "/tmp/tmp.lZx38bCGaF" 00:23:52.184 } 00:23:52.184 } 00:23:52.184 ] 00:23:52.184 }, 00:23:52.184 { 00:23:52.184 "subsystem": "iobuf", 00:23:52.184 "config": [ 00:23:52.184 { 00:23:52.184 "method": "iobuf_set_options", 00:23:52.184 "params": { 00:23:52.184 "small_pool_count": 8192, 00:23:52.184 "large_pool_count": 1024, 00:23:52.184 "small_bufsize": 8192, 00:23:52.184 "large_bufsize": 135168 00:23:52.184 } 00:23:52.184 } 00:23:52.184 ] 00:23:52.184 }, 00:23:52.184 { 00:23:52.184 "subsystem": "sock", 00:23:52.184 "config": [ 00:23:52.184 { 00:23:52.184 "method": "sock_set_default_impl", 00:23:52.184 "params": { 00:23:52.184 "impl_name": "posix" 00:23:52.184 } 00:23:52.184 }, 00:23:52.184 { 00:23:52.184 "method": "sock_impl_set_options", 00:23:52.184 "params": { 00:23:52.184 "impl_name": "ssl", 00:23:52.184 "recv_buf_size": 4096, 00:23:52.184 "send_buf_size": 4096, 00:23:52.184 "enable_recv_pipe": true, 00:23:52.184 "enable_quickack": false, 00:23:52.184 "enable_placement_id": 0, 00:23:52.184 "enable_zerocopy_send_server": true, 00:23:52.184 "enable_zerocopy_send_client": false, 00:23:52.184 "zerocopy_threshold": 0, 00:23:52.184 "tls_version": 0, 
00:23:52.184 "enable_ktls": false 00:23:52.184 } 00:23:52.184 }, 00:23:52.184 { 00:23:52.184 "method": "sock_impl_set_options", 00:23:52.184 "params": { 00:23:52.184 "impl_name": "posix", 00:23:52.184 "recv_buf_size": 2097152, 00:23:52.184 "send_buf_size": 2097152, 00:23:52.184 "enable_recv_pipe": true, 00:23:52.184 "enable_quickack": false, 00:23:52.184 "enable_placement_id": 0, 00:23:52.184 "enable_zerocopy_send_server": true, 00:23:52.184 "enable_zerocopy_send_client": false, 00:23:52.184 "zerocopy_threshold": 0, 00:23:52.185 "tls_version": 0, 00:23:52.185 "enable_ktls": false 00:23:52.185 } 00:23:52.185 } 00:23:52.185 ] 00:23:52.185 }, 00:23:52.185 { 00:23:52.185 "subsystem": "vmd", 00:23:52.185 "config": [] 00:23:52.185 }, 00:23:52.185 { 00:23:52.185 "subsystem": "accel", 00:23:52.185 "config": [ 00:23:52.185 { 00:23:52.185 "method": "accel_set_options", 00:23:52.185 "params": { 00:23:52.185 "small_cache_size": 128, 00:23:52.185 "large_cache_size": 16, 00:23:52.185 "task_count": 2048, 00:23:52.185 "sequence_count": 2048, 00:23:52.185 "buf_count": 2048 00:23:52.185 } 00:23:52.185 } 00:23:52.185 ] 00:23:52.185 }, 00:23:52.185 { 00:23:52.185 "subsystem": "bdev", 00:23:52.185 "config": [ 00:23:52.185 { 00:23:52.185 "method": "bdev_set_options", 00:23:52.185 "params": { 00:23:52.185 "bdev_io_pool_size": 65535, 00:23:52.185 "bdev_io_cache_size": 256, 00:23:52.185 "bdev_auto_examine": true, 00:23:52.185 "iobuf_small_cache_size": 128, 00:23:52.185 "iobuf_large_cache_size": 16 00:23:52.185 } 00:23:52.185 }, 00:23:52.185 { 00:23:52.185 "method": "bdev_raid_set_options", 00:23:52.185 "params": { 00:23:52.185 "process_window_size_kb": 1024, 00:23:52.185 "process_max_bandwidth_mb_sec": 0 00:23:52.185 } 00:23:52.185 }, 00:23:52.185 { 00:23:52.185 "method": "bdev_iscsi_set_options", 00:23:52.185 "params": { 00:23:52.185 "timeout_sec": 30 00:23:52.185 } 00:23:52.185 }, 00:23:52.185 { 00:23:52.185 "method": "bdev_nvme_set_options", 00:23:52.185 "params": { 00:23:52.185 
"action_on_timeout": "none", 00:23:52.185 "timeout_us": 0, 00:23:52.185 "timeout_admin_us": 0, 00:23:52.185 "keep_alive_timeout_ms": 10000, 00:23:52.185 "arbitration_burst": 0, 00:23:52.185 "low_priority_weight": 0, 00:23:52.185 "medium_priority_weight": 0, 00:23:52.185 "high_priority_weight": 0, 00:23:52.185 "nvme_adminq_poll_period_us": 10000, 00:23:52.185 "nvme_ioq_poll_period_us": 0, 00:23:52.185 "io_queue_requests": 0, 00:23:52.185 "delay_cmd_submit": true, 00:23:52.185 "transport_retry_count": 4, 00:23:52.185 "bdev_retry_count": 3, 00:23:52.185 "transport_ack_timeout": 0, 00:23:52.185 "ctrlr_loss_timeout_sec": 0, 00:23:52.185 "reconnect_delay_sec": 0, 00:23:52.185 "fast_io_fail_timeout_sec": 0, 00:23:52.185 "disable_auto_failback": false, 00:23:52.185 "generate_uuids": false, 00:23:52.185 "transport_tos": 0, 00:23:52.185 "nvme_error_stat": false, 00:23:52.185 "rdma_srq_size": 0, 00:23:52.185 "io_path_stat": false, 00:23:52.185 "allow_accel_sequence": false, 00:23:52.185 "rdma_max_cq_size": 0, 00:23:52.185 "rdma_cm_event_timeout_ms": 0, 00:23:52.185 "dhchap_digests": [ 00:23:52.185 "sha256", 00:23:52.185 "sha384", 00:23:52.185 "sha512" 00:23:52.185 ], 00:23:52.185 "dhchap_dhgroups": [ 00:23:52.185 "null", 00:23:52.185 "ffdhe2048", 00:23:52.185 "ffdhe3072", 00:23:52.185 "ffdhe4096", 00:23:52.185 "ffdhe6144", 00:23:52.185 "ffdhe8192" 00:23:52.185 ] 00:23:52.185 } 00:23:52.185 }, 00:23:52.185 { 00:23:52.185 "method": "bdev_nvme_set_hotplug", 00:23:52.185 "params": { 00:23:52.185 "period_us": 100000, 00:23:52.185 "enable": false 00:23:52.185 } 00:23:52.185 }, 00:23:52.185 { 00:23:52.185 "method": "bdev_malloc_create", 00:23:52.185 "params": { 00:23:52.185 "name": "malloc0", 00:23:52.185 "num_blocks": 8192, 00:23:52.185 "block_size": 4096, 00:23:52.185 "physical_block_size": 4096, 00:23:52.185 "uuid": "aefdaf8d-3c0b-4253-b4d1-f4e45c8f2ad8", 00:23:52.185 "optimal_io_boundary": 0, 00:23:52.185 "md_size": 0, 00:23:52.185 "dif_type": 0, 00:23:52.185 
"dif_is_head_of_md": false, 00:23:52.185 "dif_pi_format": 0 00:23:52.185 } 00:23:52.185 }, 00:23:52.185 { 00:23:52.185 "method": "bdev_wait_for_examine" 00:23:52.185 } 00:23:52.185 ] 00:23:52.185 }, 00:23:52.185 { 00:23:52.185 "subsystem": "nbd", 00:23:52.185 "config": [] 00:23:52.185 }, 00:23:52.185 { 00:23:52.185 "subsystem": "scheduler", 00:23:52.185 "config": [ 00:23:52.185 { 00:23:52.185 "method": "framework_set_scheduler", 00:23:52.185 "params": { 00:23:52.185 "name": "static" 00:23:52.185 } 00:23:52.185 } 00:23:52.185 ] 00:23:52.185 }, 00:23:52.185 { 00:23:52.185 "subsystem": "nvmf", 00:23:52.185 "config": [ 00:23:52.185 { 00:23:52.185 "method": "nvmf_set_config", 00:23:52.185 "params": { 00:23:52.185 "discovery_filter": "match_any", 00:23:52.185 "admin_cmd_passthru": { 00:23:52.185 "identify_ctrlr": false 00:23:52.185 }, 00:23:52.185 "dhchap_digests": [ 00:23:52.185 "sha256", 00:23:52.185 "sha384", 00:23:52.185 "sha512" 00:23:52.185 ], 00:23:52.185 "dhchap_dhgroups": [ 00:23:52.185 "null", 00:23:52.185 "ffdhe2048", 00:23:52.185 "ffdhe3072", 00:23:52.185 "ffdhe4096", 00:23:52.185 "ffdhe6144", 00:23:52.185 "ffdhe8192" 00:23:52.185 ] 00:23:52.185 } 00:23:52.185 }, 00:23:52.185 { 00:23:52.185 "method": "nvmf_set_max_subsystems", 00:23:52.185 "params": { 00:23:52.185 "max_subsystems": 1024 00:23:52.185 } 00:23:52.185 }, 00:23:52.185 { 00:23:52.185 "method": "nvmf_set_crdt", 00:23:52.185 "params": { 00:23:52.185 "crdt1": 0, 00:23:52.185 "crdt2": 0, 00:23:52.185 "crdt3": 0 00:23:52.185 } 00:23:52.185 }, 00:23:52.185 { 00:23:52.185 "method": "nvmf_create_transport", 00:23:52.185 "params": { 00:23:52.185 "trtype": "TCP", 00:23:52.185 "max_queue_depth": 128, 00:23:52.185 "max_io_qpairs_per_ctrlr": 127, 00:23:52.185 "in_capsule_data_size": 4096, 00:23:52.185 "max_io_size": 131072, 00:23:52.185 "io_unit_size": 131072, 00:23:52.185 "max_aq_depth": 128, 00:23:52.185 "num_shared_buffers": 511, 00:23:52.185 "buf_cache_size": 4294967295, 00:23:52.185 "dif_insert_or_strip": 
false, 00:23:52.185 "zcopy": false, 00:23:52.185 "c2h_success": false, 00:23:52.185 "sock_priority": 0, 00:23:52.185 "abort_timeout_sec": 1, 00:23:52.185 "ack_timeout": 0, 00:23:52.185 "data_wr_pool_size": 0 00:23:52.185 } 00:23:52.185 }, 00:23:52.185 { 00:23:52.185 "method": "nvmf_create_subsystem", 00:23:52.185 "params": { 00:23:52.185 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.185 "allow_any_host": false, 00:23:52.185 "serial_number": "SPDK00000000000001", 00:23:52.185 "model_number": "SPDK bdev Controller", 00:23:52.185 "max_namespaces": 10, 00:23:52.185 "min_cntlid": 1, 00:23:52.185 "max_cntlid": 65519, 00:23:52.185 "ana_reporting": false 00:23:52.185 } 00:23:52.185 }, 00:23:52.185 { 00:23:52.185 "method": "nvmf_subsystem_add_host", 00:23:52.185 "params": { 00:23:52.185 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.185 "host": "nqn.2016-06.io.spdk:host1", 00:23:52.185 "psk": "key0" 00:23:52.185 } 00:23:52.185 }, 00:23:52.185 { 00:23:52.185 "method": "nvmf_subsystem_add_ns", 00:23:52.185 "params": { 00:23:52.185 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.185 "namespace": { 00:23:52.185 "nsid": 1, 00:23:52.185 "bdev_name": "malloc0", 00:23:52.185 "nguid": "AEFDAF8D3C0B4253B4D1F4E45C8F2AD8", 00:23:52.185 "uuid": "aefdaf8d-3c0b-4253-b4d1-f4e45c8f2ad8", 00:23:52.185 "no_auto_visible": false 00:23:52.185 } 00:23:52.185 } 00:23:52.185 }, 00:23:52.185 { 00:23:52.185 "method": "nvmf_subsystem_add_listener", 00:23:52.185 "params": { 00:23:52.185 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.185 "listen_address": { 00:23:52.185 "trtype": "TCP", 00:23:52.185 "adrfam": "IPv4", 00:23:52.185 "traddr": "10.0.0.2", 00:23:52.185 "trsvcid": "4420" 00:23:52.185 }, 00:23:52.185 "secure_channel": true 00:23:52.185 } 00:23:52.185 } 00:23:52.185 ] 00:23:52.185 } 00:23:52.185 ] 00:23:52.185 }' 00:23:52.185 16:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 
00:23:52.443 16:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:23:52.443 "subsystems": [ 00:23:52.443 { 00:23:52.443 "subsystem": "keyring", 00:23:52.443 "config": [ 00:23:52.443 { 00:23:52.443 "method": "keyring_file_add_key", 00:23:52.443 "params": { 00:23:52.443 "name": "key0", 00:23:52.443 "path": "/tmp/tmp.lZx38bCGaF" 00:23:52.443 } 00:23:52.443 } 00:23:52.443 ] 00:23:52.443 }, 00:23:52.443 { 00:23:52.443 "subsystem": "iobuf", 00:23:52.443 "config": [ 00:23:52.443 { 00:23:52.443 "method": "iobuf_set_options", 00:23:52.443 "params": { 00:23:52.443 "small_pool_count": 8192, 00:23:52.443 "large_pool_count": 1024, 00:23:52.443 "small_bufsize": 8192, 00:23:52.443 "large_bufsize": 135168 00:23:52.443 } 00:23:52.443 } 00:23:52.443 ] 00:23:52.443 }, 00:23:52.443 { 00:23:52.443 "subsystem": "sock", 00:23:52.443 "config": [ 00:23:52.443 { 00:23:52.443 "method": "sock_set_default_impl", 00:23:52.443 "params": { 00:23:52.443 "impl_name": "posix" 00:23:52.443 } 00:23:52.443 }, 00:23:52.443 { 00:23:52.443 "method": "sock_impl_set_options", 00:23:52.443 "params": { 00:23:52.443 "impl_name": "ssl", 00:23:52.443 "recv_buf_size": 4096, 00:23:52.443 "send_buf_size": 4096, 00:23:52.443 "enable_recv_pipe": true, 00:23:52.443 "enable_quickack": false, 00:23:52.443 "enable_placement_id": 0, 00:23:52.443 "enable_zerocopy_send_server": true, 00:23:52.443 "enable_zerocopy_send_client": false, 00:23:52.443 "zerocopy_threshold": 0, 00:23:52.443 "tls_version": 0, 00:23:52.443 "enable_ktls": false 00:23:52.443 } 00:23:52.443 }, 00:23:52.443 { 00:23:52.443 "method": "sock_impl_set_options", 00:23:52.443 "params": { 00:23:52.443 "impl_name": "posix", 00:23:52.443 "recv_buf_size": 2097152, 00:23:52.443 "send_buf_size": 2097152, 00:23:52.443 "enable_recv_pipe": true, 00:23:52.443 "enable_quickack": false, 00:23:52.443 "enable_placement_id": 0, 00:23:52.443 "enable_zerocopy_send_server": true, 00:23:52.443 "enable_zerocopy_send_client": false, 
00:23:52.443 "zerocopy_threshold": 0, 00:23:52.443 "tls_version": 0, 00:23:52.443 "enable_ktls": false 00:23:52.443 } 00:23:52.443 } 00:23:52.443 ] 00:23:52.443 }, 00:23:52.444 { 00:23:52.444 "subsystem": "vmd", 00:23:52.444 "config": [] 00:23:52.444 }, 00:23:52.444 { 00:23:52.444 "subsystem": "accel", 00:23:52.444 "config": [ 00:23:52.444 { 00:23:52.444 "method": "accel_set_options", 00:23:52.444 "params": { 00:23:52.444 "small_cache_size": 128, 00:23:52.444 "large_cache_size": 16, 00:23:52.444 "task_count": 2048, 00:23:52.444 "sequence_count": 2048, 00:23:52.444 "buf_count": 2048 00:23:52.444 } 00:23:52.444 } 00:23:52.444 ] 00:23:52.444 }, 00:23:52.444 { 00:23:52.444 "subsystem": "bdev", 00:23:52.444 "config": [ 00:23:52.444 { 00:23:52.444 "method": "bdev_set_options", 00:23:52.444 "params": { 00:23:52.444 "bdev_io_pool_size": 65535, 00:23:52.444 "bdev_io_cache_size": 256, 00:23:52.444 "bdev_auto_examine": true, 00:23:52.444 "iobuf_small_cache_size": 128, 00:23:52.444 "iobuf_large_cache_size": 16 00:23:52.444 } 00:23:52.444 }, 00:23:52.444 { 00:23:52.444 "method": "bdev_raid_set_options", 00:23:52.444 "params": { 00:23:52.444 "process_window_size_kb": 1024, 00:23:52.444 "process_max_bandwidth_mb_sec": 0 00:23:52.444 } 00:23:52.444 }, 00:23:52.444 { 00:23:52.444 "method": "bdev_iscsi_set_options", 00:23:52.444 "params": { 00:23:52.444 "timeout_sec": 30 00:23:52.444 } 00:23:52.444 }, 00:23:52.444 { 00:23:52.444 "method": "bdev_nvme_set_options", 00:23:52.444 "params": { 00:23:52.444 "action_on_timeout": "none", 00:23:52.444 "timeout_us": 0, 00:23:52.444 "timeout_admin_us": 0, 00:23:52.444 "keep_alive_timeout_ms": 10000, 00:23:52.444 "arbitration_burst": 0, 00:23:52.444 "low_priority_weight": 0, 00:23:52.444 "medium_priority_weight": 0, 00:23:52.444 "high_priority_weight": 0, 00:23:52.444 "nvme_adminq_poll_period_us": 10000, 00:23:52.444 "nvme_ioq_poll_period_us": 0, 00:23:52.444 "io_queue_requests": 512, 00:23:52.444 "delay_cmd_submit": true, 00:23:52.444 
"transport_retry_count": 4, 00:23:52.444 "bdev_retry_count": 3, 00:23:52.444 "transport_ack_timeout": 0, 00:23:52.444 "ctrlr_loss_timeout_sec": 0, 00:23:52.444 "reconnect_delay_sec": 0, 00:23:52.444 "fast_io_fail_timeout_sec": 0, 00:23:52.444 "disable_auto_failback": false, 00:23:52.444 "generate_uuids": false, 00:23:52.444 "transport_tos": 0, 00:23:52.444 "nvme_error_stat": false, 00:23:52.444 "rdma_srq_size": 0, 00:23:52.444 "io_path_stat": false, 00:23:52.444 "allow_accel_sequence": false, 00:23:52.444 "rdma_max_cq_size": 0, 00:23:52.444 "rdma_cm_event_timeout_ms": 0, 00:23:52.444 "dhchap_digests": [ 00:23:52.444 "sha256", 00:23:52.444 "sha384", 00:23:52.444 "sha512" 00:23:52.444 ], 00:23:52.444 "dhchap_dhgroups": [ 00:23:52.444 "null", 00:23:52.444 "ffdhe2048", 00:23:52.444 "ffdhe3072", 00:23:52.444 "ffdhe4096", 00:23:52.444 "ffdhe6144", 00:23:52.444 "ffdhe8192" 00:23:52.444 ] 00:23:52.444 } 00:23:52.444 }, 00:23:52.444 { 00:23:52.444 "method": "bdev_nvme_attach_controller", 00:23:52.444 "params": { 00:23:52.444 "name": "TLSTEST", 00:23:52.444 "trtype": "TCP", 00:23:52.444 "adrfam": "IPv4", 00:23:52.444 "traddr": "10.0.0.2", 00:23:52.444 "trsvcid": "4420", 00:23:52.444 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.444 "prchk_reftag": false, 00:23:52.444 "prchk_guard": false, 00:23:52.444 "ctrlr_loss_timeout_sec": 0, 00:23:52.444 "reconnect_delay_sec": 0, 00:23:52.444 "fast_io_fail_timeout_sec": 0, 00:23:52.444 "psk": "key0", 00:23:52.444 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:52.444 "hdgst": false, 00:23:52.444 "ddgst": false 00:23:52.444 } 00:23:52.444 }, 00:23:52.444 { 00:23:52.444 "method": "bdev_nvme_set_hotplug", 00:23:52.444 "params": { 00:23:52.444 "period_us": 100000, 00:23:52.444 "enable": false 00:23:52.444 } 00:23:52.444 }, 00:23:52.444 { 00:23:52.444 "method": "bdev_wait_for_examine" 00:23:52.444 } 00:23:52.444 ] 00:23:52.444 }, 00:23:52.444 { 00:23:52.444 "subsystem": "nbd", 00:23:52.444 "config": [] 00:23:52.444 } 00:23:52.444 ] 
00:23:52.444 }' 00:23:52.444 16:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3192904 00:23:52.444 16:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3192904 ']' 00:23:52.444 16:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3192904 00:23:52.444 16:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:52.444 16:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:52.444 16:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3192904 00:23:52.444 16:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:52.444 16:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:52.444 16:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3192904' 00:23:52.444 killing process with pid 3192904 00:23:52.444 16:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3192904 00:23:52.444 Received shutdown signal, test time was about 10.000000 seconds 00:23:52.444 00:23:52.444 Latency(us) 00:23:52.444 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:52.444 =================================================================================================================== 00:23:52.444 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:52.444 16:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3192904 00:23:53.376 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3192486 00:23:53.376 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3192486 ']' 00:23:53.376 16:31:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3192486 00:23:53.376 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:53.376 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:53.376 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3192486 00:23:53.634 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:53.634 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:53.634 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3192486' 00:23:53.634 killing process with pid 3192486 00:23:53.634 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3192486 00:23:53.634 16:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3192486 00:23:55.011 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:55.011 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:55.011 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:23:55.011 "subsystems": [ 00:23:55.011 { 00:23:55.011 "subsystem": "keyring", 00:23:55.011 "config": [ 00:23:55.011 { 00:23:55.011 "method": "keyring_file_add_key", 00:23:55.011 "params": { 00:23:55.011 "name": "key0", 00:23:55.011 "path": "/tmp/tmp.lZx38bCGaF" 00:23:55.011 } 00:23:55.011 } 00:23:55.011 ] 00:23:55.011 }, 00:23:55.011 { 00:23:55.011 "subsystem": "iobuf", 00:23:55.011 "config": [ 00:23:55.011 { 00:23:55.011 "method": "iobuf_set_options", 00:23:55.011 "params": { 00:23:55.011 "small_pool_count": 8192, 00:23:55.011 "large_pool_count": 1024, 00:23:55.011 "small_bufsize": 
8192, 00:23:55.011 "large_bufsize": 135168 00:23:55.011 } 00:23:55.011 } 00:23:55.011 ] 00:23:55.011 }, 00:23:55.011 { 00:23:55.011 "subsystem": "sock", 00:23:55.011 "config": [ 00:23:55.011 { 00:23:55.011 "method": "sock_set_default_impl", 00:23:55.011 "params": { 00:23:55.011 "impl_name": "posix" 00:23:55.011 } 00:23:55.011 }, 00:23:55.011 { 00:23:55.011 "method": "sock_impl_set_options", 00:23:55.011 "params": { 00:23:55.011 "impl_name": "ssl", 00:23:55.011 "recv_buf_size": 4096, 00:23:55.011 "send_buf_size": 4096, 00:23:55.011 "enable_recv_pipe": true, 00:23:55.011 "enable_quickack": false, 00:23:55.011 "enable_placement_id": 0, 00:23:55.011 "enable_zerocopy_send_server": true, 00:23:55.011 "enable_zerocopy_send_client": false, 00:23:55.011 "zerocopy_threshold": 0, 00:23:55.011 "tls_version": 0, 00:23:55.011 "enable_ktls": false 00:23:55.011 } 00:23:55.011 }, 00:23:55.011 { 00:23:55.011 "method": "sock_impl_set_options", 00:23:55.011 "params": { 00:23:55.011 "impl_name": "posix", 00:23:55.011 "recv_buf_size": 2097152, 00:23:55.011 "send_buf_size": 2097152, 00:23:55.011 "enable_recv_pipe": true, 00:23:55.011 "enable_quickack": false, 00:23:55.011 "enable_placement_id": 0, 00:23:55.011 "enable_zerocopy_send_server": true, 00:23:55.011 "enable_zerocopy_send_client": false, 00:23:55.011 "zerocopy_threshold": 0, 00:23:55.011 "tls_version": 0, 00:23:55.011 "enable_ktls": false 00:23:55.011 } 00:23:55.011 } 00:23:55.011 ] 00:23:55.011 }, 00:23:55.011 { 00:23:55.011 "subsystem": "vmd", 00:23:55.011 "config": [] 00:23:55.011 }, 00:23:55.011 { 00:23:55.011 "subsystem": "accel", 00:23:55.011 "config": [ 00:23:55.011 { 00:23:55.011 "method": "accel_set_options", 00:23:55.011 "params": { 00:23:55.011 "small_cache_size": 128, 00:23:55.011 "large_cache_size": 16, 00:23:55.011 "task_count": 2048, 00:23:55.011 "sequence_count": 2048, 00:23:55.011 "buf_count": 2048 00:23:55.011 } 00:23:55.011 } 00:23:55.011 ] 00:23:55.011 }, 00:23:55.011 { 00:23:55.011 "subsystem": "bdev", 
00:23:55.011 "config": [ 00:23:55.011 { 00:23:55.011 "method": "bdev_set_options", 00:23:55.011 "params": { 00:23:55.011 "bdev_io_pool_size": 65535, 00:23:55.011 "bdev_io_cache_size": 256, 00:23:55.011 "bdev_auto_examine": true, 00:23:55.011 "iobuf_small_cache_size": 128, 00:23:55.011 "iobuf_large_cache_size": 16 00:23:55.011 } 00:23:55.011 }, 00:23:55.011 { 00:23:55.011 "method": "bdev_raid_set_options", 00:23:55.011 "params": { 00:23:55.011 "process_window_size_kb": 1024, 00:23:55.011 "process_max_bandwidth_mb_sec": 0 00:23:55.011 } 00:23:55.011 }, 00:23:55.011 { 00:23:55.011 "method": "bdev_iscsi_set_options", 00:23:55.011 "params": { 00:23:55.011 "timeout_sec": 30 00:23:55.011 } 00:23:55.011 }, 00:23:55.011 { 00:23:55.011 "method": "bdev_nvme_set_options", 00:23:55.011 "params": { 00:23:55.011 "action_on_timeout": "none", 00:23:55.011 "timeout_us": 0, 00:23:55.011 "timeout_admin_us": 0, 00:23:55.011 "keep_alive_timeout_ms": 10000, 00:23:55.011 "arbitration_burst": 0, 00:23:55.011 "low_priority_weight": 0, 00:23:55.011 "medium_priority_weight": 0, 00:23:55.011 "high_priority_weight": 0, 00:23:55.011 "nvme_adminq_poll_period_us": 10000, 00:23:55.011 "nvme_ioq_poll_period_us": 0, 00:23:55.011 "io_queue_requests": 0, 00:23:55.011 "delay_cmd_submit": true, 00:23:55.011 "transport_retry_count": 4, 00:23:55.011 "bdev_retry_count": 3, 00:23:55.011 "transport_ack_timeout": 0, 00:23:55.011 "ctrlr_loss_timeout_sec": 0, 00:23:55.011 "reconnect_delay_sec": 0, 00:23:55.011 "fast_io_fail_timeout_sec": 0, 00:23:55.011 "disable_auto_failback": false, 00:23:55.011 "generate_uuids": false, 00:23:55.011 "transport_tos": 0, 00:23:55.011 "nvme_error_stat": false, 00:23:55.011 "rdma_srq_size": 0, 00:23:55.011 "io_path_stat": false, 00:23:55.011 "allow_accel_sequence": false, 00:23:55.011 "rdma_max_cq_size": 0, 00:23:55.011 "rdma_cm_event_timeout_ms": 0, 00:23:55.011 "dhchap_digests": [ 00:23:55.011 "sha256", 00:23:55.011 "sha384", 00:23:55.011 "sha512" 00:23:55.011 ], 00:23:55.011 
"dhchap_dhgroups": [ 00:23:55.011 "null", 00:23:55.011 "ffdhe2048", 00:23:55.011 "ffdhe3072", 00:23:55.011 "ffdhe4096", 00:23:55.011 "ffdhe6144", 00:23:55.011 "ffdhe8192" 00:23:55.011 ] 00:23:55.011 } 00:23:55.011 }, 00:23:55.011 { 00:23:55.011 "method": "bdev_nvme_set_hotplug", 00:23:55.011 "params": { 00:23:55.011 "period_us": 100000, 00:23:55.011 "enable": false 00:23:55.011 } 00:23:55.011 }, 00:23:55.011 { 00:23:55.011 "method": "bdev_malloc_create", 00:23:55.011 "params": { 00:23:55.011 "name": "malloc0", 00:23:55.011 "num_blocks": 8192, 00:23:55.011 "block_size": 4096, 00:23:55.011 "physical_block_size": 4096, 00:23:55.012 "uuid": "aefdaf8d-3c0b-4253-b4d1-f4e45c8f2ad8", 00:23:55.012 "optimal_io_boundary": 0, 00:23:55.012 "md_size": 0, 00:23:55.012 "dif_type": 0, 00:23:55.012 "dif_is_head_of_md": false, 00:23:55.012 "dif_pi_format": 0 00:23:55.012 } 00:23:55.012 }, 00:23:55.012 { 00:23:55.012 "method": "bdev_wait_for_examine" 00:23:55.012 } 00:23:55.012 ] 00:23:55.012 }, 00:23:55.012 { 00:23:55.012 "subsystem": "nbd", 00:23:55.012 "config": [] 00:23:55.012 }, 00:23:55.012 { 00:23:55.012 "subsystem": "scheduler", 00:23:55.012 "config": [ 00:23:55.012 { 00:23:55.012 "method": "framework_set_scheduler", 00:23:55.012 "params": { 00:23:55.012 "name": "static" 00:23:55.012 } 00:23:55.012 } 00:23:55.012 ] 00:23:55.012 }, 00:23:55.012 { 00:23:55.012 "subsystem": "nvmf", 00:23:55.012 "config": [ 00:23:55.012 { 00:23:55.012 "method": "nvmf_set_config", 00:23:55.012 "params": { 00:23:55.012 "discovery_filter": "match_any", 00:23:55.012 "admin_cmd_passthru": { 00:23:55.012 "identify_ctrlr": false 00:23:55.012 }, 00:23:55.012 "dhchap_digests": [ 00:23:55.012 "sha256", 00:23:55.012 "sha384", 00:23:55.012 "sha512" 00:23:55.012 ], 00:23:55.012 "dhchap_dhgroups": [ 00:23:55.012 "null", 00:23:55.012 "ffdhe2048", 00:23:55.012 "ffdhe3072", 00:23:55.012 "ffdhe4096", 00:23:55.012 "ffdhe6144", 00:23:55.012 "ffdhe8192" 00:23:55.012 ] 00:23:55.012 } 00:23:55.012 }, 00:23:55.012 { 
00:23:55.012 "method": "nvmf_set_max_subsystems", 00:23:55.012 "params": { 00:23:55.012 "max_subsystems": 1024 00:23:55.012 } 00:23:55.012 }, 00:23:55.012 { 00:23:55.012 "method": "nvmf_set_crdt", 00:23:55.012 "params": { 00:23:55.012 "crdt1": 0, 00:23:55.012 "crdt2": 0, 00:23:55.012 "crdt3": 0 00:23:55.012 } 00:23:55.012 }, 00:23:55.012 { 00:23:55.012 "method": "nvmf_create_transport", 00:23:55.012 "params": { 00:23:55.012 "trtype": "TCP", 00:23:55.012 "max_queue_depth": 128, 00:23:55.012 "max_io_qpairs_per_ctrlr": 127, 00:23:55.012 "in_capsule_data_size": 4096, 00:23:55.012 "max_io_size": 131072, 00:23:55.012 "io_unit_size": 131072, 00:23:55.012 "max_aq_depth": 128, 00:23:55.012 "num_shared_buffers": 511, 00:23:55.012 "buf_cache_size": 4294967295, 00:23:55.012 "dif_insert_or_strip": false, 00:23:55.012 "zcopy": false, 00:23:55.012 "c2h_success": false, 00:23:55.012 "sock_priority": 0, 00:23:55.012 "abort_timeout_sec": 1, 00:23:55.012 "ack_timeout": 0, 00:23:55.012 "data_wr_pool_size": 0 00:23:55.012 } 00:23:55.012 }, 00:23:55.012 { 00:23:55.012 "method": "nvmf_create_subsystem", 00:23:55.012 "params": { 00:23:55.012 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.012 "allow_any_host": false, 00:23:55.012 "serial_number": "SPDK00000000000001", 00:23:55.012 "model_number": "SPDK bdev Controller", 00:23:55.012 "max_namespaces": 10, 00:23:55.012 "min_cntlid": 1, 00:23:55.012 "max_cntlid": 65519, 00:23:55.012 "ana_reporting": false 00:23:55.012 } 00:23:55.012 }, 00:23:55.012 { 00:23:55.012 "method": "nvmf_subsystem_add_host", 00:23:55.012 "params": { 00:23:55.012 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.012 "host": "nqn.2016-06.io.spdk:host1", 00:23:55.012 "psk": "key0" 00:23:55.012 } 00:23:55.012 }, 00:23:55.012 { 00:23:55.012 "method": "nvmf_subsystem_add_ns", 00:23:55.012 "params": { 00:23:55.012 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.012 "namespace": { 00:23:55.012 "nsid": 1, 00:23:55.012 "bdev_name": "malloc0", 00:23:55.012 "nguid": 
"AEFDAF8D3C0B4253B4D1F4E45C8F2AD8", 00:23:55.012 "uuid": "aefdaf8d-3c0b-4253-b4d1-f4e45c8f2ad8", 00:23:55.012 "no_auto_visible": false 00:23:55.012 } 00:23:55.012 } 00:23:55.012 }, 00:23:55.012 { 00:23:55.012 "method": "nvmf_subsystem_add_listener", 00:23:55.012 "params": { 00:23:55.012 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.012 "listen_address": { 00:23:55.012 "trtype": "TCP", 00:23:55.012 "adrfam": "IPv4", 00:23:55.012 "traddr": "10.0.0.2", 00:23:55.012 "trsvcid": "4420" 00:23:55.012 }, 00:23:55.012 "secure_channel": true 00:23:55.012 } 00:23:55.012 } 00:23:55.012 ] 00:23:55.012 } 00:23:55.012 ] 00:23:55.012 }' 00:23:55.012 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:55.012 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.012 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=3193465 00:23:55.012 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:55.012 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3193465 00:23:55.012 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3193465 ']' 00:23:55.012 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.012 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:55.012 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:55.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:55.012 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:55.012 16:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.012 [2024-09-29 16:31:55.372395] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:23:55.012 [2024-09-29 16:31:55.372550] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:55.012 [2024-09-29 16:31:55.518582] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.300 [2024-09-29 16:31:55.777882] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:55.300 [2024-09-29 16:31:55.777973] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:55.301 [2024-09-29 16:31:55.778000] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:55.301 [2024-09-29 16:31:55.778025] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:55.301 [2024-09-29 16:31:55.778046] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:55.301 [2024-09-29 16:31:55.778191] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:55.887 [2024-09-29 16:31:56.338700] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:55.887 [2024-09-29 16:31:56.370682] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:55.887 [2024-09-29 16:31:56.371013] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:55.887 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:55.887 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:55.887 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:55.887 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:55.887 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.887 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:55.887 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3193622 00:23:55.887 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3193622 /var/tmp/bdevperf.sock 00:23:55.887 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3193622 ']' 00:23:55.887 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:55.887 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:55.887 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:23:55.887 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:23:55.887 "subsystems": [ 00:23:55.887 { 00:23:55.887 "subsystem": "keyring", 00:23:55.887 "config": [ 00:23:55.887 { 00:23:55.887 "method": "keyring_file_add_key", 00:23:55.887 "params": { 00:23:55.887 "name": "key0", 00:23:55.887 "path": "/tmp/tmp.lZx38bCGaF" 00:23:55.887 } 00:23:55.887 } 00:23:55.887 ] 00:23:55.887 }, 00:23:55.887 { 00:23:55.887 "subsystem": "iobuf", 00:23:55.887 "config": [ 00:23:55.887 { 00:23:55.887 "method": "iobuf_set_options", 00:23:55.887 "params": { 00:23:55.887 "small_pool_count": 8192, 00:23:55.887 "large_pool_count": 1024, 00:23:55.887 "small_bufsize": 8192, 00:23:55.887 "large_bufsize": 135168 00:23:55.887 } 00:23:55.887 } 00:23:55.887 ] 00:23:55.887 }, 00:23:55.887 { 00:23:55.887 "subsystem": "sock", 00:23:55.887 "config": [ 00:23:55.887 { 00:23:55.887 "method": "sock_set_default_impl", 00:23:55.887 "params": { 00:23:55.887 "impl_name": "posix" 00:23:55.887 } 00:23:55.887 }, 00:23:55.887 { 00:23:55.887 "method": "sock_impl_set_options", 00:23:55.887 "params": { 00:23:55.887 "impl_name": "ssl", 00:23:55.887 "recv_buf_size": 4096, 00:23:55.887 "send_buf_size": 4096, 00:23:55.887 "enable_recv_pipe": true, 00:23:55.887 "enable_quickack": false, 00:23:55.887 "enable_placement_id": 0, 00:23:55.887 "enable_zerocopy_send_server": true, 00:23:55.887 "enable_zerocopy_send_client": false, 00:23:55.887 "zerocopy_threshold": 0, 00:23:55.887 "tls_version": 0, 00:23:55.887 "enable_ktls": false 00:23:55.887 } 00:23:55.887 }, 00:23:55.887 { 00:23:55.887 "method": "sock_impl_set_options", 00:23:55.887 "params": { 00:23:55.887 "impl_name": "posix", 00:23:55.887 "recv_buf_size": 2097152, 00:23:55.887 "send_buf_size": 2097152, 00:23:55.887 "enable_recv_pipe": true, 00:23:55.887 "enable_quickack": false, 00:23:55.887 "enable_placement_id": 0, 00:23:55.887 "enable_zerocopy_send_server": true, 00:23:55.887 "enable_zerocopy_send_client": false, 
00:23:55.887 "zerocopy_threshold": 0, 00:23:55.887 "tls_version": 0, 00:23:55.887 "enable_ktls": false 00:23:55.887 } 00:23:55.887 } 00:23:55.887 ] 00:23:55.887 }, 00:23:55.887 { 00:23:55.887 "subsystem": "vmd", 00:23:55.887 "config": [] 00:23:55.887 }, 00:23:55.887 { 00:23:55.887 "subsystem": "accel", 00:23:55.887 "config": [ 00:23:55.887 { 00:23:55.887 "method": "accel_set_options", 00:23:55.887 "params": { 00:23:55.887 "small_cache_size": 128, 00:23:55.887 "large_cache_size": 16, 00:23:55.887 "task_count": 2048, 00:23:55.887 "sequence_count": 2048, 00:23:55.887 "buf_count": 2048 00:23:55.887 } 00:23:55.887 } 00:23:55.887 ] 00:23:55.887 }, 00:23:55.887 { 00:23:55.887 "subsystem": "bdev", 00:23:55.887 "config": [ 00:23:55.887 { 00:23:55.887 "method": "bdev_set_options", 00:23:55.887 "params": { 00:23:55.887 "bdev_io_pool_size": 65535, 00:23:55.887 "bdev_io_cache_size": 256, 00:23:55.887 "bdev_auto_examine": true, 00:23:55.887 "iobuf_small_cache_size": 128, 00:23:55.887 "iobuf_large_cache_size": 16 00:23:55.887 } 00:23:55.887 }, 00:23:55.887 { 00:23:55.887 "method": "bdev_raid_set_options", 00:23:55.887 "params": { 00:23:55.887 "process_window_size_kb": 1024, 00:23:55.887 "process_max_bandwidth_mb_sec": 0 00:23:55.887 } 00:23:55.887 }, 00:23:55.887 { 00:23:55.887 "method": "bdev_iscsi_set_options", 00:23:55.887 "params": { 00:23:55.887 "timeout_sec": 30 00:23:55.887 } 00:23:55.887 }, 00:23:55.887 { 00:23:55.887 "method": "bdev_nvme_set_options", 00:23:55.887 "params": { 00:23:55.887 "action_on_timeout": "none", 00:23:55.887 "timeout_us": 0, 00:23:55.887 "timeout_admin_us": 0, 00:23:55.887 "keep_alive_timeout_ms": 10000, 00:23:55.887 "arbitration_burst": 0, 00:23:55.887 "low_priority_weight": 0, 00:23:55.887 "medium_priority_weight": 0, 00:23:55.887 "high_priority_weight": 0, 00:23:55.887 "nvme_adminq_poll_period_us": 10000, 00:23:55.887 "nvme_ioq_poll_period_us": 0, 00:23:55.887 "io_queue_requests": 512, 00:23:55.887 "delay_cmd_submit": true, 00:23:55.887 
"transport_retry_count": 4, 00:23:55.887 "bdev_retry_count": 3, 00:23:55.887 "transport_ack_timeout": 0, 00:23:55.887 "ctrlr_loss_timeout_sec": 0, 00:23:55.887 "reconnect_delay_sec": 0, 00:23:55.887 "fast_io_fail_timeout_sec": 0, 00:23:55.887 "disable_auto_failback": false, 00:23:55.887 "generate_uuids": false, 00:23:55.887 "transport_tos": 0, 00:23:55.887 "nvme_error_stat": false, 00:23:55.887 "rdma_srq_size": 0, 00:23:55.887 "io_path_stat": false, 00:23:55.887 "allow_accel_sequence": false, 00:23:55.887 "rdma_max_cq_size": 0, 00:23:55.887 "rdma_cm_event_timeout_ms": 0, 00:23:55.887 "dhchap_digests": [ 00:23:55.887 "sha256", 00:23:55.887 "sha384", 00:23:55.887 "sha512" 00:23:55.887 ], 00:23:55.887 "dhchap_dhgroups": [ 00:23:55.887 "null", 00:23:55.887 "ffdhe2048", 00:23:55.887 "ffdhe3072", 00:23:55.887 "ffdhe4096", 00:23:55.887 "ffdhe6144", 00:23:55.887 "ffdhe8192" 00:23:55.887 ] 00:23:55.887 } 00:23:55.887 }, 00:23:55.887 { 00:23:55.887 "method": "bdev_nvme_attach_controller", 00:23:55.887 "params": { 00:23:55.887 "name": "TLSTEST", 00:23:55.888 "trtype": "TCP", 00:23:55.888 "adrfam": "IPv4", 00:23:55.888 "traddr": "10.0.0.2", 00:23:55.888 "trsvcid": "4420", 00:23:55.888 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.888 "prchk_reftag": false, 00:23:55.888 "prchk_guard": false, 00:23:55.888 "ctrlr_loss_timeout_sec": 0, 00:23:55.888 "reconnect_delay_sec": 0, 00:23:55.888 "fast_io_fail_timeout_sec": 0, 00:23:55.888 "psk": "key0", 00:23:55.888 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:55.888 "hdgst": false, 00:23:55.888 "ddgst": false 00:23:55.888 } 00:23:55.888 }, 00:23:55.888 { 00:23:55.888 "method": "bdev_nvme_set_hotplug", 00:23:55.888 "params": { 00:23:55.888 "period_us": 100000, 00:23:55.888 "enable": false 00:23:55.888 } 00:23:55.888 }, 00:23:55.888 { 00:23:55.888 "method": "bdev_wait_for_examine" 00:23:55.888 } 00:23:55.888 ] 00:23:55.888 }, 00:23:55.888 { 00:23:55.888 "subsystem": "nbd", 00:23:55.888 "config": [] 00:23:55.888 } 00:23:55.888 ] 
00:23:55.888 }' 00:23:55.888 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:55.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:55.888 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:55.888 16:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.145 [2024-09-29 16:31:56.502645] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:23:56.145 [2024-09-29 16:31:56.502817] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3193622 ] 00:23:56.145 [2024-09-29 16:31:56.637118] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.403 [2024-09-29 16:31:56.863773] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:56.967 [2024-09-29 16:31:57.262506] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:56.967 16:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:56.967 16:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:56.967 16:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:57.226 Running I/O for 10 seconds... 
00:24:07.443 2492.00 IOPS, 9.73 MiB/s 2579.50 IOPS, 10.08 MiB/s 2607.00 IOPS, 10.18 MiB/s 2630.50 IOPS, 10.28 MiB/s 2645.00 IOPS, 10.33 MiB/s 2650.17 IOPS, 10.35 MiB/s 2655.57 IOPS, 10.37 MiB/s 2659.62 IOPS, 10.39 MiB/s 2665.56 IOPS, 10.41 MiB/s 2670.80 IOPS, 10.43 MiB/s 00:24:07.443 Latency(us) 00:24:07.443 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:07.443 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:07.443 Verification LBA range: start 0x0 length 0x2000 00:24:07.443 TLSTESTn1 : 10.04 2672.47 10.44 0.00 0.00 47771.28 7767.23 51263.72 00:24:07.443 =================================================================================================================== 00:24:07.443 Total : 2672.47 10.44 0.00 0.00 47771.28 7767.23 51263.72 00:24:07.443 { 00:24:07.443 "results": [ 00:24:07.443 { 00:24:07.443 "job": "TLSTESTn1", 00:24:07.443 "core_mask": "0x4", 00:24:07.443 "workload": "verify", 00:24:07.443 "status": "finished", 00:24:07.443 "verify_range": { 00:24:07.443 "start": 0, 00:24:07.443 "length": 8192 00:24:07.443 }, 00:24:07.443 "queue_depth": 128, 00:24:07.443 "io_size": 4096, 00:24:07.443 "runtime": 10.040519, 00:24:07.443 "iops": 2672.471413081336, 00:24:07.443 "mibps": 10.439341457348968, 00:24:07.443 "io_failed": 0, 00:24:07.443 "io_timeout": 0, 00:24:07.443 "avg_latency_us": 47771.28360694612, 00:24:07.443 "min_latency_us": 7767.22962962963, 00:24:07.443 "max_latency_us": 51263.71555555556 00:24:07.443 } 00:24:07.443 ], 00:24:07.443 "core_count": 1 00:24:07.443 } 00:24:07.443 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:07.443 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3193622 00:24:07.443 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3193622 ']' 00:24:07.443 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # kill -0 3193622 00:24:07.443 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:07.443 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:07.443 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3193622 00:24:07.443 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:07.443 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:07.443 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3193622' 00:24:07.443 killing process with pid 3193622 00:24:07.443 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3193622 00:24:07.443 Received shutdown signal, test time was about 10.000000 seconds 00:24:07.443 00:24:07.443 Latency(us) 00:24:07.443 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:07.443 =================================================================================================================== 00:24:07.443 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:07.443 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3193622 00:24:08.378 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3193465 00:24:08.378 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3193465 ']' 00:24:08.378 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3193465 00:24:08.378 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:08.378 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:08.378 16:32:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3193465 00:24:08.378 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:08.378 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:08.378 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3193465' 00:24:08.378 killing process with pid 3193465 00:24:08.378 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3193465 00:24:08.378 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3193465 00:24:09.748 16:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:24:09.748 16:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:09.748 16:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:09.748 16:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:09.748 16:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=3195210 00:24:09.748 16:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:09.748 16:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3195210 00:24:09.748 16:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3195210 ']' 00:24:09.748 16:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:09.748 16:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:09.748 16:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:09.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:09.748 16:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:09.749 16:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:09.749 [2024-09-29 16:32:10.161260] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:24:09.749 [2024-09-29 16:32:10.161418] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:09.749 [2024-09-29 16:32:10.304858] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.006 [2024-09-29 16:32:10.556457] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:10.006 [2024-09-29 16:32:10.556551] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:10.006 [2024-09-29 16:32:10.556577] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:10.006 [2024-09-29 16:32:10.556601] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:10.006 [2024-09-29 16:32:10.556621] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:10.006 [2024-09-29 16:32:10.556686] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.941 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:10.941 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:10.941 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:10.941 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:10.941 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:10.941 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:10.941 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.lZx38bCGaF 00:24:10.941 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.lZx38bCGaF 00:24:10.941 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:10.941 [2024-09-29 16:32:11.421962] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:10.941 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:11.200 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:11.458 [2024-09-29 16:32:11.967600] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:11.458 [2024-09-29 16:32:11.967970] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:24:11.458 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:12.024 malloc0 00:24:12.024 16:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:12.024 16:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.lZx38bCGaF 00:24:12.282 16:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:12.849 16:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3195627 00:24:12.849 16:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:12.849 16:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:12.849 16:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3195627 /var/tmp/bdevperf.sock 00:24:12.849 16:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3195627 ']' 00:24:12.849 16:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:12.849 16:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:12.849 16:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:24:12.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:12.849 16:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:12.849 16:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:12.849 [2024-09-29 16:32:13.188407] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:24:12.849 [2024-09-29 16:32:13.188551] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3195627 ] 00:24:12.849 [2024-09-29 16:32:13.322414] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.107 [2024-09-29 16:32:13.572516] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:13.673 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:13.673 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:13.673 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lZx38bCGaF 00:24:13.931 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:14.189 [2024-09-29 16:32:14.655408] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:14.189 nvme0n1 00:24:14.447 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:14.447 Running I/O for 1 seconds... 00:24:15.380 2676.00 IOPS, 10.45 MiB/s 00:24:15.380 Latency(us) 00:24:15.380 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:15.380 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:15.380 Verification LBA range: start 0x0 length 0x2000 00:24:15.380 nvme0n1 : 1.04 2704.91 10.57 0.00 0.00 46637.61 8107.05 38447.79 00:24:15.380 =================================================================================================================== 00:24:15.380 Total : 2704.91 10.57 0.00 0.00 46637.61 8107.05 38447.79 00:24:15.380 { 00:24:15.380 "results": [ 00:24:15.380 { 00:24:15.380 "job": "nvme0n1", 00:24:15.380 "core_mask": "0x2", 00:24:15.380 "workload": "verify", 00:24:15.380 "status": "finished", 00:24:15.380 "verify_range": { 00:24:15.380 "start": 0, 00:24:15.380 "length": 8192 00:24:15.380 }, 00:24:15.380 "queue_depth": 128, 00:24:15.380 "io_size": 4096, 00:24:15.380 "runtime": 1.036632, 00:24:15.380 "iops": 2704.913604827943, 00:24:15.380 "mibps": 10.566068768859152, 00:24:15.380 "io_failed": 0, 00:24:15.380 "io_timeout": 0, 00:24:15.380 "avg_latency_us": 46637.60524964336, 00:24:15.380 "min_latency_us": 8107.045925925926, 00:24:15.380 "max_latency_us": 38447.78666666667 00:24:15.380 } 00:24:15.380 ], 00:24:15.380 "core_count": 1 00:24:15.380 } 00:24:15.380 16:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3195627 00:24:15.380 16:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3195627 ']' 00:24:15.380 16:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3195627 00:24:15.380 16:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:15.380 16:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:15.380 16:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3195627 00:24:15.638 16:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:15.638 16:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:15.638 16:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3195627' 00:24:15.638 killing process with pid 3195627 00:24:15.638 16:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3195627 00:24:15.638 Received shutdown signal, test time was about 1.000000 seconds 00:24:15.638 00:24:15.638 Latency(us) 00:24:15.638 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:15.638 =================================================================================================================== 00:24:15.638 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:15.638 16:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3195627 00:24:16.570 16:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3195210 00:24:16.570 16:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3195210 ']' 00:24:16.570 16:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3195210 00:24:16.570 16:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:16.570 16:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:16.570 16:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3195210 00:24:16.570 16:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:24:16.570 16:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:16.570 16:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3195210' 00:24:16.570 killing process with pid 3195210 00:24:16.570 16:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3195210 00:24:16.570 16:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3195210 00:24:17.941 16:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:24:17.941 16:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:17.941 16:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:17.941 16:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:17.941 16:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=3196293 00:24:17.941 16:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:17.941 16:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3196293 00:24:17.941 16:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3196293 ']' 00:24:17.941 16:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:17.941 16:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:17.941 16:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:17.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:17.941 16:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:17.941 16:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:18.199 [2024-09-29 16:32:18.572781] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:24:18.199 [2024-09-29 16:32:18.572932] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:18.199 [2024-09-29 16:32:18.706874] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.457 [2024-09-29 16:32:18.931196] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:18.457 [2024-09-29 16:32:18.931288] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:18.457 [2024-09-29 16:32:18.931311] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:18.457 [2024-09-29 16:32:18.931331] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:18.457 [2024-09-29 16:32:18.931347] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
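(Editorial sketch.) Throughout this run a TLS PSK file, `/tmp/tmp.lZx38bCGaF`, is registered with `keyring_file_add_key` and referenced as `key0` on both the target and initiator side. The file holds the PSK in the NVMe/TCP "PSK interchange" text format. The sketch below shows that format as I understand it from the NVMe/TCP transport specification (hash indicator `01` = SHA-256, `02` = SHA-384; CRC-32 appended little-endian before base64) — the byte-order and field details are assumptions, and the key bytes are random placeholders, not the key used in this log:

```python
import base64, binascii, os

def format_psk_interchange(key: bytes, hash_id: int = 1) -> str:
    """Encode a configured TLS PSK in the NVMe/TCP PSK interchange
    format: "NVMeTLSkey-1:<hh>:<base64(key || crc32)>:".
    hash_id 1 = SHA-256 (32-byte key), 2 = SHA-384 (48-byte key).
    CRC byte order (little-endian) is an assumption here."""
    crc = binascii.crc32(key).to_bytes(4, "little")
    b64 = base64.b64encode(key + crc).decode()
    return f"NVMeTLSkey-1:{hash_id:02x}:{b64}:"

def parse_psk_interchange(text: str) -> bytes:
    """Decode a PSK interchange string and verify its CRC trailer."""
    _, _hh, b64, _ = text.split(":")
    raw = base64.b64decode(b64)
    key, crc = raw[:-4], raw[-4:]
    if binascii.crc32(key).to_bytes(4, "little") != crc:
        raise ValueError("PSK CRC mismatch")
    return key

# Round-trip a placeholder 32-byte PSK (SHA-256 class).
key = os.urandom(32)
text = format_psk_interchange(key)
recovered = parse_psk_interchange(text)
```

A file in this shape is what `keyring_file_add_key key0 /tmp/tmp.lZx38bCGaF` loads; the encode/decode pair above is self-consistent regardless of the assumed CRC endianness.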
00:24:18.457 [2024-09-29 16:32:18.931390] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:19.021 16:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:19.021 16:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:19.021 16:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:19.021 16:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:19.021 16:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:19.021 16:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:19.021 16:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:24:19.021 16:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.021 16:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:19.279 [2024-09-29 16:32:19.585064] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:19.279 malloc0 00:24:19.279 [2024-09-29 16:32:19.654537] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:19.279 [2024-09-29 16:32:19.654940] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:19.279 16:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.279 16:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3196442 00:24:19.279 16:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:19.279 16:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@258 -- # waitforlisten 3196442 /var/tmp/bdevperf.sock 00:24:19.279 16:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3196442 ']' 00:24:19.279 16:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:19.279 16:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:19.279 16:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:19.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:19.279 16:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:19.279 16:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:19.279 [2024-09-29 16:32:19.764831] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:24:19.279 [2024-09-29 16:32:19.764980] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3196442 ] 00:24:19.537 [2024-09-29 16:32:19.899401] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.795 [2024-09-29 16:32:20.154820] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:20.361 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:20.361 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:20.361 16:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lZx38bCGaF 00:24:20.619 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:20.877 [2024-09-29 16:32:21.275202] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:20.877 nvme0n1 00:24:20.877 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:21.135 Running I/O for 1 seconds... 
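(Editorial sketch.) The `bdev_nvme_attach_controller ... --psk key0` invocation above is `scripts/rpc.py` sending a JSON-RPC 2.0 request over the `/var/tmp/bdevperf.sock` Unix socket. The sketch below builds that request body with the values from this run; the framing is standard JSON-RPC 2.0, but the exact parameter names are my reading of SPDK's RPC interface and should be checked against the SPDK JSON-RPC documentation rather than taken as authoritative:

```python
import json

# JSON-RPC 2.0 request corresponding to:
#   rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
#     -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
#     -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
# Parameter names are assumed from SPDK's RPC conventions.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "bdev_nvme_attach_controller",
    "params": {
        "name": "nvme0",                          # -b
        "trtype": "tcp",                          # -t
        "traddr": "10.0.0.2",                     # -a
        "trsvcid": "4420",                        # -s
        "adrfam": "ipv4",                         # -f
        "subnqn": "nqn.2016-06.io.spdk:cnode1",   # -n
        "hostnqn": "nqn.2016-06.io.spdk:host1",   # -q
        "psk": "key0",                            # --psk (keyring name)
    },
}
payload = json.dumps(request)
```

On success the target replies with the created bdev name (`nvme0n1` in the log that follows), after which bdevperf runs its verify workload against it.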
00:24:22.068 2590.00 IOPS, 10.12 MiB/s 00:24:22.068 Latency(us) 00:24:22.068 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:22.068 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:22.068 Verification LBA range: start 0x0 length 0x2000 00:24:22.068 nvme0n1 : 1.03 2637.47 10.30 0.00 0.00 47964.77 8301.23 39224.51 00:24:22.068 =================================================================================================================== 00:24:22.068 Total : 2637.47 10.30 0.00 0.00 47964.77 8301.23 39224.51 00:24:22.068 { 00:24:22.068 "results": [ 00:24:22.068 { 00:24:22.068 "job": "nvme0n1", 00:24:22.068 "core_mask": "0x2", 00:24:22.068 "workload": "verify", 00:24:22.068 "status": "finished", 00:24:22.068 "verify_range": { 00:24:22.068 "start": 0, 00:24:22.068 "length": 8192 00:24:22.068 }, 00:24:22.068 "queue_depth": 128, 00:24:22.068 "io_size": 4096, 00:24:22.068 "runtime": 1.030534, 00:24:22.069 "iops": 2637.4675653593185, 00:24:22.069 "mibps": 10.302607677184838, 00:24:22.069 "io_failed": 0, 00:24:22.069 "io_timeout": 0, 00:24:22.069 "avg_latency_us": 47964.76837761971, 00:24:22.069 "min_latency_us": 8301.226666666667, 00:24:22.069 "max_latency_us": 39224.50962962963 00:24:22.069 } 00:24:22.069 ], 00:24:22.069 "core_count": 1 00:24:22.069 } 00:24:22.069 16:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:24:22.069 16:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.069 16:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:22.327 16:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.327 16:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:24:22.327 "subsystems": [ 00:24:22.327 { 00:24:22.327 "subsystem": "keyring", 00:24:22.327 "config": [ 00:24:22.327 { 00:24:22.327 "method": 
"keyring_file_add_key", 00:24:22.327 "params": { 00:24:22.327 "name": "key0", 00:24:22.327 "path": "/tmp/tmp.lZx38bCGaF" 00:24:22.327 } 00:24:22.327 } 00:24:22.327 ] 00:24:22.327 }, 00:24:22.327 { 00:24:22.327 "subsystem": "iobuf", 00:24:22.327 "config": [ 00:24:22.327 { 00:24:22.327 "method": "iobuf_set_options", 00:24:22.327 "params": { 00:24:22.327 "small_pool_count": 8192, 00:24:22.327 "large_pool_count": 1024, 00:24:22.327 "small_bufsize": 8192, 00:24:22.327 "large_bufsize": 135168 00:24:22.327 } 00:24:22.327 } 00:24:22.327 ] 00:24:22.327 }, 00:24:22.327 { 00:24:22.327 "subsystem": "sock", 00:24:22.327 "config": [ 00:24:22.327 { 00:24:22.327 "method": "sock_set_default_impl", 00:24:22.327 "params": { 00:24:22.327 "impl_name": "posix" 00:24:22.327 } 00:24:22.327 }, 00:24:22.327 { 00:24:22.327 "method": "sock_impl_set_options", 00:24:22.327 "params": { 00:24:22.327 "impl_name": "ssl", 00:24:22.327 "recv_buf_size": 4096, 00:24:22.327 "send_buf_size": 4096, 00:24:22.327 "enable_recv_pipe": true, 00:24:22.327 "enable_quickack": false, 00:24:22.327 "enable_placement_id": 0, 00:24:22.327 "enable_zerocopy_send_server": true, 00:24:22.327 "enable_zerocopy_send_client": false, 00:24:22.327 "zerocopy_threshold": 0, 00:24:22.327 "tls_version": 0, 00:24:22.327 "enable_ktls": false 00:24:22.327 } 00:24:22.327 }, 00:24:22.327 { 00:24:22.327 "method": "sock_impl_set_options", 00:24:22.327 "params": { 00:24:22.327 "impl_name": "posix", 00:24:22.327 "recv_buf_size": 2097152, 00:24:22.327 "send_buf_size": 2097152, 00:24:22.327 "enable_recv_pipe": true, 00:24:22.327 "enable_quickack": false, 00:24:22.327 "enable_placement_id": 0, 00:24:22.327 "enable_zerocopy_send_server": true, 00:24:22.327 "enable_zerocopy_send_client": false, 00:24:22.327 "zerocopy_threshold": 0, 00:24:22.327 "tls_version": 0, 00:24:22.327 "enable_ktls": false 00:24:22.327 } 00:24:22.327 } 00:24:22.327 ] 00:24:22.327 }, 00:24:22.327 { 00:24:22.327 "subsystem": "vmd", 00:24:22.327 "config": [] 00:24:22.327 }, 
00:24:22.327 { 00:24:22.327 "subsystem": "accel", 00:24:22.327 "config": [ 00:24:22.327 { 00:24:22.327 "method": "accel_set_options", 00:24:22.327 "params": { 00:24:22.327 "small_cache_size": 128, 00:24:22.327 "large_cache_size": 16, 00:24:22.327 "task_count": 2048, 00:24:22.327 "sequence_count": 2048, 00:24:22.327 "buf_count": 2048 00:24:22.327 } 00:24:22.327 } 00:24:22.327 ] 00:24:22.327 }, 00:24:22.327 { 00:24:22.327 "subsystem": "bdev", 00:24:22.327 "config": [ 00:24:22.327 { 00:24:22.327 "method": "bdev_set_options", 00:24:22.327 "params": { 00:24:22.327 "bdev_io_pool_size": 65535, 00:24:22.327 "bdev_io_cache_size": 256, 00:24:22.327 "bdev_auto_examine": true, 00:24:22.327 "iobuf_small_cache_size": 128, 00:24:22.327 "iobuf_large_cache_size": 16 00:24:22.327 } 00:24:22.327 }, 00:24:22.327 { 00:24:22.327 "method": "bdev_raid_set_options", 00:24:22.327 "params": { 00:24:22.327 "process_window_size_kb": 1024, 00:24:22.327 "process_max_bandwidth_mb_sec": 0 00:24:22.327 } 00:24:22.327 }, 00:24:22.327 { 00:24:22.327 "method": "bdev_iscsi_set_options", 00:24:22.327 "params": { 00:24:22.327 "timeout_sec": 30 00:24:22.327 } 00:24:22.327 }, 00:24:22.327 { 00:24:22.327 "method": "bdev_nvme_set_options", 00:24:22.327 "params": { 00:24:22.327 "action_on_timeout": "none", 00:24:22.327 "timeout_us": 0, 00:24:22.327 "timeout_admin_us": 0, 00:24:22.327 "keep_alive_timeout_ms": 10000, 00:24:22.327 "arbitration_burst": 0, 00:24:22.327 "low_priority_weight": 0, 00:24:22.327 "medium_priority_weight": 0, 00:24:22.327 "high_priority_weight": 0, 00:24:22.327 "nvme_adminq_poll_period_us": 10000, 00:24:22.327 "nvme_ioq_poll_period_us": 0, 00:24:22.327 "io_queue_requests": 0, 00:24:22.327 "delay_cmd_submit": true, 00:24:22.327 "transport_retry_count": 4, 00:24:22.327 "bdev_retry_count": 3, 00:24:22.327 "transport_ack_timeout": 0, 00:24:22.327 "ctrlr_loss_timeout_sec": 0, 00:24:22.327 "reconnect_delay_sec": 0, 00:24:22.327 "fast_io_fail_timeout_sec": 0, 00:24:22.327 
"disable_auto_failback": false, 00:24:22.327 "generate_uuids": false, 00:24:22.327 "transport_tos": 0, 00:24:22.327 "nvme_error_stat": false, 00:24:22.327 "rdma_srq_size": 0, 00:24:22.327 "io_path_stat": false, 00:24:22.327 "allow_accel_sequence": false, 00:24:22.327 "rdma_max_cq_size": 0, 00:24:22.327 "rdma_cm_event_timeout_ms": 0, 00:24:22.327 "dhchap_digests": [ 00:24:22.327 "sha256", 00:24:22.327 "sha384", 00:24:22.327 "sha512" 00:24:22.327 ], 00:24:22.327 "dhchap_dhgroups": [ 00:24:22.327 "null", 00:24:22.327 "ffdhe2048", 00:24:22.327 "ffdhe3072", 00:24:22.327 "ffdhe4096", 00:24:22.327 "ffdhe6144", 00:24:22.327 "ffdhe8192" 00:24:22.327 ] 00:24:22.327 } 00:24:22.327 }, 00:24:22.327 { 00:24:22.327 "method": "bdev_nvme_set_hotplug", 00:24:22.328 "params": { 00:24:22.328 "period_us": 100000, 00:24:22.328 "enable": false 00:24:22.328 } 00:24:22.328 }, 00:24:22.328 { 00:24:22.328 "method": "bdev_malloc_create", 00:24:22.328 "params": { 00:24:22.328 "name": "malloc0", 00:24:22.328 "num_blocks": 8192, 00:24:22.328 "block_size": 4096, 00:24:22.328 "physical_block_size": 4096, 00:24:22.328 "uuid": "dec114fe-841e-40ea-a887-ed86508958e9", 00:24:22.328 "optimal_io_boundary": 0, 00:24:22.328 "md_size": 0, 00:24:22.328 "dif_type": 0, 00:24:22.328 "dif_is_head_of_md": false, 00:24:22.328 "dif_pi_format": 0 00:24:22.328 } 00:24:22.328 }, 00:24:22.328 { 00:24:22.328 "method": "bdev_wait_for_examine" 00:24:22.328 } 00:24:22.328 ] 00:24:22.328 }, 00:24:22.328 { 00:24:22.328 "subsystem": "nbd", 00:24:22.328 "config": [] 00:24:22.328 }, 00:24:22.328 { 00:24:22.328 "subsystem": "scheduler", 00:24:22.328 "config": [ 00:24:22.328 { 00:24:22.328 "method": "framework_set_scheduler", 00:24:22.328 "params": { 00:24:22.328 "name": "static" 00:24:22.328 } 00:24:22.328 } 00:24:22.328 ] 00:24:22.328 }, 00:24:22.328 { 00:24:22.328 "subsystem": "nvmf", 00:24:22.328 "config": [ 00:24:22.328 { 00:24:22.328 "method": "nvmf_set_config", 00:24:22.328 "params": { 00:24:22.328 "discovery_filter": 
"match_any", 00:24:22.328 "admin_cmd_passthru": { 00:24:22.328 "identify_ctrlr": false 00:24:22.328 }, 00:24:22.328 "dhchap_digests": [ 00:24:22.328 "sha256", 00:24:22.328 "sha384", 00:24:22.328 "sha512" 00:24:22.328 ], 00:24:22.328 "dhchap_dhgroups": [ 00:24:22.328 "null", 00:24:22.328 "ffdhe2048", 00:24:22.328 "ffdhe3072", 00:24:22.328 "ffdhe4096", 00:24:22.328 "ffdhe6144", 00:24:22.328 "ffdhe8192" 00:24:22.328 ] 00:24:22.328 } 00:24:22.328 }, 00:24:22.328 { 00:24:22.328 "method": "nvmf_set_max_subsystems", 00:24:22.328 "params": { 00:24:22.328 "max_subsystems": 1024 00:24:22.328 } 00:24:22.328 }, 00:24:22.328 { 00:24:22.328 "method": "nvmf_set_crdt", 00:24:22.328 "params": { 00:24:22.328 "crdt1": 0, 00:24:22.328 "crdt2": 0, 00:24:22.328 "crdt3": 0 00:24:22.328 } 00:24:22.328 }, 00:24:22.328 { 00:24:22.328 "method": "nvmf_create_transport", 00:24:22.328 "params": { 00:24:22.328 "trtype": "TCP", 00:24:22.328 "max_queue_depth": 128, 00:24:22.328 "max_io_qpairs_per_ctrlr": 127, 00:24:22.328 "in_capsule_data_size": 4096, 00:24:22.328 "max_io_size": 131072, 00:24:22.328 "io_unit_size": 131072, 00:24:22.328 "max_aq_depth": 128, 00:24:22.328 "num_shared_buffers": 511, 00:24:22.328 "buf_cache_size": 4294967295, 00:24:22.328 "dif_insert_or_strip": false, 00:24:22.328 "zcopy": false, 00:24:22.328 "c2h_success": false, 00:24:22.328 "sock_priority": 0, 00:24:22.328 "abort_timeout_sec": 1, 00:24:22.328 "ack_timeout": 0, 00:24:22.328 "data_wr_pool_size": 0 00:24:22.328 } 00:24:22.328 }, 00:24:22.328 { 00:24:22.328 "method": "nvmf_create_subsystem", 00:24:22.328 "params": { 00:24:22.328 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:22.328 "allow_any_host": false, 00:24:22.328 "serial_number": "00000000000000000000", 00:24:22.328 "model_number": "SPDK bdev Controller", 00:24:22.328 "max_namespaces": 32, 00:24:22.328 "min_cntlid": 1, 00:24:22.328 "max_cntlid": 65519, 00:24:22.328 "ana_reporting": false 00:24:22.328 } 00:24:22.328 }, 00:24:22.328 { 00:24:22.328 "method": 
"nvmf_subsystem_add_host", 00:24:22.328 "params": { 00:24:22.328 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:22.328 "host": "nqn.2016-06.io.spdk:host1", 00:24:22.328 "psk": "key0" 00:24:22.328 } 00:24:22.328 }, 00:24:22.328 { 00:24:22.328 "method": "nvmf_subsystem_add_ns", 00:24:22.328 "params": { 00:24:22.328 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:22.328 "namespace": { 00:24:22.328 "nsid": 1, 00:24:22.328 "bdev_name": "malloc0", 00:24:22.328 "nguid": "DEC114FE841E40EAA887ED86508958E9", 00:24:22.328 "uuid": "dec114fe-841e-40ea-a887-ed86508958e9", 00:24:22.328 "no_auto_visible": false 00:24:22.328 } 00:24:22.328 } 00:24:22.328 }, 00:24:22.328 { 00:24:22.328 "method": "nvmf_subsystem_add_listener", 00:24:22.328 "params": { 00:24:22.328 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:22.328 "listen_address": { 00:24:22.328 "trtype": "TCP", 00:24:22.328 "adrfam": "IPv4", 00:24:22.328 "traddr": "10.0.0.2", 00:24:22.328 "trsvcid": "4420" 00:24:22.328 }, 00:24:22.328 "secure_channel": false, 00:24:22.328 "sock_impl": "ssl" 00:24:22.328 } 00:24:22.328 } 00:24:22.328 ] 00:24:22.328 } 00:24:22.328 ] 00:24:22.328 }' 00:24:22.328 16:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:22.587 16:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:24:22.587 "subsystems": [ 00:24:22.587 { 00:24:22.587 "subsystem": "keyring", 00:24:22.587 "config": [ 00:24:22.587 { 00:24:22.587 "method": "keyring_file_add_key", 00:24:22.587 "params": { 00:24:22.587 "name": "key0", 00:24:22.587 "path": "/tmp/tmp.lZx38bCGaF" 00:24:22.587 } 00:24:22.587 } 00:24:22.587 ] 00:24:22.587 }, 00:24:22.587 { 00:24:22.587 "subsystem": "iobuf", 00:24:22.587 "config": [ 00:24:22.587 { 00:24:22.587 "method": "iobuf_set_options", 00:24:22.587 "params": { 00:24:22.587 "small_pool_count": 8192, 00:24:22.587 "large_pool_count": 1024, 00:24:22.587 "small_bufsize": 
8192, 00:24:22.587 "large_bufsize": 135168 00:24:22.587 } 00:24:22.587 } 00:24:22.587 ] 00:24:22.587 }, 00:24:22.587 { 00:24:22.587 "subsystem": "sock", 00:24:22.587 "config": [ 00:24:22.587 { 00:24:22.587 "method": "sock_set_default_impl", 00:24:22.587 "params": { 00:24:22.587 "impl_name": "posix" 00:24:22.587 } 00:24:22.587 }, 00:24:22.587 { 00:24:22.587 "method": "sock_impl_set_options", 00:24:22.587 "params": { 00:24:22.587 "impl_name": "ssl", 00:24:22.587 "recv_buf_size": 4096, 00:24:22.587 "send_buf_size": 4096, 00:24:22.587 "enable_recv_pipe": true, 00:24:22.587 "enable_quickack": false, 00:24:22.587 "enable_placement_id": 0, 00:24:22.587 "enable_zerocopy_send_server": true, 00:24:22.587 "enable_zerocopy_send_client": false, 00:24:22.587 "zerocopy_threshold": 0, 00:24:22.587 "tls_version": 0, 00:24:22.587 "enable_ktls": false 00:24:22.587 } 00:24:22.587 }, 00:24:22.587 { 00:24:22.587 "method": "sock_impl_set_options", 00:24:22.587 "params": { 00:24:22.587 "impl_name": "posix", 00:24:22.587 "recv_buf_size": 2097152, 00:24:22.587 "send_buf_size": 2097152, 00:24:22.587 "enable_recv_pipe": true, 00:24:22.587 "enable_quickack": false, 00:24:22.587 "enable_placement_id": 0, 00:24:22.587 "enable_zerocopy_send_server": true, 00:24:22.587 "enable_zerocopy_send_client": false, 00:24:22.587 "zerocopy_threshold": 0, 00:24:22.587 "tls_version": 0, 00:24:22.587 "enable_ktls": false 00:24:22.587 } 00:24:22.587 } 00:24:22.587 ] 00:24:22.587 }, 00:24:22.587 { 00:24:22.587 "subsystem": "vmd", 00:24:22.587 "config": [] 00:24:22.587 }, 00:24:22.587 { 00:24:22.587 "subsystem": "accel", 00:24:22.587 "config": [ 00:24:22.587 { 00:24:22.587 "method": "accel_set_options", 00:24:22.587 "params": { 00:24:22.587 "small_cache_size": 128, 00:24:22.587 "large_cache_size": 16, 00:24:22.587 "task_count": 2048, 00:24:22.587 "sequence_count": 2048, 00:24:22.587 "buf_count": 2048 00:24:22.587 } 00:24:22.587 } 00:24:22.587 ] 00:24:22.587 }, 00:24:22.587 { 00:24:22.587 "subsystem": "bdev", 
00:24:22.587 "config": [ 00:24:22.587 { 00:24:22.587 "method": "bdev_set_options", 00:24:22.587 "params": { 00:24:22.587 "bdev_io_pool_size": 65535, 00:24:22.587 "bdev_io_cache_size": 256, 00:24:22.587 "bdev_auto_examine": true, 00:24:22.587 "iobuf_small_cache_size": 128, 00:24:22.588 "iobuf_large_cache_size": 16 00:24:22.588 } 00:24:22.588 }, 00:24:22.588 { 00:24:22.588 "method": "bdev_raid_set_options", 00:24:22.588 "params": { 00:24:22.588 "process_window_size_kb": 1024, 00:24:22.588 "process_max_bandwidth_mb_sec": 0 00:24:22.588 } 00:24:22.588 }, 00:24:22.588 { 00:24:22.588 "method": "bdev_iscsi_set_options", 00:24:22.588 "params": { 00:24:22.588 "timeout_sec": 30 00:24:22.588 } 00:24:22.588 }, 00:24:22.588 { 00:24:22.588 "method": "bdev_nvme_set_options", 00:24:22.588 "params": { 00:24:22.588 "action_on_timeout": "none", 00:24:22.588 "timeout_us": 0, 00:24:22.588 "timeout_admin_us": 0, 00:24:22.588 "keep_alive_timeout_ms": 10000, 00:24:22.588 "arbitration_burst": 0, 00:24:22.588 "low_priority_weight": 0, 00:24:22.588 "medium_priority_weight": 0, 00:24:22.588 "high_priority_weight": 0, 00:24:22.588 "nvme_adminq_poll_period_us": 10000, 00:24:22.588 "nvme_ioq_poll_period_us": 0, 00:24:22.588 "io_queue_requests": 512, 00:24:22.588 "delay_cmd_submit": true, 00:24:22.588 "transport_retry_count": 4, 00:24:22.588 "bdev_retry_count": 3, 00:24:22.588 "transport_ack_timeout": 0, 00:24:22.588 "ctrlr_loss_timeout_sec": 0, 00:24:22.588 "reconnect_delay_sec": 0, 00:24:22.588 "fast_io_fail_timeout_sec": 0, 00:24:22.588 "disable_auto_failback": false, 00:24:22.588 "generate_uuids": false, 00:24:22.588 "transport_tos": 0, 00:24:22.588 "nvme_error_stat": false, 00:24:22.588 "rdma_srq_size": 0, 00:24:22.588 "io_path_stat": false, 00:24:22.588 "allow_accel_sequence": false, 00:24:22.588 "rdma_max_cq_size": 0, 00:24:22.588 "rdma_cm_event_timeout_ms": 0, 00:24:22.588 "dhchap_digests": [ 00:24:22.588 "sha256", 00:24:22.588 "sha384", 00:24:22.588 "sha512" 00:24:22.588 ], 00:24:22.588 
"dhchap_dhgroups": [ 00:24:22.588 "null", 00:24:22.588 "ffdhe2048", 00:24:22.588 "ffdhe3072", 00:24:22.588 "ffdhe4096", 00:24:22.588 "ffdhe6144", 00:24:22.588 "ffdhe8192" 00:24:22.588 ] 00:24:22.588 } 00:24:22.588 }, 00:24:22.588 { 00:24:22.588 "method": "bdev_nvme_attach_controller", 00:24:22.588 "params": { 00:24:22.588 "name": "nvme0", 00:24:22.588 "trtype": "TCP", 00:24:22.588 "adrfam": "IPv4", 00:24:22.588 "traddr": "10.0.0.2", 00:24:22.588 "trsvcid": "4420", 00:24:22.588 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:22.588 "prchk_reftag": false, 00:24:22.588 "prchk_guard": false, 00:24:22.588 "ctrlr_loss_timeout_sec": 0, 00:24:22.588 "reconnect_delay_sec": 0, 00:24:22.588 "fast_io_fail_timeout_sec": 0, 00:24:22.588 "psk": "key0", 00:24:22.588 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:22.588 "hdgst": false, 00:24:22.588 "ddgst": false 00:24:22.588 } 00:24:22.588 }, 00:24:22.588 { 00:24:22.588 "method": "bdev_nvme_set_hotplug", 00:24:22.588 "params": { 00:24:22.588 "period_us": 100000, 00:24:22.588 "enable": false 00:24:22.588 } 00:24:22.588 }, 00:24:22.588 { 00:24:22.588 "method": "bdev_enable_histogram", 00:24:22.588 "params": { 00:24:22.588 "name": "nvme0n1", 00:24:22.588 "enable": true 00:24:22.588 } 00:24:22.588 }, 00:24:22.588 { 00:24:22.588 "method": "bdev_wait_for_examine" 00:24:22.588 } 00:24:22.588 ] 00:24:22.588 }, 00:24:22.588 { 00:24:22.588 "subsystem": "nbd", 00:24:22.588 "config": [] 00:24:22.588 } 00:24:22.588 ] 00:24:22.588 }' 00:24:22.588 16:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3196442 00:24:22.588 16:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3196442 ']' 00:24:22.588 16:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3196442 00:24:22.588 16:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:22.588 16:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 
-- # '[' Linux = Linux ']' 00:24:22.588 16:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3196442 00:24:22.588 16:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:22.588 16:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:22.588 16:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3196442' 00:24:22.588 killing process with pid 3196442 00:24:22.588 16:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3196442 00:24:22.588 Received shutdown signal, test time was about 1.000000 seconds 00:24:22.588 00:24:22.588 Latency(us) 00:24:22.588 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:22.588 =================================================================================================================== 00:24:22.588 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:22.588 16:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3196442 00:24:23.524 16:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3196293 00:24:23.524 16:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3196293 ']' 00:24:23.524 16:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3196293 00:24:23.524 16:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:23.524 16:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:23.524 16:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3196293 00:24:23.783 16:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:23.783 16:32:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:23.783 16:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3196293' 00:24:23.783 killing process with pid 3196293 00:24:23.783 16:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3196293 00:24:23.783 16:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3196293 00:24:25.155 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:24:25.155 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:25.155 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:24:25.155 "subsystems": [ 00:24:25.155 { 00:24:25.155 "subsystem": "keyring", 00:24:25.155 "config": [ 00:24:25.155 { 00:24:25.155 "method": "keyring_file_add_key", 00:24:25.155 "params": { 00:24:25.155 "name": "key0", 00:24:25.155 "path": "/tmp/tmp.lZx38bCGaF" 00:24:25.155 } 00:24:25.155 } 00:24:25.155 ] 00:24:25.155 }, 00:24:25.155 { 00:24:25.155 "subsystem": "iobuf", 00:24:25.155 "config": [ 00:24:25.155 { 00:24:25.155 "method": "iobuf_set_options", 00:24:25.155 "params": { 00:24:25.155 "small_pool_count": 8192, 00:24:25.155 "large_pool_count": 1024, 00:24:25.155 "small_bufsize": 8192, 00:24:25.155 "large_bufsize": 135168 00:24:25.155 } 00:24:25.155 } 00:24:25.155 ] 00:24:25.155 }, 00:24:25.155 { 00:24:25.155 "subsystem": "sock", 00:24:25.155 "config": [ 00:24:25.155 { 00:24:25.155 "method": "sock_set_default_impl", 00:24:25.155 "params": { 00:24:25.155 "impl_name": "posix" 00:24:25.155 } 00:24:25.155 }, 00:24:25.155 { 00:24:25.155 "method": "sock_impl_set_options", 00:24:25.155 "params": { 00:24:25.155 "impl_name": "ssl", 00:24:25.155 "recv_buf_size": 4096, 00:24:25.155 "send_buf_size": 4096, 00:24:25.155 "enable_recv_pipe": true, 00:24:25.155 
"enable_quickack": false, 00:24:25.155 "enable_placement_id": 0, 00:24:25.155 "enable_zerocopy_send_server": true, 00:24:25.155 "enable_zerocopy_send_client": false, 00:24:25.155 "zerocopy_threshold": 0, 00:24:25.155 "tls_version": 0, 00:24:25.155 "enable_ktls": false 00:24:25.155 } 00:24:25.155 }, 00:24:25.155 { 00:24:25.155 "method": "sock_impl_set_options", 00:24:25.155 "params": { 00:24:25.155 "impl_name": "posix", 00:24:25.155 "recv_buf_size": 2097152, 00:24:25.155 "send_buf_size": 2097152, 00:24:25.155 "enable_recv_pipe": true, 00:24:25.155 "enable_quickack": false, 00:24:25.155 "enable_placement_id": 0, 00:24:25.155 "enable_zerocopy_send_server": true, 00:24:25.155 "enable_zerocopy_send_client": false, 00:24:25.155 "zerocopy_threshold": 0, 00:24:25.155 "tls_version": 0, 00:24:25.155 "enable_ktls": false 00:24:25.155 } 00:24:25.155 } 00:24:25.155 ] 00:24:25.155 }, 00:24:25.155 { 00:24:25.155 "subsystem": "vmd", 00:24:25.155 "config": [] 00:24:25.155 }, 00:24:25.155 { 00:24:25.155 "subsystem": "accel", 00:24:25.155 "config": [ 00:24:25.155 { 00:24:25.155 "method": "accel_set_options", 00:24:25.155 "params": { 00:24:25.155 "small_cache_size": 128, 00:24:25.155 "large_cache_size": 16, 00:24:25.155 "task_count": 2048, 00:24:25.155 "sequence_count": 2048, 00:24:25.155 "buf_count": 2048 00:24:25.155 } 00:24:25.155 } 00:24:25.155 ] 00:24:25.155 }, 00:24:25.155 { 00:24:25.155 "subsystem": "bdev", 00:24:25.155 "config": [ 00:24:25.155 { 00:24:25.155 "method": "bdev_set_options", 00:24:25.155 "params": { 00:24:25.155 "bdev_io_pool_size": 65535, 00:24:25.155 "bdev_io_cache_size": 256, 00:24:25.155 "bdev_auto_examine": true, 00:24:25.155 "iobuf_small_cache_size": 128, 00:24:25.155 "iobuf_large_cache_size": 16 00:24:25.155 } 00:24:25.155 }, 00:24:25.155 { 00:24:25.155 "method": "bdev_raid_set_options", 00:24:25.155 "params": { 00:24:25.155 "process_window_size_kb": 1024, 00:24:25.155 "process_max_bandwidth_mb_sec": 0 00:24:25.155 } 00:24:25.155 }, 00:24:25.155 { 
00:24:25.155 "method": "bdev_iscsi_set_options", 00:24:25.155 "params": { 00:24:25.156 "timeout_sec": 30 00:24:25.156 } 00:24:25.156 }, 00:24:25.156 { 00:24:25.156 "method": "bdev_nvme_set_options", 00:24:25.156 "params": { 00:24:25.156 "action_on_timeout": "none", 00:24:25.156 "timeout_us": 0, 00:24:25.156 "timeout_admin_us": 0, 00:24:25.156 "keep_alive_timeout_ms": 10000, 00:24:25.156 "arbitration_burst": 0, 00:24:25.156 "low_priority_weight": 0, 00:24:25.156 "medium_priority_weight": 0, 00:24:25.156 "high_priority_weight": 0, 00:24:25.156 "nvme_adminq_poll_period_us": 10000, 00:24:25.156 "nvme_ioq_poll_period_us": 0, 00:24:25.156 "io_queue_requests": 0, 00:24:25.156 "delay_cmd_submit": true, 00:24:25.156 "transport_retry_count": 4, 00:24:25.156 "bdev_retry_count": 3, 00:24:25.156 "transport_ack_timeout": 0, 00:24:25.156 "ctrlr_loss_timeout_sec": 0, 00:24:25.156 "reconnect_delay_sec": 0, 00:24:25.156 "fast_io_fail_timeout_sec": 0, 00:24:25.156 "disable_auto_failback": false, 00:24:25.156 "generate_uuids": false, 00:24:25.156 "transport_tos": 0, 00:24:25.156 "nvme_error_stat": false, 00:24:25.156 "rdma_srq_size": 0, 00:24:25.156 "io_path_stat": false, 00:24:25.156 "allow_accel_sequence": false, 00:24:25.156 "rdma_max_cq_size": 0, 00:24:25.156 "rdma_cm_event_timeout_ms": 0, 00:24:25.156 "dhchap_digests": [ 00:24:25.156 "sha256", 00:24:25.156 "sha384", 00:24:25.156 "sha512" 00:24:25.156 ], 00:24:25.156 "dhchap_dhgroups": [ 00:24:25.156 "null", 00:24:25.156 "ffdhe2048", 00:24:25.156 "ffdhe3072", 00:24:25.156 "ffdhe4096", 00:24:25.156 "ffdhe6144", 00:24:25.156 "ffdhe8192" 00:24:25.156 ] 00:24:25.156 } 00:24:25.156 }, 00:24:25.156 { 00:24:25.156 "method": "bdev_nvme_set_hotplug", 00:24:25.156 "params": { 00:24:25.156 "period_us": 100000, 00:24:25.156 "enable": false 00:24:25.156 } 00:24:25.156 }, 00:24:25.156 { 00:24:25.156 "method": "bdev_malloc_create", 00:24:25.156 "params": { 00:24:25.156 "name": "malloc0", 00:24:25.156 "num_blocks": 8192, 00:24:25.156 
"block_size": 4096, 00:24:25.156 "physical_block_size": 4096, 00:24:25.156 "uuid": "dec114fe-841e-40ea-a887-ed86508958e9", 00:24:25.156 "optimal_io_boundary": 0, 00:24:25.156 "md_size": 0, 00:24:25.156 "dif_type": 0, 00:24:25.156 "dif_is_head_of_md": false, 00:24:25.156 "dif_pi_format": 0 00:24:25.156 } 00:24:25.156 }, 00:24:25.156 { 00:24:25.156 "method": "bdev_wait_for_examine" 00:24:25.156 } 00:24:25.156 ] 00:24:25.156 }, 00:24:25.156 { 00:24:25.156 "subsystem": "nbd", 00:24:25.156 "config": [] 00:24:25.156 }, 00:24:25.156 { 00:24:25.156 "subsystem": "scheduler", 00:24:25.156 "config": [ 00:24:25.156 { 00:24:25.156 "method": "framework_set_scheduler", 00:24:25.156 "params": { 00:24:25.156 "name": "static" 00:24:25.156 } 00:24:25.156 } 00:24:25.156 ] 00:24:25.156 }, 00:24:25.156 { 00:24:25.156 "subsystem": "nvmf", 00:24:25.156 "config": [ 00:24:25.156 { 00:24:25.156 "method": "nvmf_set_config", 00:24:25.156 "params": { 00:24:25.156 "discovery_filter": "match_any", 00:24:25.156 "admin_cmd_passthru": { 00:24:25.156 "identify_ctrlr": false 00:24:25.156 }, 00:24:25.156 "dhchap_digests": [ 00:24:25.156 "sha256", 00:24:25.156 "sha384", 00:24:25.156 "sha512" 00:24:25.156 ], 00:24:25.156 "dhchap_dhgroups": [ 00:24:25.156 "null", 00:24:25.156 "ffdhe2048", 00:24:25.156 "ffdhe3072", 00:24:25.156 "ffdhe4096", 00:24:25.156 "ffdhe6144", 00:24:25.156 "ffdhe8192" 00:24:25.156 ] 00:24:25.156 } 00:24:25.156 }, 00:24:25.156 { 00:24:25.156 "method": "nvmf_set_max_subsystems", 00:24:25.156 "params": { 00:24:25.156 "max_subsystems": 1024 00:24:25.156 } 00:24:25.156 }, 00:24:25.156 { 00:24:25.156 "method": "nvmf_set_crdt", 00:24:25.156 "params": { 00:24:25.156 "crdt1": 0, 00:24:25.156 "crdt2": 0, 00:24:25.156 "crdt3": 0 00:24:25.156 } 00:24:25.156 }, 00:24:25.156 { 00:24:25.156 "method": "nvmf_create_transport", 00:24:25.156 "params": { 00:24:25.156 "trtype": "TCP", 00:24:25.156 "max_queue_depth": 128, 00:24:25.156 "max_io_qpairs_per_ctrlr": 127, 00:24:25.156 "in_capsule_data_size": 
4096, 00:24:25.156 "max_io_size": 131072, 00:24:25.156 "io_unit_size": 131072, 00:24:25.156 "max_aq_depth": 128, 00:24:25.156 "num_shared_buffers": 511, 00:24:25.156 "buf_cache_size": 4294967295, 00:24:25.156 "dif_insert_or_strip": false, 00:24:25.156 "zcopy": false, 00:24:25.156 "c2h_success": false, 00:24:25.156 "sock_priority": 0, 00:24:25.156 "abort_timeout_sec": 1, 00:24:25.156 "ack_timeout": 0, 00:24:25.156 "data_wr_pool_size": 0 00:24:25.156 } 00:24:25.156 }, 00:24:25.156 { 00:24:25.156 "method": "nvmf_create_subsystem", 00:24:25.156 "params": { 00:24:25.156 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:25.156 "allow_any_host": false, 00:24:25.156 "serial_number": "00000000000000000000", 00:24:25.156 "model_number": "SPDK bdev Controller", 00:24:25.156 "max_namespaces": 32, 00:24:25.156 "min_cntlid": 1, 00:24:25.156 "max_cntlid": 65519, 00:24:25.156 "ana_reporting": false 00:24:25.156 } 00:24:25.156 }, 00:24:25.156 { 00:24:25.156 "method": "nvmf_subsystem_add_host", 00:24:25.156 "params": { 00:24:25.156 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:25.156 "host": "nqn.2016-06.io.spdk:host1", 00:24:25.156 "psk": "key0" 00:24:25.156 } 00:24:25.156 }, 00:24:25.156 { 00:24:25.156 "method": "nvmf_subsystem_add_ns", 00:24:25.156 "params": { 00:24:25.156 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:25.156 "namespace": { 00:24:25.156 "nsid": 1, 00:24:25.156 "bdev_name": "malloc0", 00:24:25.156 "nguid": "DEC114FE841E40EAA887ED86508958E9", 00:24:25.156 "uuid": "dec114fe-841e-40ea-a887-ed86508958e9", 00:24:25.156 "no_auto_visible": false 00:24:25.156 } 00:24:25.156 } 00:24:25.156 }, 00:24:25.156 { 00:24:25.156 "method": "nvmf_subsystem_add_listener", 00:24:25.156 "params": { 00:24:25.156 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:25.156 "listen_address": { 00:24:25.156 "trtype": "TCP", 00:24:25.156 "adrfam": "IPv4", 00:24:25.156 "traddr": "10.0.0.2", 00:24:25.156 "trsvcid": "4420" 00:24:25.156 }, 00:24:25.156 "secure_channel": false, 00:24:25.156 "sock_impl": "ssl" 
00:24:25.156 } 00:24:25.156 } 00:24:25.156 ] 00:24:25.156 } 00:24:25.156 ] 00:24:25.156 }' 00:24:25.156 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:25.156 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:25.156 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=3197121 00:24:25.156 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:25.156 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3197121 00:24:25.156 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3197121 ']' 00:24:25.156 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:25.156 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:25.156 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:25.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:25.156 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:25.156 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:25.156 [2024-09-29 16:32:25.559091] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:24:25.156 [2024-09-29 16:32:25.559243] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:25.156 [2024-09-29 16:32:25.706180] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.414 [2024-09-29 16:32:25.959684] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:25.414 [2024-09-29 16:32:25.959774] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:25.414 [2024-09-29 16:32:25.959805] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:25.414 [2024-09-29 16:32:25.959831] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:25.414 [2024-09-29 16:32:25.959851] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:25.414 [2024-09-29 16:32:25.959990] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.979 [2024-09-29 16:32:26.518984] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:26.267 [2024-09-29 16:32:26.551068] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:26.267 [2024-09-29 16:32:26.551507] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:26.267 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:26.267 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:26.267 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:26.267 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:26.267 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:26.267 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:26.267 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3197275 00:24:26.267 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3197275 /var/tmp/bdevperf.sock 00:24:26.267 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3197275 ']' 00:24:26.267 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:26.267 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:26.267 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:24:26.267 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:26.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:26.267 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:26.267 "subsystems": [ 00:24:26.267 { 00:24:26.267 "subsystem": "keyring", 00:24:26.267 "config": [ 00:24:26.267 { 00:24:26.267 "method": "keyring_file_add_key", 00:24:26.267 "params": { 00:24:26.267 "name": "key0", 00:24:26.267 "path": "/tmp/tmp.lZx38bCGaF" 00:24:26.267 } 00:24:26.267 } 00:24:26.267 ] 00:24:26.267 }, 00:24:26.267 { 00:24:26.267 "subsystem": "iobuf", 00:24:26.267 "config": [ 00:24:26.267 { 00:24:26.267 "method": "iobuf_set_options", 00:24:26.267 "params": { 00:24:26.267 "small_pool_count": 8192, 00:24:26.267 "large_pool_count": 1024, 00:24:26.267 "small_bufsize": 8192, 00:24:26.267 "large_bufsize": 135168 00:24:26.267 } 00:24:26.267 } 00:24:26.267 ] 00:24:26.267 }, 00:24:26.267 { 00:24:26.267 "subsystem": "sock", 00:24:26.267 "config": [ 00:24:26.267 { 00:24:26.267 "method": "sock_set_default_impl", 00:24:26.267 "params": { 00:24:26.267 "impl_name": "posix" 00:24:26.267 } 00:24:26.267 }, 00:24:26.267 { 00:24:26.267 "method": "sock_impl_set_options", 00:24:26.267 "params": { 00:24:26.267 "impl_name": "ssl", 00:24:26.267 "recv_buf_size": 4096, 00:24:26.267 "send_buf_size": 4096, 00:24:26.267 "enable_recv_pipe": true, 00:24:26.267 "enable_quickack": false, 00:24:26.267 "enable_placement_id": 0, 00:24:26.267 "enable_zerocopy_send_server": true, 00:24:26.267 "enable_zerocopy_send_client": false, 00:24:26.267 "zerocopy_threshold": 0, 00:24:26.267 "tls_version": 0, 00:24:26.267 "enable_ktls": false 00:24:26.267 } 00:24:26.267 }, 00:24:26.267 { 00:24:26.267 "method": "sock_impl_set_options", 00:24:26.267 "params": { 00:24:26.267 "impl_name": "posix", 
00:24:26.267 "recv_buf_size": 2097152, 00:24:26.267 "send_buf_size": 2097152, 00:24:26.267 "enable_recv_pipe": true, 00:24:26.267 "enable_quickack": false, 00:24:26.267 "enable_placement_id": 0, 00:24:26.267 "enable_zerocopy_send_server": true, 00:24:26.267 "enable_zerocopy_send_client": false, 00:24:26.267 "zerocopy_threshold": 0, 00:24:26.267 "tls_version": 0, 00:24:26.267 "enable_ktls": false 00:24:26.267 } 00:24:26.267 } 00:24:26.267 ] 00:24:26.267 }, 00:24:26.267 { 00:24:26.267 "subsystem": "vmd", 00:24:26.267 "config": [] 00:24:26.267 }, 00:24:26.267 { 00:24:26.267 "subsystem": "accel", 00:24:26.267 "config": [ 00:24:26.267 { 00:24:26.267 "method": "accel_set_options", 00:24:26.267 "params": { 00:24:26.267 "small_cache_size": 128, 00:24:26.267 "large_cache_size": 16, 00:24:26.267 "task_count": 2048, 00:24:26.267 "sequence_count": 2048, 00:24:26.267 "buf_count": 2048 00:24:26.267 } 00:24:26.267 } 00:24:26.267 ] 00:24:26.267 }, 00:24:26.267 { 00:24:26.267 "subsystem": "bdev", 00:24:26.267 "config": [ 00:24:26.267 { 00:24:26.267 "method": "bdev_set_options", 00:24:26.267 "params": { 00:24:26.267 "bdev_io_pool_size": 65535, 00:24:26.267 "bdev_io_cache_size": 256, 00:24:26.267 "bdev_auto_examine": true, 00:24:26.267 "iobuf_small_cache_size": 128, 00:24:26.267 "iobuf_large_cache_size": 16 00:24:26.267 } 00:24:26.267 }, 00:24:26.267 { 00:24:26.267 "method": "bdev_raid_set_options", 00:24:26.267 "params": { 00:24:26.267 "process_window_size_kb": 1024, 00:24:26.267 "process_max_bandwidth_mb_sec": 0 00:24:26.267 } 00:24:26.267 }, 00:24:26.267 { 00:24:26.267 "method": "bdev_iscsi_set_options", 00:24:26.267 "params": { 00:24:26.267 "timeout_sec": 30 00:24:26.267 } 00:24:26.267 }, 00:24:26.267 { 00:24:26.267 "method": "bdev_nvme_set_options", 00:24:26.267 "params": { 00:24:26.267 "action_on_timeout": "none", 00:24:26.267 "timeout_us": 0, 00:24:26.267 "timeout_admin_us": 0, 00:24:26.267 "keep_alive_timeout_ms": 10000, 00:24:26.267 "arbitration_burst": 0, 00:24:26.267 
"low_priority_weight": 0, 00:24:26.267 "medium_priority_weight": 0, 00:24:26.267 "high_priority_weight": 0, 00:24:26.267 "nvme_adminq_poll_period_us": 10000, 00:24:26.268 "nvme_ioq_poll_period_us": 0, 00:24:26.268 "io_queue_requests": 512, 00:24:26.268 "delay_cmd_submit": true, 00:24:26.268 "transport_retry_count": 4, 00:24:26.268 "bdev_retry_count": 3, 00:24:26.268 "transport_ack_timeout": 0, 00:24:26.268 "ctrlr_loss_timeout_sec": 0, 00:24:26.268 "reconnect_delay_sec": 0, 00:24:26.268 "fast_io_fail_timeout_sec": 0, 00:24:26.268 "disable_auto_failback": false, 00:24:26.268 "generate_uuids": false, 00:24:26.268 "transport_tos": 0, 00:24:26.268 "nvme_error_stat": false, 00:24:26.268 "rdma_srq_size": 0, 00:24:26.268 "io_path_stat": false, 00:24:26.268 "allow_accel_sequence": false, 00:24:26.268 "rdma_max_cq_size": 0, 00:24:26.268 "rdma_cm_event_timeout_ms": 0, 00:24:26.268 "dhchap_digests": [ 00:24:26.268 "sha256", 00:24:26.268 "sha384", 00:24:26.268 "sha512" 00:24:26.268 ], 00:24:26.268 "dhchap_dhgroups": [ 00:24:26.268 "null", 00:24:26.268 "ffdhe2048", 00:24:26.268 "ffdhe3072", 00:24:26.268 "ffdhe4096", 00:24:26.268 "ffdhe6144", 00:24:26.268 "ffdhe8192" 00:24:26.268 ] 00:24:26.268 } 00:24:26.268 }, 00:24:26.268 { 00:24:26.268 "method": "bdev_nvme_attach_controller", 00:24:26.268 "params": { 00:24:26.268 "name": "nvme0", 00:24:26.268 "trtype": "TCP", 00:24:26.268 "adrfam": "IPv4", 00:24:26.268 "traddr": "10.0.0.2", 00:24:26.268 "trsvcid": "4420", 00:24:26.268 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:26.268 "prchk_reftag": false, 00:24:26.268 "prchk_guard": false, 00:24:26.268 "ctrlr_loss_timeout_sec": 0, 00:24:26.268 "reconnect_delay_sec": 0, 00:24:26.268 "fast_io_fail_timeout_sec": 0, 00:24:26.268 "psk": "key0", 00:24:26.268 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:26.268 "hdgst": false, 00:24:26.268 "ddgst": false 00:24:26.268 } 00:24:26.268 }, 00:24:26.268 { 00:24:26.268 "method": "bdev_nvme_set_hotplug", 00:24:26.268 "params": { 00:24:26.268 
"period_us": 100000, 00:24:26.268 "enable": false 00:24:26.268 } 00:24:26.268 }, 00:24:26.268 { 00:24:26.268 "method": "bdev_enable_histogram", 00:24:26.268 "params": { 00:24:26.268 "name": "nvme0n1", 00:24:26.268 "enable": true 00:24:26.268 } 00:24:26.268 }, 00:24:26.268 { 00:24:26.268 "method": "bdev_wait_for_examine" 00:24:26.268 } 00:24:26.268 ] 00:24:26.268 }, 00:24:26.268 { 00:24:26.268 "subsystem": "nbd", 00:24:26.268 "config": [] 00:24:26.268 } 00:24:26.268 ] 00:24:26.268 }' 00:24:26.268 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:26.268 16:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:26.268 [2024-09-29 16:32:26.688773] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:24:26.268 [2024-09-29 16:32:26.688903] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3197275 ] 00:24:26.552 [2024-09-29 16:32:26.823424] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.552 [2024-09-29 16:32:27.076946] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:27.118 [2024-09-29 16:32:27.525048] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:27.118 16:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:27.118 16:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:27.118 16:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:27.118 16:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 
00:24:27.376 16:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.376 16:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:27.634 Running I/O for 1 seconds... 00:24:28.567 1716.00 IOPS, 6.70 MiB/s 00:24:28.567 Latency(us) 00:24:28.567 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:28.567 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:28.567 Verification LBA range: start 0x0 length 0x2000 00:24:28.567 nvme0n1 : 1.03 1790.92 7.00 0.00 0.00 70835.98 13398.47 64856.37 00:24:28.567 =================================================================================================================== 00:24:28.567 Total : 1790.92 7.00 0.00 0.00 70835.98 13398.47 64856.37 00:24:28.567 { 00:24:28.567 "results": [ 00:24:28.567 { 00:24:28.567 "job": "nvme0n1", 00:24:28.567 "core_mask": "0x2", 00:24:28.567 "workload": "verify", 00:24:28.567 "status": "finished", 00:24:28.567 "verify_range": { 00:24:28.567 "start": 0, 00:24:28.567 "length": 8192 00:24:28.567 }, 00:24:28.567 "queue_depth": 128, 00:24:28.567 "io_size": 4096, 00:24:28.567 "runtime": 1.029641, 00:24:28.567 "iops": 1790.9154744226387, 00:24:28.567 "mibps": 6.995763571963432, 00:24:28.567 "io_failed": 0, 00:24:28.567 "io_timeout": 0, 00:24:28.567 "avg_latency_us": 70835.98114244395, 00:24:28.567 "min_latency_us": 13398.471111111112, 00:24:28.567 "max_latency_us": 64856.36740740741 00:24:28.567 } 00:24:28.567 ], 00:24:28.567 "core_count": 1 00:24:28.567 } 00:24:28.567 16:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:28.567 16:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:24:28.567 16:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 
00:24:28.567 16:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:24:28.567 16:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:24:28.567 16:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:24:28.567 16:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:28.567 16:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:24:28.567 16:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:24:28.567 16:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:24:28.567 16:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:28.567 nvmf_trace.0 00:24:28.825 16:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:24:28.825 16:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3197275 00:24:28.825 16:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3197275 ']' 00:24:28.825 16:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3197275 00:24:28.825 16:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:28.825 16:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:28.825 16:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3197275 00:24:28.825 16:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:28.825 16:32:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:28.825 16:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3197275' 00:24:28.825 killing process with pid 3197275 00:24:28.825 16:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3197275 00:24:28.825 Received shutdown signal, test time was about 1.000000 seconds 00:24:28.825 00:24:28.825 Latency(us) 00:24:28.825 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:28.825 =================================================================================================================== 00:24:28.825 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:28.825 16:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3197275 00:24:29.757 16:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:29.757 16:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:29.757 16:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:29.757 16:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:29.757 16:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:29.757 16:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:29.757 16:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:29.757 rmmod nvme_tcp 00:24:29.757 rmmod nvme_fabrics 00:24:29.757 rmmod nvme_keyring 00:24:30.016 16:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:30.016 16:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:30.016 16:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:24:30.017 16:32:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@513 -- # '[' -n 3197121 ']' 00:24:30.017 16:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # killprocess 3197121 00:24:30.017 16:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3197121 ']' 00:24:30.017 16:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3197121 00:24:30.017 16:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:30.017 16:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:30.017 16:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3197121 00:24:30.017 16:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:30.017 16:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:30.017 16:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3197121' 00:24:30.017 killing process with pid 3197121 00:24:30.017 16:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3197121 00:24:30.017 16:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3197121 00:24:31.392 16:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:24:31.392 16:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:24:31.392 16:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:24:31.392 16:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:24:31.392 16:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-save 00:24:31.392 16:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # grep -v 
SPDK_NVMF 00:24:31.392 16:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-restore 00:24:31.392 16:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:31.392 16:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:31.392 16:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.392 16:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:31.392 16:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:33.296 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:33.296 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.IxBN9ielQ6 /tmp/tmp.e1XkjMW6O6 /tmp/tmp.lZx38bCGaF 00:24:33.296 00:24:33.296 real 1m56.064s 00:24:33.296 user 3m14.254s 00:24:33.296 sys 0m26.372s 00:24:33.296 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:33.296 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:33.296 ************************************ 00:24:33.296 END TEST nvmf_tls 00:24:33.296 ************************************ 00:24:33.296 16:32:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:33.296 16:32:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:33.296 16:32:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:33.296 16:32:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:33.296 ************************************ 00:24:33.296 START TEST nvmf_fips 
00:24:33.296 ************************************ 00:24:33.296 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:33.296 * Looking for test storage... 00:24:33.296 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:33.296 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:33.296 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:24:33.296 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:33.556 16:32:33 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:33.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:33.556 --rc genhtml_branch_coverage=1 00:24:33.556 --rc genhtml_function_coverage=1 00:24:33.556 --rc genhtml_legend=1 00:24:33.556 --rc geninfo_all_blocks=1 00:24:33.556 --rc geninfo_unexecuted_blocks=1 00:24:33.556 00:24:33.556 ' 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:33.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:33.556 --rc genhtml_branch_coverage=1 00:24:33.556 --rc genhtml_function_coverage=1 00:24:33.556 --rc genhtml_legend=1 00:24:33.556 --rc geninfo_all_blocks=1 00:24:33.556 --rc geninfo_unexecuted_blocks=1 00:24:33.556 00:24:33.556 ' 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:33.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:33.556 --rc genhtml_branch_coverage=1 00:24:33.556 --rc genhtml_function_coverage=1 00:24:33.556 --rc genhtml_legend=1 00:24:33.556 --rc geninfo_all_blocks=1 00:24:33.556 --rc geninfo_unexecuted_blocks=1 00:24:33.556 00:24:33.556 ' 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:33.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:33.556 --rc genhtml_branch_coverage=1 00:24:33.556 --rc genhtml_function_coverage=1 00:24:33.556 --rc genhtml_legend=1 00:24:33.556 --rc geninfo_all_blocks=1 00:24:33.556 --rc geninfo_unexecuted_blocks=1 00:24:33.556 00:24:33.556 ' 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:33.556 
16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s 
extglob 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:33.556 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:33.557 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:33.557 16:32:33 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:33.557 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:33.557 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:33.557 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:33.557 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:33.557 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:33.557 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:33.557 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:24:33.557 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:33.557 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:24:33.557 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:33.557 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:24:33.557 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:33.557 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # type -P openssl 00:24:33.557 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:33.557 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:24:33.557 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:24:33.557 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:24:33.557 Error setting digest 00:24:33.557 40E216F3FD7E0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:33.557 40E216F3FD7E0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:33.557 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:24:33.557 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:33.558 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:33.558 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:33.558 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:33.558 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:24:33.558 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:33.558 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # prepare_net_devs 00:24:33.558 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@434 -- # local -g is_hw=no 00:24:33.558 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # remove_spdk_ns 00:24:33.558 16:32:34 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:33.558 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:33.558 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:33.558 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:24:33.558 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:24:33.558 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:33.558 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:24:36.089 16:32:36 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:36.089 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:36.089 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:36.089 16:32:36 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:36.089 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:36.089 
16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:36.089 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # is_hw=yes 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:36.089 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:36.089 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:24:36.089 00:24:36.089 --- 10.0.0.2 ping statistics --- 00:24:36.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.089 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:36.089 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:36.089 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:24:36.089 00:24:36.089 --- 10.0.0.1 ping statistics --- 00:24:36.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.089 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # return 0 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:36.089 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:24:36.090 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:24:36.090 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:36.090 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:24:36.090 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:24:36.090 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:36.090 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:36.090 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:36.090 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:36.090 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # nvmfpid=3199787 00:24:36.090 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # waitforlisten 3199787 00:24:36.090 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:36.090 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 3199787 ']' 00:24:36.090 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:36.090 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:36.090 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:36.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:36.090 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:36.090 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:36.090 [2024-09-29 16:32:36.407591] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:24:36.090 [2024-09-29 16:32:36.407795] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:36.090 [2024-09-29 16:32:36.563558] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.348 [2024-09-29 16:32:36.825912] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:36.348 [2024-09-29 16:32:36.826008] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:36.348 [2024-09-29 16:32:36.826041] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:36.348 [2024-09-29 16:32:36.826066] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:36.348 [2024-09-29 16:32:36.826086] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:36.348 [2024-09-29 16:32:36.826143] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:36.914 16:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:36.914 16:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:24:36.914 16:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:36.914 16:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:36.914 16:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:36.914 16:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:36.914 16:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:36.914 16:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:36.914 16:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:36.914 16:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.c9r 00:24:36.914 16:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:36.914 16:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.c9r 00:24:36.914 16:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.c9r 00:24:36.914 16:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.c9r 00:24:36.914 16:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:37.173 [2024-09-29 16:32:37.633638] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:37.173 [2024-09-29 16:32:37.649590] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:37.173 [2024-09-29 16:32:37.649953] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:37.173 malloc0 00:24:37.431 16:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:37.431 16:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3199950 00:24:37.431 16:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:37.431 16:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3199950 /var/tmp/bdevperf.sock 00:24:37.431 16:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 3199950 ']' 00:24:37.431 16:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:37.431 16:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:37.431 16:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:37.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:37.432 16:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:37.432 16:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:37.432 [2024-09-29 16:32:37.870619] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:24:37.432 [2024-09-29 16:32:37.870802] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3199950 ] 00:24:37.690 [2024-09-29 16:32:37.996837] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.690 [2024-09-29 16:32:38.220659] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:38.256 16:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:38.256 16:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:24:38.256 16:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.c9r 00:24:38.823 16:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:38.823 [2024-09-29 16:32:39.335966] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:39.081 TLSTESTn1 00:24:39.081 16:32:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:39.081 Running I/O for 10 seconds... 
00:24:49.298 2579.00 IOPS, 10.07 MiB/s 2636.00 IOPS, 10.30 MiB/s 2646.00 IOPS, 10.34 MiB/s 2647.50 IOPS, 10.34 MiB/s 2650.40 IOPS, 10.35 MiB/s 2653.33 IOPS, 10.36 MiB/s 2653.71 IOPS, 10.37 MiB/s 2657.12 IOPS, 10.38 MiB/s 2656.00 IOPS, 10.38 MiB/s 2657.00 IOPS, 10.38 MiB/s 00:24:49.298 Latency(us) 00:24:49.298 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:49.298 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:49.298 Verification LBA range: start 0x0 length 0x2000 00:24:49.298 TLSTESTn1 : 10.04 2660.28 10.39 0.00 0.00 48003.93 7767.23 37865.24 00:24:49.298 =================================================================================================================== 00:24:49.298 Total : 2660.28 10.39 0.00 0.00 48003.93 7767.23 37865.24 00:24:49.298 { 00:24:49.298 "results": [ 00:24:49.298 { 00:24:49.298 "job": "TLSTESTn1", 00:24:49.298 "core_mask": "0x4", 00:24:49.298 "workload": "verify", 00:24:49.298 "status": "finished", 00:24:49.298 "verify_range": { 00:24:49.298 "start": 0, 00:24:49.298 "length": 8192 00:24:49.298 }, 00:24:49.298 "queue_depth": 128, 00:24:49.298 "io_size": 4096, 00:24:49.298 "runtime": 10.035028, 00:24:49.298 "iops": 2660.2815657315555, 00:24:49.298 "mibps": 10.391724866138889, 00:24:49.298 "io_failed": 0, 00:24:49.298 "io_timeout": 0, 00:24:49.298 "avg_latency_us": 48003.92528218959, 00:24:49.298 "min_latency_us": 7767.22962962963, 00:24:49.298 "max_latency_us": 37865.24444444444 00:24:49.298 } 00:24:49.298 ], 00:24:49.298 "core_count": 1 00:24:49.298 } 00:24:49.298 16:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:49.298 16:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:49.298 16:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:24:49.298 16:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:24:49.298 16:32:49 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:24:49.298 16:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:49.298 16:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:24:49.298 16:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:24:49.298 16:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:24:49.298 16:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:49.298 nvmf_trace.0 00:24:49.298 16:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:24:49.298 16:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3199950 00:24:49.298 16:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 3199950 ']' 00:24:49.298 16:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 3199950 00:24:49.298 16:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:24:49.298 16:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:49.298 16:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3199950 00:24:49.298 16:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:49.298 16:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:49.298 16:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
3199950' 00:24:49.298 killing process with pid 3199950 00:24:49.298 16:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 3199950 00:24:49.298 Received shutdown signal, test time was about 10.000000 seconds 00:24:49.298 00:24:49.298 Latency(us) 00:24:49.298 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:49.298 =================================================================================================================== 00:24:49.298 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:49.298 16:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 3199950 00:24:50.233 16:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:50.233 16:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:50.233 16:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:24:50.233 16:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:50.233 16:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:24:50.233 16:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:50.233 16:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:50.233 rmmod nvme_tcp 00:24:50.491 rmmod nvme_fabrics 00:24:50.491 rmmod nvme_keyring 00:24:50.491 16:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:50.491 16:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:24:50.491 16:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:24:50.491 16:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@513 -- # '[' -n 3199787 ']' 00:24:50.491 16:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # killprocess 3199787 00:24:50.491 16:32:50 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 3199787 ']' 00:24:50.491 16:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 3199787 00:24:50.491 16:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:24:50.491 16:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:50.491 16:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3199787 00:24:50.491 16:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:50.491 16:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:50.491 16:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3199787' 00:24:50.491 killing process with pid 3199787 00:24:50.491 16:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 3199787 00:24:50.491 16:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 3199787 00:24:51.865 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:24:51.865 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:24:51.865 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:24:51.865 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:24:51.865 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-save 00:24:51.865 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:24:51.865 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-restore 00:24:51.866 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # 
[[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:51.866 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:51.866 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.866 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:51.866 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:54.398 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:54.398 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.c9r 00:24:54.398 00:24:54.398 real 0m20.554s 00:24:54.398 user 0m28.396s 00:24:54.398 sys 0m5.287s 00:24:54.398 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:54.398 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:54.398 ************************************ 00:24:54.398 END TEST nvmf_fips 00:24:54.398 ************************************ 00:24:54.398 16:32:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:54.398 16:32:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:54.398 16:32:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:54.398 16:32:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:54.398 ************************************ 00:24:54.398 START TEST nvmf_control_msg_list 00:24:54.398 ************************************ 00:24:54.398 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:54.398 * Looking for test storage... 00:24:54.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:54.398 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:54.398 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:24:54.398 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:54.398 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:54.398 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:54.398 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:54.398 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:54.398 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:24:54.398 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:24:54.398 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:24:54.398 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:24:54.398 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:24:54.398 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:24:54.398 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:24:54.398 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:24:54.398 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:24:54.398 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:24:54.398 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:54.398 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:54.398 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:24:54.398 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:24:54.398 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:54.398 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:24:54.398 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:24:54.398 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:24:54.398 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:24:54.398 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:54.398 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:24:54.398 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:24:54.398 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:54.398 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:54.398 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:24:54.398 
16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:54.398 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:54.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.398 --rc genhtml_branch_coverage=1 00:24:54.398 --rc genhtml_function_coverage=1 00:24:54.398 --rc genhtml_legend=1 00:24:54.398 --rc geninfo_all_blocks=1 00:24:54.398 --rc geninfo_unexecuted_blocks=1 00:24:54.398 00:24:54.398 ' 00:24:54.398 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:54.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.398 --rc genhtml_branch_coverage=1 00:24:54.398 --rc genhtml_function_coverage=1 00:24:54.398 --rc genhtml_legend=1 00:24:54.398 --rc geninfo_all_blocks=1 00:24:54.398 --rc geninfo_unexecuted_blocks=1 00:24:54.398 00:24:54.399 ' 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:54.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.399 --rc genhtml_branch_coverage=1 00:24:54.399 --rc genhtml_function_coverage=1 00:24:54.399 --rc genhtml_legend=1 00:24:54.399 --rc geninfo_all_blocks=1 00:24:54.399 --rc geninfo_unexecuted_blocks=1 00:24:54.399 00:24:54.399 ' 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:54.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.399 --rc genhtml_branch_coverage=1 00:24:54.399 --rc genhtml_function_coverage=1 00:24:54.399 --rc genhtml_legend=1 00:24:54.399 --rc geninfo_all_blocks=1 00:24:54.399 --rc geninfo_unexecuted_blocks=1 00:24:54.399 00:24:54.399 ' 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:54.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 
-- # nvmftestinit 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # prepare_net_devs 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@434 -- # local -g is_hw=no 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # remove_spdk_ns 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:24:54.399 16:32:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:56.302 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:56.302 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:24:56.302 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:56.302 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:56.302 16:32:56 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:56.302 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:56.302 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:56.302 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:24:56.302 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:56.302 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:24:56.302 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:24:56.302 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:24:56.302 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:24:56.302 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:24:56.302 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:24:56.302 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:56.302 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:56.302 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:56.302 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:56.302 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:56.302 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:56.302 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:56.302 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:56.302 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:56.302 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:56.302 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:56.302 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:24:56.302 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:24:56.302 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:24:56.302 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:24:56.302 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:24:56.302 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:24:56.302 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:56.303 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:56.303 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.303 16:32:56 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:56.303 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:56.303 Found net devices 
under 0000:0a:00.1: cvl_0_1 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # is_hw=yes 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:56.303 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:56.303 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:24:56.303 00:24:56.303 --- 10.0.0.2 ping statistics --- 00:24:56.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.303 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:56.303 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:56.303 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:24:56.303 00:24:56.303 --- 10.0.0.1 ping statistics --- 00:24:56.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.303 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # return 0 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:24:56.303 16:32:56 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:56.303 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:56.304 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # nvmfpid=3203600 00:24:56.304 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:56.304 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # waitforlisten 3203600 00:24:56.304 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 3203600 ']' 00:24:56.304 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:56.304 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:56.304 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:56.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:56.304 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:56.304 16:32:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:56.304 [2024-09-29 16:32:56.795333] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:24:56.304 [2024-09-29 16:32:56.795482] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:56.561 [2024-09-29 16:32:56.929462] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.819 [2024-09-29 16:32:57.178204] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:56.819 [2024-09-29 16:32:57.178297] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:56.819 [2024-09-29 16:32:57.178324] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:56.819 [2024-09-29 16:32:57.178349] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:56.819 [2024-09-29 16:32:57.178368] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:56.819 [2024-09-29 16:32:57.178419] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:57.385 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:57.385 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:24:57.385 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:57.385 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:57.385 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:57.385 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:57.385 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:57.385 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:57.385 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:24:57.385 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.385 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:57.385 [2024-09-29 16:32:57.784293] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:57.385 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.385 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:24:57.385 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.385 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:57.385 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.385 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:57.385 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.385 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:57.385 Malloc0 00:24:57.385 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.385 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:57.385 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.385 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:57.385 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.385 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:57.385 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.385 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:57.385 [2024-09-29 16:32:57.868764] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:57.385 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.385 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3203749 00:24:57.385 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:57.385 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3203750 00:24:57.385 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:57.385 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3203751 00:24:57.385 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:57.385 16:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3203749 00:24:57.643 [2024-09-29 16:32:57.989291] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:24:57.643 [2024-09-29 16:32:57.989832] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:57.643 [2024-09-29 16:32:57.990233] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:58.578 Initializing NVMe Controllers 00:24:58.578 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:58.578 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:24:58.578 Initialization complete. Launching workers. 00:24:58.578 ======================================================== 00:24:58.578 Latency(us) 00:24:58.578 Device Information : IOPS MiB/s Average min max 00:24:58.578 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 2994.94 11.70 333.36 247.25 1081.01 00:24:58.578 ======================================================== 00:24:58.578 Total : 2994.94 11.70 333.36 247.25 1081.01 00:24:58.578 00:24:58.578 Initializing NVMe Controllers 00:24:58.578 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:58.578 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:24:58.578 Initialization complete. Launching workers. 
00:24:58.578 ======================================================== 00:24:58.578 Latency(us) 00:24:58.578 Device Information : IOPS MiB/s Average min max 00:24:58.578 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 41242.02 40481.00 42035.56 00:24:58.578 ======================================================== 00:24:58.578 Total : 25.00 0.10 41242.02 40481.00 42035.56 00:24:58.578 00:24:58.835 Initializing NVMe Controllers 00:24:58.835 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:58.835 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:24:58.835 Initialization complete. Launching workers. 00:24:58.835 ======================================================== 00:24:58.835 Latency(us) 00:24:58.835 Device Information : IOPS MiB/s Average min max 00:24:58.835 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3058.00 11.95 326.48 227.49 853.00 00:24:58.835 ======================================================== 00:24:58.835 Total : 3058.00 11.95 326.48 227.49 853.00 00:24:58.835 00:24:58.835 16:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3203750 00:24:58.835 16:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3203751 00:24:58.835 16:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:58.835 16:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:24:58.835 16:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:58.835 16:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:24:58.835 16:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:58.835 16:32:59 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:24:58.835 16:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:58.835 16:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:58.835 rmmod nvme_tcp 00:24:58.835 rmmod nvme_fabrics 00:24:58.835 rmmod nvme_keyring 00:24:58.835 16:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:58.835 16:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:24:58.835 16:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:24:58.835 16:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@513 -- # '[' -n 3203600 ']' 00:24:58.835 16:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # killprocess 3203600 00:24:58.835 16:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 3203600 ']' 00:24:58.835 16:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 3203600 00:24:58.835 16:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:24:58.835 16:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:58.835 16:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3203600 00:24:58.835 16:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:58.836 16:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:58.836 16:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- 
# echo 'killing process with pid 3203600' 00:24:58.836 killing process with pid 3203600 00:24:58.836 16:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 3203600 00:24:58.836 16:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 3203600 00:25:00.213 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:00.213 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:25:00.213 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:25:00.213 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:25:00.213 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-save 00:25:00.213 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:25:00.213 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-restore 00:25:00.213 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:00.213 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:00.213 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:00.213 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:00.213 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.824 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:02.824 00:25:02.824 real 0m8.371s 00:25:02.824 user 0m7.912s 
00:25:02.824 sys 0m2.824s 00:25:02.824 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:02.824 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:02.824 ************************************ 00:25:02.824 END TEST nvmf_control_msg_list 00:25:02.824 ************************************ 00:25:02.824 16:33:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:02.824 16:33:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:02.824 16:33:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:02.824 16:33:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:02.824 ************************************ 00:25:02.824 START TEST nvmf_wait_for_buf 00:25:02.824 ************************************ 00:25:02.824 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:02.824 * Looking for test storage... 
00:25:02.824 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:02.824 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:02.824 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:25:02.824 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:02.824 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:02.824 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:02.824 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:02.824 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:02.824 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:25:02.824 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:25:02.824 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:25:02.824 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:25:02.824 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:25:02.824 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:25:02.824 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:25:02.824 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:02.824 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:25:02.824 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:25:02.824 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:02.824 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:02.824 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:25:02.824 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:25:02.825 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:02.825 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:25:02.825 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:02.825 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:25:02.825 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:25:02.825 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:02.825 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:25:02.825 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:02.825 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:02.825 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:02.825 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:25:02.825 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:02.825 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # 
export 'LCOV_OPTS= 00:25:02.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.825 --rc genhtml_branch_coverage=1 00:25:02.825 --rc genhtml_function_coverage=1 00:25:02.825 --rc genhtml_legend=1 00:25:02.825 --rc geninfo_all_blocks=1 00:25:02.825 --rc geninfo_unexecuted_blocks=1 00:25:02.825 00:25:02.825 ' 00:25:02.825 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:02.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.825 --rc genhtml_branch_coverage=1 00:25:02.825 --rc genhtml_function_coverage=1 00:25:02.825 --rc genhtml_legend=1 00:25:02.825 --rc geninfo_all_blocks=1 00:25:02.825 --rc geninfo_unexecuted_blocks=1 00:25:02.825 00:25:02.825 ' 00:25:02.825 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:02.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.825 --rc genhtml_branch_coverage=1 00:25:02.825 --rc genhtml_function_coverage=1 00:25:02.825 --rc genhtml_legend=1 00:25:02.825 --rc geninfo_all_blocks=1 00:25:02.825 --rc geninfo_unexecuted_blocks=1 00:25:02.825 00:25:02.825 ' 00:25:02.825 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:02.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.825 --rc genhtml_branch_coverage=1 00:25:02.825 --rc genhtml_function_coverage=1 00:25:02.825 --rc genhtml_legend=1 00:25:02.825 --rc geninfo_all_blocks=1 00:25:02.825 --rc geninfo_unexecuted_blocks=1 00:25:02.825 00:25:02.825 ' 00:25:02.825 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:02.825 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:25:02.825 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:25:02.825 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:02.825 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:02.825 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:02.825 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:02.825 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:02.825 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:02.825 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:02.825 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:02.825 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:02.825 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:02.825 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:02.825 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:02.825 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:02.825 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:02.825 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:02.825 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:02.825 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:02.825 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:02.825 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:02.825 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:02.825 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.825 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.825 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.825 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:25:02.825 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.825 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:25:02.825 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:02.825 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:02.825 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:02.825 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:25:02.825 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:02.825 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:02.825 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:02.825 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:02.825 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:02.825 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:02.825 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:25:02.825 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:25:02.825 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:02.825 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:02.825 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:02.825 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:02.825 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.825 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:02.825 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.825 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:25:02.825 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # 
gather_supported_nvmf_pci_devs 00:25:02.825 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:02.825 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 
00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:04.727 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:04.727 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@390 -- # (( 
0 > 0 )) 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:04.727 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:04.727 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf 
-- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:04.728 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # is_hw=yes 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:25:04.728 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:04.728 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms
00:25:04.728
00:25:04.728 --- 10.0.0.2 ping statistics ---
00:25:04.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:04.728 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms
00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:04.728 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:04.728 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms
00:25:04.728
00:25:04.728 --- 10.0.0.1 ping statistics ---
00:25:04.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:04.728 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms
00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # return 0
00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # '[' tcp ==
tcp ']' 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # nvmfpid=3206079 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # waitforlisten 3206079 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 3206079 ']' 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:04.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:04.728 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:04.986 [2024-09-29 16:33:05.308147] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:25:04.987 [2024-09-29 16:33:05.308320] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:04.987 [2024-09-29 16:33:05.446048] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.245 [2024-09-29 16:33:05.708491] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:05.245 [2024-09-29 16:33:05.708577] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:05.245 [2024-09-29 16:33:05.708604] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:05.245 [2024-09-29 16:33:05.708628] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:05.245 [2024-09-29 16:33:05.708648] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:05.245 [2024-09-29 16:33:05.708724] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:05.810 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:05.810 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:25:05.810 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:05.810 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:05.810 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:05.810 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:05.810 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:05.810 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:05.810 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:25:05.810 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.810 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:05.810 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.810 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:25:05.810 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 
00:25:05.810 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:05.810 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.810 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:25:05.810 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.810 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:06.068 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.068 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:06.068 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.068 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:06.068 Malloc0 00:25:06.068 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.068 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:25:06.068 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.068 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:06.068 [2024-09-29 16:33:06.626284] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:06.068 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.068 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem 
nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:25:06.068 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.068 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:06.326 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.326 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:06.326 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.326 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:06.326 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.326 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:06.326 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.326 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:06.326 [2024-09-29 16:33:06.650567] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:06.326 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.326 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:06.326 [2024-09-29 16:33:06.795886] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery 
subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:25:07.699 Initializing NVMe Controllers
00:25:07.699 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:25:07.699 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0
00:25:07.699 Initialization complete. Launching workers.
00:25:07.699 ========================================================
00:25:07.699                                                                                                          Latency(us)
00:25:07.699 Device Information                     :       IOPS      MiB/s    Average        min        max
00:25:07.699 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0:     124.56      15.57   33251.51   24005.08   63849.29
00:25:07.699 ========================================================
00:25:07.699 Total                                  :     124.56      15.57   33251.51   24005.08   63849.29
00:25:07.699
00:25:07.699 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats
00:25:07.699 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'
00:25:07.699 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:07.699 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:25:07.699 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:07.699 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1974
00:25:07.699 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1974 -eq 0 ]]
00:25:07.699 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:25:07.699 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- #
nvmftestfini 00:25:07.699 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:07.699 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:25:07.699 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:07.699 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:25:07.699 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:07.699 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:07.699 rmmod nvme_tcp 00:25:07.957 rmmod nvme_fabrics 00:25:07.957 rmmod nvme_keyring 00:25:07.957 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:07.957 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:25:07.957 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:25:07.957 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@513 -- # '[' -n 3206079 ']' 00:25:07.957 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # killprocess 3206079 00:25:07.957 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 3206079 ']' 00:25:07.957 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 3206079 00:25:07.957 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # uname 00:25:07.957 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:07.957 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3206079 00:25:07.957 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:07.957 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:07.957 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3206079' 00:25:07.957 killing process with pid 3206079 00:25:07.957 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 3206079 00:25:07.957 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 3206079 00:25:09.328 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:09.328 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:25:09.328 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:25:09.328 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:25:09.328 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-save 00:25:09.328 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:25:09.328 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-restore 00:25:09.328 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:09.328 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:09.328 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.328 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:09.328 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:11.230 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:25:11.230
00:25:11.230 real	0m8.864s
00:25:11.230 user	0m5.462s
00:25:11.230 sys	0m2.192s
00:25:11.230 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:25:11.230 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:25:11.230 ************************************
00:25:11.230 END TEST nvmf_wait_for_buf
00:25:11.230 ************************************
00:25:11.230 16:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']'
00:25:11.230 16:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp
00:25:11.230 16:33:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:25:11.230 16:33:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:25:11.230 16:33:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:25:11.230 ************************************
00:25:11.230 START TEST nvmf_fuzz
00:25:11.230 ************************************
00:25:11.230 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp
00:25:11.490 * Looking for test storage...
00:25:11.490 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:11.490 
16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:11.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.490 --rc genhtml_branch_coverage=1 00:25:11.490 --rc genhtml_function_coverage=1 00:25:11.490 --rc genhtml_legend=1 00:25:11.490 --rc geninfo_all_blocks=1 00:25:11.490 --rc 
geninfo_unexecuted_blocks=1 00:25:11.490 00:25:11.490 ' 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:11.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.490 --rc genhtml_branch_coverage=1 00:25:11.490 --rc genhtml_function_coverage=1 00:25:11.490 --rc genhtml_legend=1 00:25:11.490 --rc geninfo_all_blocks=1 00:25:11.490 --rc geninfo_unexecuted_blocks=1 00:25:11.490 00:25:11.490 ' 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:11.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.490 --rc genhtml_branch_coverage=1 00:25:11.490 --rc genhtml_function_coverage=1 00:25:11.490 --rc genhtml_legend=1 00:25:11.490 --rc geninfo_all_blocks=1 00:25:11.490 --rc geninfo_unexecuted_blocks=1 00:25:11.490 00:25:11.490 ' 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:11.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.490 --rc genhtml_branch_coverage=1 00:25:11.490 --rc genhtml_function_coverage=1 00:25:11.490 --rc genhtml_legend=1 00:25:11.490 --rc geninfo_all_blocks=1 00:25:11.490 --rc geninfo_unexecuted_blocks=1 00:25:11.490 00:25:11.490 ' 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:11.490 16:33:11 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.490 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:25:11.491 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.491 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:25:11.491 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:11.491 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:11.491 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:11.491 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:11.491 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:11.491 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:11.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:11.491 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:11.491 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:11.491 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:11.491 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:25:11.491 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:25:11.491 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:11.491 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:11.491 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:11.491 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:11.491 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.491 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:11.491 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.491 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:25:11.491 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:25:11.491 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:25:11.491 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:25:13.394 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:13.394 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:25:13.394 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:13.394 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:13.394 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:13.394 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:13.394 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:13.394 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:25:13.394 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:13.394 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:25:13.394 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:25:13.394 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:25:13.394 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:25:13.394 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:25:13.394 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:25:13.394 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:13.394 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:13.394 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:13.394 16:33:13 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:13.394 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:13.394 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:13.394 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:13.394 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:13.394 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:13.394 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:13.394 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:13.394 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:25:13.394 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:25:13.394 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:25:13.394 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:25:13.394 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:25:13.394 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:25:13.394 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:13.394 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:13.394 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:13.394 16:33:13 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:13.394 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:13.394 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.394 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.394 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:13.394 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:13.394 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:13.394 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:13.394 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:13.394 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:13.395 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.395 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.395 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:13.395 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:25:13.395 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:25:13.395 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:25:13.395 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:13.395 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.395 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:13.395 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:13.395 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:13.395 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:13.395 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.395 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:13.395 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:13.395 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.395 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:13.395 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.395 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:13.395 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:13.395 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:13.395 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:13.395 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.395 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:13.395 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:13.395 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.395 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:13.395 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # is_hw=yes 00:25:13.395 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:25:13.395 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:25:13.395 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:25:13.395 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:13.395 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:13.395 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:13.395 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:13.395 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:13.395 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:13.395 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:13.395 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:13.395 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:13.395 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:13.395 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:13.395 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:13.395 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:13.395 16:33:13 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:13.395 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:13.653 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:13.653 16:33:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:13.653 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:13.653 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:13.653 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:13.653 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:13.653 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:13.653 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:13.653 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:13.653 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:25:13.653 00:25:13.653 --- 10.0.0.2 ping statistics --- 00:25:13.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.653 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:25:13.653 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:13.653 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:13.653 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:25:13.653 00:25:13.653 --- 10.0.0.1 ping statistics --- 00:25:13.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.653 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:25:13.653 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:13.653 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # return 0 00:25:13.653 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:13.653 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:13.653 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:25:13.653 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:25:13.653 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:13.653 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:25:13.653 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:25:13.653 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3209062 00:25:13.653 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:13.653 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:13.653 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3209062 00:25:13.653 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' 
-z 3209062 ']' 00:25:13.653 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.653 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:13.653 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:13.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:13.653 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:13.653 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:15.028 16:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:15.028 16:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:25:15.028 16:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:15.028 16:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.028 16:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:15.028 16:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.028 16:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:25:15.028 16:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.028 16:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:15.028 Malloc0 00:25:15.028 16:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.028 16:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:15.028 16:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.028 16:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:15.028 16:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.028 16:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:15.028 16:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.028 16:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:15.028 16:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.028 16:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:15.028 16:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.028 16:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:15.028 16:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.028 16:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:25:15.028 16:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:47.099 Fuzzing completed. 
Shutting down the fuzz application 00:25:47.099 00:25:47.099 Dumping successful admin opcodes: 00:25:47.099 8, 9, 10, 24, 00:25:47.099 Dumping successful io opcodes: 00:25:47.099 0, 9, 00:25:47.099 NS: 0x200003aefec0 I/O qp, Total commands completed: 328169, total successful commands: 1944, random_seed: 3567440512 00:25:47.099 NS: 0x200003aefec0 admin qp, Total commands completed: 41344, total successful commands: 337, random_seed: 1035520384 00:25:47.099 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:48.034 Fuzzing completed. Shutting down the fuzz application 00:25:48.034 00:25:48.034 Dumping successful admin opcodes: 00:25:48.034 24, 00:25:48.034 Dumping successful io opcodes: 00:25:48.034 00:25:48.034 NS: 0x200003aefec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1472907213 00:25:48.034 NS: 0x200003aefec0 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1473091493 00:25:48.034 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:48.034 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.034 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:48.034 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.034 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:48.034 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:48.034 16:33:48 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:48.034 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:25:48.034 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:48.034 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:25:48.034 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:48.034 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:48.034 rmmod nvme_tcp 00:25:48.034 rmmod nvme_fabrics 00:25:48.034 rmmod nvme_keyring 00:25:48.034 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:48.034 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:25:48.034 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:25:48.034 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@513 -- # '[' -n 3209062 ']' 00:25:48.034 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@514 -- # killprocess 3209062 00:25:48.034 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 3209062 ']' 00:25:48.034 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 3209062 00:25:48.034 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:25:48.034 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:48.034 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3209062 00:25:48.034 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:48.034 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = 
sudo ']' 00:25:48.035 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3209062' 00:25:48.035 killing process with pid 3209062 00:25:48.035 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 3209062 00:25:48.035 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 3209062 00:25:49.942 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:49.942 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:25:49.942 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:25:49.942 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:25:49.942 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # iptables-save 00:25:49.942 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:25:49.942 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # iptables-restore 00:25:49.942 16:33:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:49.942 16:33:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:49.942 16:33:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:49.942 16:33:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:49.942 16:33:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:51.848 00:25:51.848 real 0m40.326s 00:25:51.848 user 0m58.367s 00:25:51.848 sys 0m13.109s 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:51.848 ************************************ 00:25:51.848 END TEST nvmf_fuzz 00:25:51.848 ************************************ 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:51.848 ************************************ 00:25:51.848 START TEST nvmf_multiconnection 00:25:51.848 ************************************ 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:51.848 * Looking for test storage... 
00:25:51.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lcov --version 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:25:51.848 16:33:52 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:51.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:51.848 --rc genhtml_branch_coverage=1 00:25:51.848 --rc genhtml_function_coverage=1 00:25:51.848 --rc genhtml_legend=1 00:25:51.848 --rc geninfo_all_blocks=1 00:25:51.848 --rc geninfo_unexecuted_blocks=1 00:25:51.848 00:25:51.848 ' 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:51.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:51.848 --rc genhtml_branch_coverage=1 00:25:51.848 --rc genhtml_function_coverage=1 00:25:51.848 --rc genhtml_legend=1 00:25:51.848 --rc geninfo_all_blocks=1 00:25:51.848 --rc geninfo_unexecuted_blocks=1 00:25:51.848 00:25:51.848 ' 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:51.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:51.848 --rc genhtml_branch_coverage=1 00:25:51.848 --rc genhtml_function_coverage=1 00:25:51.848 --rc genhtml_legend=1 00:25:51.848 --rc geninfo_all_blocks=1 00:25:51.848 --rc geninfo_unexecuted_blocks=1 00:25:51.848 00:25:51.848 ' 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:51.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:51.848 --rc genhtml_branch_coverage=1 00:25:51.848 --rc genhtml_function_coverage=1 00:25:51.848 --rc genhtml_legend=1 00:25:51.848 --rc geninfo_all_blocks=1 00:25:51.848 --rc geninfo_unexecuted_blocks=1 00:25:51.848 00:25:51.848 ' 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@7 -- # uname -s 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:51.848 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:51.849 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:51.849 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:51.849 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:51.849 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:51.849 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:51.849 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:51.849 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:51.849 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:51.849 16:33:52 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:51.849 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:51.849 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:25:51.849 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:51.849 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:51.849 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:51.849 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.849 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.849 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.849 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:51.849 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.849 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:25:51.849 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:51.849 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:51.849 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:51.849 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:51.849 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:51.849 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:51.849 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:51.849 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:51.849 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:51.849 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:51.849 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:25:51.849 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:51.849 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:51.849 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:51.849 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:25:51.849 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:51.849 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:51.849 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:51.849 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:51.849 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:51.849 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:51.849 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:51.849 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:25:51.849 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:25:51.849 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:25:51.849 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:53.750 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:25:53.750 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:25:53.750 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:53.750 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:53.750 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:53.750 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:53.750 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:53.750 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:53.751 16:33:54 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:53.751 16:33:54 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:53.751 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:53.751 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:53.751 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # (( 1 == 0 )) 
00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:53.751 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # is_hw=yes 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:53.751 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:54.009 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:54.009 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:54.009 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:54.009 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:54.009 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:54.009 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:54.009 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m 
comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:54.009 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:54.009 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:54.009 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:25:54.009 00:25:54.009 --- 10.0.0.2 ping statistics --- 00:25:54.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.009 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:25:54.009 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:54.009 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:54.009 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:25:54.009 00:25:54.009 --- 10.0.0.1 ping statistics --- 00:25:54.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.009 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:25:54.009 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:54.009 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # return 0 00:25:54.009 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:54.009 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:54.009 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:25:54.009 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:25:54.009 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:54.009 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:25:54.009 16:33:54 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:25:54.009 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:54.009 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:54.009 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:54.009 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.009 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@505 -- # nvmfpid=3215181 00:25:54.009 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:54.009 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@506 -- # waitforlisten 3215181 00:25:54.009 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 3215181 ']' 00:25:54.009 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:54.009 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:54.009 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:54.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:54.009 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:54.009 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.009 [2024-09-29 16:33:54.563806] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:25:54.009 [2024-09-29 16:33:54.563957] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:54.268 [2024-09-29 16:33:54.703810] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:54.526 [2024-09-29 16:33:54.970320] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:54.526 [2024-09-29 16:33:54.970408] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:54.526 [2024-09-29 16:33:54.970434] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:54.526 [2024-09-29 16:33:54.970458] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:54.526 [2024-09-29 16:33:54.970477] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:54.526 [2024-09-29 16:33:54.970604] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:54.526 [2024-09-29 16:33:54.970657] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:25:54.526 [2024-09-29 16:33:54.970717] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:54.526 [2024-09-29 16:33:54.970723] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:25:55.092 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:55.092 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:25:55.092 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:55.092 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:55.092 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.092 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:55.092 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:55.092 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.092 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.092 [2024-09-29 16:33:55.592359] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:55.092 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.092 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:55.092 16:33:55 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:55.092 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:55.092 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.092 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.351 Malloc1 00:25:55.351 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.351 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:55.351 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.351 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.351 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.351 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:55.351 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.351 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.351 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.351 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:55.351 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.351 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.351 [2024-09-29 16:33:55.702085] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:55.351 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.351 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:55.351 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:55.351 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.351 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.351 Malloc2 00:25:55.351 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.351 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:55.351 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.351 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.351 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.351 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:55.351 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.351 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:55.351 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.351 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:55.351 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.351 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.351 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.351 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:55.351 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:55.351 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.351 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.351 Malloc3 00:25:55.351 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.351 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:55.351 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.351 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.351 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.351 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:55.351 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.351 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.351 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.352 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:55.352 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.352 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.352 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.352 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:55.352 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:55.352 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.352 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.610 Malloc4 00:25:55.610 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.610 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:55.610 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.610 
16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.610 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.610 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:55.610 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.610 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.610 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.610 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:55.610 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.610 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.610 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.610 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:55.610 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:55.610 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.610 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.610 Malloc5 00:25:55.610 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.610 16:33:56 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:55.610 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.610 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.610 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.610 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:55.610 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.610 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.610 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.610 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:55.610 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.610 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.610 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.610 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:55.610 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:55.610 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:55.610 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.869 Malloc6 00:25:55.869 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.869 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:55.869 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.869 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.869 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.869 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:55.869 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.869 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.869 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.869 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:55.869 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.869 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.869 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.869 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # 
for i in $(seq 1 $NVMF_SUBSYS) 00:25:55.869 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:55.869 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.869 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.869 Malloc7 00:25:55.869 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.869 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:55.869 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.869 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.869 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.869 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:55.869 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.869 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.869 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.869 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:25:55.869 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.869 16:33:56 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.870 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.870 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:55.870 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:55.870 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.870 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.870 Malloc8 00:25:55.870 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.870 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:55.870 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.870 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.870 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.870 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:55.870 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.870 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.870 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.870 16:33:56 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:55.870 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.870 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.870 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.870 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:55.870 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:55.870 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.870 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:56.128 Malloc9 00:25:56.128 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.128 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:56.128 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.128 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:56.128 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.128 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:56.128 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.128 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:56.128 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.128 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:56.128 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.128 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:56.128 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.128 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:56.128 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:56.128 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.128 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:56.128 Malloc10 00:25:56.128 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.128 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:56.128 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.128 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:56.128 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.128 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:56.128 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.128 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:56.128 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.128 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:56.128 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.128 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:56.128 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.128 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:56.128 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:56.128 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.128 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:56.128 Malloc11 00:25:56.128 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.128 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:56.128 
16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.128 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:56.128 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.128 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:56.128 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.129 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:56.385 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.385 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:25:56.385 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.385 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:56.385 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.386 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:56.386 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:56.386 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
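The setup phase above repeats one identical RPC sequence per subsystem, for Malloc1 through Malloc11: create a malloc bdev, create the subsystem, attach the namespace, add a TCP listener. A dry-run sketch of that loop follows; the `rpc.py` entry point and the `NVMF_SUBSYS=11` count are assumptions inferred from the log, not a verbatim copy of multiconnection.sh:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the per-subsystem setup phase seen in the log above.
# rpc.py as the RPC client and NVMF_SUBSYS=11 are assumptions; the echoed
# commands mirror the rpc_cmd invocations in the transcript.
NVMF_SUBSYS=11

generate_rpc_commands() {
    for i in $(seq 1 "$NVMF_SUBSYS"); do
        # 64 MB malloc bdev with 512-byte blocks, as in the log
        echo "rpc.py bdev_malloc_create 64 512 -b Malloc$i"
        echo "rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
        echo "rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
        echo "rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
    done
}

generate_rpc_commands
```

Eleven subsystems times four RPCs gives 44 setup calls, matching the repetition visible in the transcript.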
00:25:56.952 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:56.952 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:56.952 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:56.952 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:56.952 16:33:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:58.854 16:33:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:58.854 16:33:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:58.854 16:33:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:25:58.854 16:33:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:58.854 16:33:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:58.854 16:33:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:58.854 16:33:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.854 16:33:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:59.788 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:59.788 16:34:00 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:59.788 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:59.788 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:59.788 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:01.683 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:01.683 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:01.683 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:26:01.683 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:01.683 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:01.683 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:01.683 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:01.683 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:26:02.248 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:26:02.248 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:02.248 16:34:02 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:02.248 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:02.248 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:04.840 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:04.840 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:04.840 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:26:04.840 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:04.840 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:04.840 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:04.840 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:04.840 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:26:05.097 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:26:05.097 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:05.097 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:05.097 
16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:05.097 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:07.624 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:07.624 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:07.624 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:26:07.624 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:07.624 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:07.624 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:07.624 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:07.624 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:26:07.883 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:26:07.883 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:07.883 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:07.883 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:07.883 16:34:08 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:10.412 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:10.412 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:10.412 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:26:10.412 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:10.412 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:10.412 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:10.412 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:10.412 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:26:10.670 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:26:10.670 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:10.670 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:10.670 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:10.670 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:12.570 16:34:13 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:12.570 16:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:12.570 16:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:26:12.828 16:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:12.828 16:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:12.828 16:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:12.828 16:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:12.828 16:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:26:13.395 16:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:26:13.395 16:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:13.395 16:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:13.395 16:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:13.395 16:34:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:15.925 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:15.925 16:34:15 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:15.925 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:26:15.925 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:15.925 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:15.925 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:15.925 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:15.925 16:34:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:26:16.492 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:26:16.492 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:16.492 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:16.492 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:16.492 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:18.392 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:18.392 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:18.392 16:34:18 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:26:18.392 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:18.392 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:18.392 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:18.392 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:18.392 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:26:19.324 16:34:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:26:19.324 16:34:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:19.324 16:34:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:19.324 16:34:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:19.324 16:34:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:21.223 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:21.223 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:21.223 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:26:21.223 16:34:21 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:21.223 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:21.223 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:21.223 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:21.223 16:34:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:26:22.157 16:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:26:22.157 16:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:22.157 16:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:22.157 16:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:22.157 16:34:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:24.683 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:24.683 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:24.683 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:26:24.683 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:24.683 16:34:24 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:24.683 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:24.683 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:24.683 16:34:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:26:25.249 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:26:25.249 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:25.249 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:25.249 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:25.249 16:34:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:27.147 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:27.147 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:27.147 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:26:27.147 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:27.147 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:27.147 
16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:27.147 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:26:27.147 [global] 00:26:27.147 thread=1 00:26:27.147 invalidate=1 00:26:27.147 rw=read 00:26:27.147 time_based=1 00:26:27.147 runtime=10 00:26:27.147 ioengine=libaio 00:26:27.147 direct=1 00:26:27.147 bs=262144 00:26:27.147 iodepth=64 00:26:27.147 norandommap=1 00:26:27.147 numjobs=1 00:26:27.147 00:26:27.147 [job0] 00:26:27.147 filename=/dev/nvme0n1 00:26:27.147 [job1] 00:26:27.147 filename=/dev/nvme10n1 00:26:27.147 [job2] 00:26:27.147 filename=/dev/nvme1n1 00:26:27.147 [job3] 00:26:27.147 filename=/dev/nvme2n1 00:26:27.147 [job4] 00:26:27.147 filename=/dev/nvme3n1 00:26:27.147 [job5] 00:26:27.147 filename=/dev/nvme4n1 00:26:27.147 [job6] 00:26:27.147 filename=/dev/nvme5n1 00:26:27.147 [job7] 00:26:27.147 filename=/dev/nvme6n1 00:26:27.147 [job8] 00:26:27.147 filename=/dev/nvme7n1 00:26:27.147 [job9] 00:26:27.147 filename=/dev/nvme8n1 00:26:27.147 [job10] 00:26:27.147 filename=/dev/nvme9n1 00:26:27.147 Could not set queue depth (nvme0n1) 00:26:27.147 Could not set queue depth (nvme10n1) 00:26:27.147 Could not set queue depth (nvme1n1) 00:26:27.147 Could not set queue depth (nvme2n1) 00:26:27.147 Could not set queue depth (nvme3n1) 00:26:27.147 Could not set queue depth (nvme4n1) 00:26:27.147 Could not set queue depth (nvme5n1) 00:26:27.147 Could not set queue depth (nvme6n1) 00:26:27.147 Could not set queue depth (nvme7n1) 00:26:27.147 Could not set queue depth (nvme8n1) 00:26:27.147 Could not set queue depth (nvme9n1) 00:26:27.405 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:27.405 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, 
ioengine=libaio, iodepth=64 00:26:27.405 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:27.405 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:27.405 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:27.405 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:27.405 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:27.405 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:27.405 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:27.405 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:27.405 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:27.405 fio-3.35 00:26:27.405 Starting 11 threads 00:26:39.607 00:26:39.607 job0: (groupid=0, jobs=1): err= 0: pid=3219572: Sun Sep 29 16:34:38 2024 00:26:39.607 read: IOPS=197, BW=49.3MiB/s (51.7MB/s)(502MiB/10179msec) 00:26:39.607 slat (usec): min=10, max=704814, avg=4244.44, stdev=26051.13 00:26:39.607 clat (msec): min=33, max=1314, avg=319.95, stdev=231.48 00:26:39.607 lat (msec): min=33, max=1377, avg=324.19, stdev=234.28 00:26:39.607 clat percentiles (msec): 00:26:39.607 | 1.00th=[ 83], 5.00th=[ 129], 10.00th=[ 148], 20.00th=[ 165], 00:26:39.607 | 30.00th=[ 182], 40.00th=[ 199], 50.00th=[ 218], 60.00th=[ 251], 00:26:39.607 | 70.00th=[ 359], 80.00th=[ 477], 90.00th=[ 667], 95.00th=[ 802], 00:26:39.607 | 99.00th=[ 1250], 99.50th=[ 1318], 99.90th=[ 1318], 99.95th=[ 1318], 00:26:39.607 | 99.99th=[ 1318] 00:26:39.607 bw ( KiB/s): min=12312, 
max=99840, per=7.65%, avg=49760.25, stdev=29136.95, samples=20 00:26:39.607 iops : min= 48, max= 390, avg=194.35, stdev=113.80, samples=20 00:26:39.607 lat (msec) : 50=0.50%, 100=1.49%, 250=57.47%, 500=23.11%, 750=11.70% 00:26:39.608 lat (msec) : 1000=3.74%, 2000=1.99% 00:26:39.608 cpu : usr=0.10%, sys=0.53%, ctx=247, majf=0, minf=4097 00:26:39.608 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:26:39.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.608 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:39.608 issued rwts: total=2008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:39.608 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:39.608 job1: (groupid=0, jobs=1): err= 0: pid=3219573: Sun Sep 29 16:34:38 2024 00:26:39.608 read: IOPS=189, BW=47.3MiB/s (49.6MB/s)(482MiB/10188msec) 00:26:39.608 slat (usec): min=8, max=187064, avg=4788.89, stdev=18050.90 00:26:39.608 clat (msec): min=11, max=872, avg=333.12, stdev=198.04 00:26:39.608 lat (msec): min=11, max=902, avg=337.91, stdev=201.23 00:26:39.608 clat percentiles (msec): 00:26:39.608 | 1.00th=[ 38], 5.00th=[ 70], 10.00th=[ 116], 20.00th=[ 165], 00:26:39.608 | 30.00th=[ 209], 40.00th=[ 236], 50.00th=[ 275], 60.00th=[ 317], 00:26:39.608 | 70.00th=[ 439], 80.00th=[ 514], 90.00th=[ 667], 95.00th=[ 701], 00:26:39.608 | 99.00th=[ 793], 99.50th=[ 793], 99.90th=[ 869], 99.95th=[ 869], 00:26:39.608 | 99.99th=[ 869] 00:26:39.608 bw ( KiB/s): min=20480, max=105261, per=7.33%, avg=47703.20, stdev=28193.84, samples=20 00:26:39.608 iops : min= 80, max= 411, avg=186.30, stdev=110.11, samples=20 00:26:39.608 lat (msec) : 20=0.21%, 50=3.94%, 100=3.32%, 250=36.10%, 500=34.85% 00:26:39.608 lat (msec) : 750=18.36%, 1000=3.22% 00:26:39.608 cpu : usr=0.08%, sys=0.69%, ctx=287, majf=0, minf=3721 00:26:39.608 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:26:39.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.608 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:39.608 issued rwts: total=1928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:39.608 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:39.608 job2: (groupid=0, jobs=1): err= 0: pid=3219574: Sun Sep 29 16:34:38 2024 00:26:39.608 read: IOPS=243, BW=60.8MiB/s (63.8MB/s)(619MiB/10180msec) 00:26:39.608 slat (usec): min=12, max=298888, avg=3617.64, stdev=18629.68 00:26:39.608 clat (msec): min=33, max=856, avg=259.32, stdev=229.74 00:26:39.608 lat (msec): min=33, max=868, avg=262.94, stdev=232.87 00:26:39.608 clat percentiles (msec): 00:26:39.608 | 1.00th=[ 36], 5.00th=[ 39], 10.00th=[ 40], 20.00th=[ 43], 00:26:39.608 | 30.00th=[ 45], 40.00th=[ 50], 50.00th=[ 163], 60.00th=[ 338], 00:26:39.608 | 70.00th=[ 426], 80.00th=[ 498], 90.00th=[ 567], 95.00th=[ 659], 00:26:39.608 | 99.00th=[ 785], 99.50th=[ 785], 99.90th=[ 810], 99.95th=[ 835], 00:26:39.608 | 99.99th=[ 860] 00:26:39.608 bw ( KiB/s): min=20992, max=379656, per=9.48%, avg=61712.85, stdev=88068.49, samples=20 00:26:39.608 iops : min= 82, max= 1483, avg=241.05, stdev=344.02, samples=20 00:26:39.608 lat (msec) : 50=40.71%, 100=4.16%, 250=9.05%, 500=26.70%, 750=17.33% 00:26:39.608 lat (msec) : 1000=2.06% 00:26:39.608 cpu : usr=0.18%, sys=0.70%, ctx=333, majf=0, minf=4097 00:26:39.608 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:26:39.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.608 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:39.608 issued rwts: total=2476,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:39.608 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:39.608 job3: (groupid=0, jobs=1): err= 0: pid=3219575: Sun Sep 29 16:34:38 2024 00:26:39.608 read: IOPS=251, BW=63.0MiB/s (66.1MB/s)(638MiB/10127msec) 00:26:39.608 slat (usec): min=10, max=486440, avg=3151.10, 
stdev=21350.63 00:26:39.608 clat (msec): min=24, max=1137, avg=250.61, stdev=265.99 00:26:39.608 lat (msec): min=25, max=1313, avg=253.76, stdev=268.94 00:26:39.608 clat percentiles (msec): 00:26:39.608 | 1.00th=[ 31], 5.00th=[ 39], 10.00th=[ 42], 20.00th=[ 46], 00:26:39.608 | 30.00th=[ 49], 40.00th=[ 51], 50.00th=[ 59], 60.00th=[ 251], 00:26:39.608 | 70.00th=[ 334], 80.00th=[ 451], 90.00th=[ 676], 95.00th=[ 793], 00:26:39.608 | 99.00th=[ 1045], 99.50th=[ 1070], 99.90th=[ 1099], 99.95th=[ 1133], 00:26:39.608 | 99.99th=[ 1133] 00:26:39.608 bw ( KiB/s): min= 5632, max=333312, per=9.79%, avg=63716.55, stdev=82491.70, samples=20 00:26:39.608 iops : min= 22, max= 1302, avg=248.80, stdev=322.28, samples=20 00:26:39.608 lat (msec) : 50=38.21%, 100=12.74%, 250=8.78%, 500=23.12%, 750=9.25% 00:26:39.608 lat (msec) : 1000=6.66%, 2000=1.25% 00:26:39.608 cpu : usr=0.09%, sys=0.78%, ctx=360, majf=0, minf=4097 00:26:39.608 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:26:39.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.608 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:39.608 issued rwts: total=2552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:39.608 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:39.608 job4: (groupid=0, jobs=1): err= 0: pid=3219576: Sun Sep 29 16:34:38 2024 00:26:39.608 read: IOPS=212, BW=53.1MiB/s (55.7MB/s)(536MiB/10094msec) 00:26:39.608 slat (usec): min=8, max=642586, avg=3465.80, stdev=25762.53 00:26:39.608 clat (msec): min=2, max=1524, avg=297.78, stdev=232.18 00:26:39.608 lat (msec): min=2, max=1524, avg=301.24, stdev=235.73 00:26:39.608 clat percentiles (msec): 00:26:39.608 | 1.00th=[ 17], 5.00th=[ 66], 10.00th=[ 84], 20.00th=[ 108], 00:26:39.608 | 30.00th=[ 144], 40.00th=[ 184], 50.00th=[ 224], 60.00th=[ 279], 00:26:39.608 | 70.00th=[ 363], 80.00th=[ 464], 90.00th=[ 625], 95.00th=[ 768], 00:26:39.608 | 99.00th=[ 1070], 99.50th=[ 1083], 
99.90th=[ 1083], 99.95th=[ 1519], 00:26:39.608 | 99.99th=[ 1519] 00:26:39.608 bw ( KiB/s): min= 2560, max=130560, per=8.18%, avg=53221.55, stdev=33882.52, samples=20 00:26:39.608 iops : min= 10, max= 510, avg=207.80, stdev=132.41, samples=20 00:26:39.608 lat (msec) : 4=0.09%, 10=0.28%, 20=0.84%, 50=1.49%, 100=15.77% 00:26:39.608 lat (msec) : 250=38.64%, 500=26.13%, 750=10.22%, 1000=4.57%, 2000=1.96% 00:26:39.608 cpu : usr=0.09%, sys=0.55%, ctx=308, majf=0, minf=4097 00:26:39.608 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:26:39.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.608 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:39.608 issued rwts: total=2143,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:39.608 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:39.608 job5: (groupid=0, jobs=1): err= 0: pid=3219577: Sun Sep 29 16:34:38 2024 00:26:39.608 read: IOPS=201, BW=50.4MiB/s (52.9MB/s)(512MiB/10159msec) 00:26:39.608 slat (usec): min=9, max=239886, avg=2438.97, stdev=16088.29 00:26:39.608 clat (usec): min=981, max=1173.8k, avg=314641.58, stdev=268752.95 00:26:39.608 lat (usec): min=1011, max=1173.9k, avg=317080.56, stdev=271120.25 00:26:39.608 clat percentiles (usec): 00:26:39.608 | 1.00th=[ 1663], 5.00th=[ 13042], 10.00th=[ 17957], 00:26:39.608 | 20.00th=[ 51119], 30.00th=[ 81265], 40.00th=[ 200279], 00:26:39.608 | 50.00th=[ 283116], 60.00th=[ 333448], 70.00th=[ 408945], 00:26:39.608 | 80.00th=[ 566232], 90.00th=[ 708838], 95.00th=[ 801113], 00:26:39.608 | 99.00th=[1082131], 99.50th=[1098908], 99.90th=[1166017], 00:26:39.608 | 99.95th=[1166017], 99.99th=[1166017] 00:26:39.608 bw ( KiB/s): min=18432, max=253440, per=7.81%, avg=50813.20, stdev=57187.10, samples=20 00:26:39.608 iops : min= 72, max= 990, avg=198.40, stdev=223.43, samples=20 00:26:39.608 lat (usec) : 1000=0.05% 00:26:39.608 lat (msec) : 2=1.71%, 4=0.93%, 10=1.46%, 20=6.69%, 50=8.98% 
00:26:39.608 lat (msec) : 100=12.05%, 250=12.69%, 500=31.87%, 750=16.11%, 1000=6.15% 00:26:39.608 lat (msec) : 2000=1.32% 00:26:39.608 cpu : usr=0.13%, sys=0.72%, ctx=796, majf=0, minf=4097 00:26:39.608 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:26:39.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.608 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:39.608 issued rwts: total=2049,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:39.608 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:39.608 job6: (groupid=0, jobs=1): err= 0: pid=3219578: Sun Sep 29 16:34:38 2024 00:26:39.608 read: IOPS=452, BW=113MiB/s (119MB/s)(1143MiB/10098msec) 00:26:39.608 slat (usec): min=12, max=720414, avg=1656.34, stdev=13912.46 00:26:39.608 clat (msec): min=24, max=1465, avg=139.58, stdev=178.95 00:26:39.608 lat (msec): min=24, max=1465, avg=141.23, stdev=180.54 00:26:39.608 clat percentiles (msec): 00:26:39.608 | 1.00th=[ 32], 5.00th=[ 38], 10.00th=[ 43], 20.00th=[ 47], 00:26:39.608 | 30.00th=[ 50], 40.00th=[ 52], 50.00th=[ 69], 60.00th=[ 97], 00:26:39.608 | 70.00th=[ 126], 80.00th=[ 203], 90.00th=[ 288], 95.00th=[ 485], 00:26:39.608 | 99.00th=[ 944], 99.50th=[ 1401], 99.90th=[ 1435], 99.95th=[ 1469], 00:26:39.608 | 99.99th=[ 1469] 00:26:39.608 bw ( KiB/s): min= 4598, max=348672, per=17.73%, avg=115427.25, stdev=106925.23, samples=20 00:26:39.608 iops : min= 17, max= 1362, avg=450.80, stdev=417.76, samples=20 00:26:39.608 lat (msec) : 50=35.41%, 100=25.66%, 250=26.71%, 500=8.18%, 750=2.17% 00:26:39.609 lat (msec) : 1000=1.16%, 2000=0.72% 00:26:39.609 cpu : usr=0.35%, sys=1.34%, ctx=563, majf=0, minf=4097 00:26:39.609 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:26:39.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:39.609 issued 
rwts: total=4572,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:39.609 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:39.609 job7: (groupid=0, jobs=1): err= 0: pid=3219579: Sun Sep 29 16:34:38 2024 00:26:39.609 read: IOPS=143, BW=35.8MiB/s (37.5MB/s)(364MiB/10163msec) 00:26:39.609 slat (usec): min=9, max=271713, avg=5687.52, stdev=25042.94 00:26:39.609 clat (msec): min=111, max=1050, avg=441.04, stdev=178.87 00:26:39.609 lat (msec): min=112, max=1247, avg=446.73, stdev=181.95 00:26:39.609 clat percentiles (msec): 00:26:39.609 | 1.00th=[ 113], 5.00th=[ 142], 10.00th=[ 188], 20.00th=[ 292], 00:26:39.609 | 30.00th=[ 342], 40.00th=[ 409], 50.00th=[ 430], 60.00th=[ 472], 00:26:39.609 | 70.00th=[ 518], 80.00th=[ 592], 90.00th=[ 676], 95.00th=[ 760], 00:26:39.609 | 99.00th=[ 844], 99.50th=[ 1053], 99.90th=[ 1053], 99.95th=[ 1053], 00:26:39.609 | 99.99th=[ 1053] 00:26:39.609 bw ( KiB/s): min=11241, max=69632, per=5.47%, avg=35612.15, stdev=14110.10, samples=20 00:26:39.609 iops : min= 43, max= 272, avg=139.05, stdev=55.20, samples=20 00:26:39.609 lat (msec) : 250=14.71%, 500=52.30%, 750=27.84%, 1000=4.60%, 2000=0.55% 00:26:39.609 cpu : usr=0.06%, sys=0.48%, ctx=151, majf=0, minf=4097 00:26:39.609 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.2%, >=64=95.7% 00:26:39.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.609 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:39.609 issued rwts: total=1455,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:39.609 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:39.609 job8: (groupid=0, jobs=1): err= 0: pid=3219580: Sun Sep 29 16:34:38 2024 00:26:39.609 read: IOPS=234, BW=58.6MiB/s (61.4MB/s)(595MiB/10160msec) 00:26:39.609 slat (usec): min=12, max=337842, avg=3834.66, stdev=18953.42 00:26:39.609 clat (msec): min=10, max=780, avg=269.02, stdev=151.71 00:26:39.609 lat (msec): min=10, max=966, avg=272.86, stdev=154.31 00:26:39.609 clat 
percentiles (msec): 00:26:39.609 | 1.00th=[ 22], 5.00th=[ 65], 10.00th=[ 81], 20.00th=[ 129], 00:26:39.609 | 30.00th=[ 161], 40.00th=[ 209], 50.00th=[ 259], 60.00th=[ 296], 00:26:39.609 | 70.00th=[ 363], 80.00th=[ 397], 90.00th=[ 468], 95.00th=[ 523], 00:26:39.609 | 99.00th=[ 684], 99.50th=[ 718], 99.90th=[ 785], 99.95th=[ 785], 00:26:39.609 | 99.99th=[ 785] 00:26:39.609 bw ( KiB/s): min=17920, max=202752, per=9.11%, avg=59299.05, stdev=39954.46, samples=20 00:26:39.609 iops : min= 70, max= 792, avg=231.60, stdev=156.06, samples=20 00:26:39.609 lat (msec) : 20=0.46%, 50=1.89%, 100=12.26%, 250=34.02%, 500=44.69% 00:26:39.609 lat (msec) : 750=6.55%, 1000=0.13% 00:26:39.609 cpu : usr=0.12%, sys=0.80%, ctx=220, majf=0, minf=4097 00:26:39.609 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:26:39.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:39.609 issued rwts: total=2381,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:39.609 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:39.609 job9: (groupid=0, jobs=1): err= 0: pid=3219581: Sun Sep 29 16:34:38 2024 00:26:39.609 read: IOPS=296, BW=74.2MiB/s (77.8MB/s)(756MiB/10187msec) 00:26:39.609 slat (usec): min=11, max=179343, avg=2782.88, stdev=12847.87 00:26:39.609 clat (msec): min=8, max=1077, avg=212.76, stdev=203.35 00:26:39.609 lat (msec): min=8, max=1077, avg=215.54, stdev=205.82 00:26:39.609 clat percentiles (msec): 00:26:39.609 | 1.00th=[ 36], 5.00th=[ 44], 10.00th=[ 49], 20.00th=[ 54], 00:26:39.609 | 30.00th=[ 55], 40.00th=[ 62], 50.00th=[ 144], 60.00th=[ 180], 00:26:39.609 | 70.00th=[ 253], 80.00th=[ 380], 90.00th=[ 531], 95.00th=[ 642], 00:26:39.609 | 99.00th=[ 785], 99.50th=[ 860], 99.90th=[ 961], 99.95th=[ 978], 00:26:39.609 | 99.99th=[ 1083] 00:26:39.609 bw ( KiB/s): min=19456, max=304031, per=11.63%, avg=75683.50, stdev=80227.86, samples=20 00:26:39.609 
iops : min= 76, max= 1187, avg=295.60, stdev=313.29, samples=20 00:26:39.609 lat (msec) : 10=0.10%, 20=0.07%, 50=10.36%, 100=33.45%, 250=25.22% 00:26:39.609 lat (msec) : 500=16.64%, 750=12.31%, 1000=1.82%, 2000=0.03% 00:26:39.609 cpu : usr=0.17%, sys=1.01%, ctx=364, majf=0, minf=4098 00:26:39.609 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:39.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:39.609 issued rwts: total=3022,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:39.609 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:39.609 job10: (groupid=0, jobs=1): err= 0: pid=3219582: Sun Sep 29 16:34:38 2024 00:26:39.609 read: IOPS=129, BW=32.4MiB/s (34.0MB/s)(329MiB/10166msec) 00:26:39.609 slat (usec): min=13, max=615059, avg=7307.37, stdev=34745.36 00:26:39.609 clat (msec): min=42, max=1341, avg=486.23, stdev=209.83 00:26:39.609 lat (msec): min=42, max=1341, avg=493.54, stdev=213.31 00:26:39.609 clat percentiles (msec): 00:26:39.609 | 1.00th=[ 44], 5.00th=[ 157], 10.00th=[ 188], 20.00th=[ 342], 00:26:39.609 | 30.00th=[ 388], 40.00th=[ 426], 50.00th=[ 472], 60.00th=[ 510], 00:26:39.609 | 70.00th=[ 535], 80.00th=[ 642], 90.00th=[ 802], 95.00th=[ 894], 00:26:39.609 | 99.00th=[ 995], 99.50th=[ 1003], 99.90th=[ 1334], 99.95th=[ 1334], 00:26:39.609 | 99.99th=[ 1334] 00:26:39.609 bw ( KiB/s): min= 5632, max=79360, per=4.93%, avg=32070.60, stdev=15959.06, samples=20 00:26:39.609 iops : min= 22, max= 310, avg=125.20, stdev=62.35, samples=20 00:26:39.609 lat (msec) : 50=1.37%, 250=11.16%, 500=44.80%, 750=28.78%, 1000=13.67% 00:26:39.609 lat (msec) : 2000=0.23% 00:26:39.609 cpu : usr=0.04%, sys=0.52%, ctx=132, majf=0, minf=4097 00:26:39.609 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.2% 00:26:39.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.609 complete 
: 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:39.609 issued rwts: total=1317,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:39.609 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:39.609
00:26:39.609 Run status group 0 (all jobs):
00:26:39.609 READ: bw=636MiB/s (667MB/s), 32.4MiB/s-113MiB/s (34.0MB/s-119MB/s), io=6476MiB (6790MB), run=10094-10188msec
00:26:39.609
00:26:39.609 Disk stats (read/write):
00:26:39.609 nvme0n1: ios=3860/0, merge=0/0, ticks=1206246/0, in_queue=1206246, util=97.34%
00:26:39.609 nvme10n1: ios=3708/0, merge=0/0, ticks=1204484/0, in_queue=1204484, util=97.57%
00:26:39.609 nvme1n1: ios=4818/0, merge=0/0, ticks=1217512/0, in_queue=1217512, util=97.81%
00:26:39.609 nvme2n1: ios=4959/0, merge=0/0, ticks=1229693/0, in_queue=1229693, util=97.96%
00:26:39.609 nvme3n1: ios=4106/0, merge=0/0, ticks=1234416/0, in_queue=1234416, util=98.02%
00:26:39.609 nvme4n1: ios=3969/0, merge=0/0, ticks=1238317/0, in_queue=1238317, util=98.32%
00:26:39.609 nvme5n1: ios=8990/0, merge=0/0, ticks=1238630/0, in_queue=1238630, util=98.47%
00:26:39.609 nvme6n1: ios=2783/0, merge=0/0, ticks=1225612/0, in_queue=1225612, util=98.55%
00:26:39.609 nvme7n1: ios=4591/0, merge=0/0, ticks=1227167/0, in_queue=1227167, util=98.94%
00:26:39.609 nvme8n1: ios=5916/0, merge=0/0, ticks=1209498/0, in_queue=1209498, util=99.11%
00:26:39.609 nvme9n1: ios=2486/0, merge=0/0, ticks=1217838/0, in_queue=1217838, util=99.24%
00:26:39.609 16:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10
00:26:39.609 [global]
00:26:39.609 thread=1
00:26:39.609 invalidate=1
00:26:39.609 rw=randwrite
00:26:39.609 time_based=1
00:26:39.609 runtime=10
00:26:39.609 ioengine=libaio
00:26:39.609 direct=1
00:26:39.609 bs=262144
00:26:39.609 iodepth=64
00:26:39.609 norandommap=1
00:26:39.609 numjobs=1
00:26:39.609
00:26:39.609 [job0]
00:26:39.609 filename=/dev/nvme0n1
00:26:39.610 [job1]
00:26:39.610 filename=/dev/nvme10n1
00:26:39.610 [job2]
00:26:39.610 filename=/dev/nvme1n1
00:26:39.610 [job3]
00:26:39.610 filename=/dev/nvme2n1
00:26:39.610 [job4]
00:26:39.610 filename=/dev/nvme3n1
00:26:39.610 [job5]
00:26:39.610 filename=/dev/nvme4n1
00:26:39.610 [job6]
00:26:39.610 filename=/dev/nvme5n1
00:26:39.610 [job7]
00:26:39.610 filename=/dev/nvme6n1
00:26:39.610 [job8]
00:26:39.610 filename=/dev/nvme7n1
00:26:39.610 [job9]
00:26:39.610 filename=/dev/nvme8n1
00:26:39.610 [job10]
00:26:39.610 filename=/dev/nvme9n1
00:26:39.610 Could not set queue depth (nvme0n1)
00:26:39.610 Could not set queue depth (nvme10n1)
00:26:39.610 Could not set queue depth (nvme1n1)
00:26:39.610 Could not set queue depth (nvme2n1)
00:26:39.610 Could not set queue depth (nvme3n1)
00:26:39.610 Could not set queue depth (nvme4n1)
00:26:39.610 Could not set queue depth (nvme5n1)
00:26:39.610 Could not set queue depth (nvme6n1)
00:26:39.610 Could not set queue depth (nvme7n1)
00:26:39.610 Could not set queue depth (nvme8n1)
00:26:39.610 Could not set queue depth (nvme9n1)
00:26:39.610 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:39.610 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:39.610 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:39.610 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:39.610 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:39.610 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:39.610 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:39.610 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:39.610 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:39.610 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:39.610 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:39.610 fio-3.35
00:26:39.610 Starting 11 threads
00:26:49.658
00:26:49.658 job0: (groupid=0, jobs=1): err= 0: pid=3220157: Sun Sep 29 16:34:49 2024
00:26:49.658 write: IOPS=389, BW=97.5MiB/s (102MB/s)(986MiB/10112msec); 0 zone resets
00:26:49.658 slat (usec): min=14, max=35153, avg=1452.82, stdev=4866.96
00:26:49.658 clat (usec): min=920, max=548387, avg=162659.87, stdev=119140.15
00:26:49.658 lat (usec): min=944, max=555530, avg=164112.68, stdev=120385.12
00:26:49.658 clat percentiles (msec):
00:26:49.658 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 20], 20.00th=[ 63],
00:26:49.658 | 30.00th=[ 103], 40.00th=[ 122], 50.00th=[ 138], 60.00th=[ 153],
00:26:49.658 | 70.00th=[ 186], 80.00th=[ 255], 90.00th=[ 347], 95.00th=[ 422],
00:26:49.658 | 99.00th=[ 472], 99.50th=[ 498], 99.90th=[ 527], 99.95th=[ 542],
00:26:49.658 | 99.99th=[ 550]
00:26:49.658 bw ( KiB/s): min=34816, max=175616, per=13.19%, avg=99292.05, stdev=42528.39, samples=20
00:26:49.658 iops : min= 136, max= 686, avg=387.85, stdev=166.13, samples=20
00:26:49.658 lat (usec) : 1000=0.03%
00:26:49.658 lat (msec) : 2=0.86%, 4=2.16%, 10=3.81%, 20=3.42%, 50=7.53%
00:26:49.658 lat (msec) : 100=10.60%, 250=51.04%, 500=20.12%, 750=0.43%
00:26:49.658 cpu : usr=1.21%, sys=1.45%, ctx=2617, majf=0, minf=1
00:26:49.658 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4%
00:26:49.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:49.658 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:49.658 issued rwts: total=0,3942,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:49.658 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:49.658 job1: (groupid=0, jobs=1): err= 0: pid=3220169: Sun Sep 29 16:34:49 2024
00:26:49.658 write: IOPS=250, BW=62.6MiB/s (65.6MB/s)(637MiB/10170msec); 0 zone resets
00:26:49.658 slat (usec): min=18, max=60497, avg=2227.22, stdev=6875.90
00:26:49.658 clat (usec): min=912, max=812121, avg=253281.45, stdev=178450.89
00:26:49.658 lat (usec): min=947, max=820017, avg=255508.67, stdev=179923.47
00:26:49.658 clat percentiles (msec):
00:26:49.658 | 1.00th=[ 3], 5.00th=[ 12], 10.00th=[ 34], 20.00th=[ 99],
00:26:49.658 | 30.00th=[ 167], 40.00th=[ 207], 50.00th=[ 232], 60.00th=[ 257],
00:26:49.658 | 70.00th=[ 296], 80.00th=[ 368], 90.00th=[ 477], 95.00th=[ 642],
00:26:49.658 | 99.00th=[ 776], 99.50th=[ 793], 99.90th=[ 802], 99.95th=[ 810],
00:26:49.658 | 99.99th=[ 810]
00:26:49.658 bw ( KiB/s): min=22016, max=159232, per=8.44%, avg=63562.25, stdev=32693.71, samples=20
00:26:49.658 iops : min= 86, max= 622, avg=248.25, stdev=127.76, samples=20
00:26:49.658 lat (usec) : 1000=0.08%
00:26:49.658 lat (msec) : 2=0.82%, 4=1.53%, 10=2.36%, 20=1.77%, 50=6.48%
00:26:49.658 lat (msec) : 100=7.15%, 250=38.02%, 500=32.44%, 750=7.15%, 1000=2.20%
00:26:49.658 cpu : usr=0.85%, sys=1.02%, ctx=1590, majf=0, minf=1
00:26:49.658 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5%
00:26:49.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:49.658 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:49.658 issued rwts: total=0,2546,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:49.658 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:49.658 job2: (groupid=0, jobs=1): err= 0: pid=3220170: Sun Sep 29 16:34:49 2024
00:26:49.658 write: IOPS=271, BW=67.8MiB/s (71.1MB/s)(689MiB/10158msec); 0 zone resets
00:26:49.658 slat (usec): min=25, max=94191, avg=3176.94, stdev=7110.61
00:26:49.658 clat (msec): min=22, max=790, avg=232.63, stdev=130.86
00:26:49.658 lat (msec): min=27, max=790, avg=235.80, stdev=132.19
00:26:49.658 clat percentiles (msec):
00:26:49.658 | 1.00th=[ 63], 5.00th=[ 74], 10.00th=[ 89], 20.00th=[ 127],
00:26:49.658 | 30.00th=[ 140], 40.00th=[ 174], 50.00th=[ 201], 60.00th=[ 241],
00:26:49.658 | 70.00th=[ 279], 80.00th=[ 326], 90.00th=[ 409], 95.00th=[ 443],
00:26:49.658 | 99.00th=[ 709], 99.50th=[ 743], 99.90th=[ 785], 99.95th=[ 793],
00:26:49.658 | 99.99th=[ 793]
00:26:49.658 bw ( KiB/s): min=24064, max=180736, per=9.15%, avg=68902.70, stdev=37015.89, samples=20
00:26:49.658 iops : min= 94, max= 706, avg=269.10, stdev=144.62, samples=20
00:26:49.658 lat (msec) : 50=0.29%, 100=11.80%, 250=50.71%, 500=33.79%, 750=2.94%
00:26:49.658 lat (msec) : 1000=0.47%
00:26:49.658 cpu : usr=0.88%, sys=0.87%, ctx=943, majf=0, minf=1
00:26:49.658 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7%
00:26:49.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:49.658 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:49.658 issued rwts: total=0,2755,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:49.658 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:49.658 job3: (groupid=0, jobs=1): err= 0: pid=3220171: Sun Sep 29 16:34:49 2024
00:26:49.658 write: IOPS=211, BW=52.9MiB/s (55.4MB/s)(539MiB/10194msec); 0 zone resets
00:26:49.658 slat (usec): min=21, max=195852, avg=3881.30, stdev=10601.48
00:26:49.658 clat (msec): min=6, max=944, avg=298.54, stdev=196.21
00:26:49.658 lat (msec): min=6, max=944, avg=302.42, stdev=198.70
00:26:49.658 clat percentiles (msec):
00:26:49.658 | 1.00th=[ 7], 5.00th=[ 44], 10.00th=[ 105], 20.00th=[ 144],
00:26:49.658 | 30.00th=[ 180], 40.00th=[ 218], 50.00th=[ 266], 60.00th=[ 288],
00:26:49.658 | 70.00th=[ 342], 80.00th=[ 422], 90.00th=[ 609], 95.00th=[ 743],
00:26:49.658 | 99.00th=[ 818], 99.50th=[ 869], 99.90th=[ 936], 99.95th=[ 944],
00:26:49.658 | 99.99th=[ 944]
00:26:49.658 bw ( KiB/s): min=18432, max=132608, per=7.12%, avg=53574.70, stdev=28026.11, samples=20
00:26:49.658 iops : min= 72, max= 518, avg=209.25, stdev=109.47, samples=20
00:26:49.658 lat (msec) : 10=3.01%, 20=1.11%, 50=1.53%, 100=3.99%, 250=36.09%
00:26:49.658 lat (msec) : 500=39.47%, 750=10.62%, 1000=4.17%
00:26:49.658 cpu : usr=0.78%, sys=0.66%, ctx=983, majf=0, minf=2
00:26:49.658 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1%
00:26:49.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:49.658 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:49.658 issued rwts: total=0,2156,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:49.658 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:49.658 job4: (groupid=0, jobs=1): err= 0: pid=3220172: Sun Sep 29 16:34:49 2024
00:26:49.658 write: IOPS=371, BW=93.0MiB/s (97.5MB/s)(945MiB/10158msec); 0 zone resets
00:26:49.658 slat (usec): min=16, max=89909, avg=1749.16, stdev=5420.42
00:26:49.658 clat (msec): min=2, max=733, avg=170.24, stdev=137.07
00:26:49.658 lat (msec): min=3, max=738, avg=171.99, stdev=138.34
00:26:49.658 clat percentiles (msec):
00:26:49.658 | 1.00th=[ 13], 5.00th=[ 52], 10.00th=[ 55], 20.00th=[ 59],
00:26:49.658 | 30.00th=[ 62], 40.00th=[ 71], 50.00th=[ 138], 60.00th=[ 159],
00:26:49.658 | 70.00th=[ 209], 80.00th=[ 271], 90.00th=[ 414], 95.00th=[ 451],
00:26:49.659 | 99.00th=[ 575], 99.50th=[ 617], 99.90th=[ 701], 99.95th=[ 726],
00:26:49.659 | 99.99th=[ 735]
00:26:49.659 bw ( KiB/s): min=36864, max=288256, per=12.63%, avg=95077.60, stdev=72451.32, samples=20
00:26:49.659 iops : min= 144, max= 1126, avg=371.35, stdev=282.90, samples=20
00:26:49.659 lat (msec) : 4=0.05%, 10=0.64%, 20=0.95%, 50=2.54%, 100=39.89%
00:26:49.659 lat (msec) : 250=32.80%, 500=20.96%, 750=2.17%
00:26:49.659 cpu : usr=1.07%, sys=1.45%, ctx=1771, majf=0, minf=1
00:26:49.659 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3%
00:26:49.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:49.659 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:49.659 issued rwts: total=0,3778,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:49.659 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:49.659 job5: (groupid=0, jobs=1): err= 0: pid=3220173: Sun Sep 29 16:34:49 2024
00:26:49.659 write: IOPS=287, BW=71.9MiB/s (75.4MB/s)(731MiB/10170msec); 0 zone resets
00:26:49.659 slat (usec): min=16, max=74683, avg=1277.85, stdev=5061.83
00:26:49.659 clat (usec): min=1223, max=939256, avg=221188.69, stdev=184582.66
00:26:49.659 lat (usec): min=1261, max=939309, avg=222466.54, stdev=185169.45
00:26:49.659 clat percentiles (msec):
00:26:49.659 | 1.00th=[ 4], 5.00th=[ 42], 10.00th=[ 52], 20.00th=[ 78],
00:26:49.659 | 30.00th=[ 108], 40.00th=[ 136], 50.00th=[ 167], 60.00th=[ 203],
00:26:49.659 | 70.00th=[ 253], 80.00th=[ 330], 90.00th=[ 477], 95.00th=[ 617],
00:26:49.659 | 99.00th=[ 877], 99.50th=[ 894], 99.90th=[ 919], 99.95th=[ 927],
00:26:49.659 | 99.99th=[ 936]
00:26:49.659 bw ( KiB/s): min=30720, max=162304, per=9.73%, avg=73238.00, stdev=33457.49, samples=20
00:26:49.659 iops : min= 120, max= 634, avg=286.05, stdev=130.74, samples=20
00:26:49.659 lat (msec) : 2=0.51%, 4=0.89%, 10=1.74%, 20=0.44%, 50=5.85%
00:26:49.659 lat (msec) : 100=17.03%, 250=43.16%, 500=21.20%, 750=6.46%, 1000=2.70%
00:26:49.659 cpu : usr=0.97%, sys=1.01%, ctx=2160, majf=0, minf=1
00:26:49.659 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.8%
00:26:49.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:49.659 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:49.659 issued rwts: total=0,2924,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:49.659 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:49.659 job6: (groupid=0, jobs=1): err= 0: pid=3220174: Sun Sep 29 16:34:49 2024
00:26:49.659 write: IOPS=187, BW=46.9MiB/s (49.2MB/s)(474MiB/10111msec); 0 zone resets
00:26:49.659 slat (usec): min=20, max=148863, avg=4693.68, stdev=11445.85
00:26:49.659 clat (msec): min=10, max=934, avg=335.88, stdev=205.32
00:26:49.659 lat (msec): min=10, max=934, avg=340.57, stdev=207.79
00:26:49.659 clat percentiles (msec):
00:26:49.659 | 1.00th=[ 25], 5.00th=[ 96], 10.00th=[ 110], 20.00th=[ 133],
00:26:49.659 | 30.00th=[ 176], 40.00th=[ 243], 50.00th=[ 321], 60.00th=[ 363],
00:26:49.659 | 70.00th=[ 430], 80.00th=[ 472], 90.00th=[ 676], 95.00th=[ 743],
00:26:49.659 | 99.00th=[ 835], 99.50th=[ 894], 99.90th=[ 927], 99.95th=[ 936],
00:26:49.659 | 99.99th=[ 936]
00:26:49.659 bw ( KiB/s): min=15872, max=139030, per=6.23%, avg=46932.30, stdev=30826.48, samples=20
00:26:49.659 iops : min= 62, max= 543, avg=183.30, stdev=120.39, samples=20
00:26:49.659 lat (msec) : 20=0.79%, 50=1.00%, 100=4.69%, 250=34.07%, 500=40.88%
00:26:49.659 lat (msec) : 750=14.14%, 1000=4.43%
00:26:49.659 cpu : usr=0.53%, sys=0.64%, ctx=669, majf=0, minf=1
00:26:49.659 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7%
00:26:49.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:49.659 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:49.659 issued rwts: total=0,1896,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:49.659 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:49.659 job7: (groupid=0, jobs=1): err= 0: pid=3220180: Sun Sep 29 16:34:49 2024
00:26:49.659 write: IOPS=205, BW=51.4MiB/s (53.9MB/s)(524MiB/10191msec); 0 zone resets
00:26:49.659 slat (usec): min=22, max=207153, avg=3813.13, stdev=11492.06
00:26:49.659 clat (msec): min=8, max=877, avg=307.37, stdev=212.38
00:26:49.659 lat (msec): min=11, max=877, avg=311.18, stdev=215.81
00:26:49.659 clat percentiles (msec):
00:26:49.659 | 1.00th=[ 34], 5.00th=[ 75], 10.00th=[ 86], 20.00th=[ 128],
00:26:49.659 | 30.00th=[ 142], 40.00th=[ 224], 50.00th=[ 249], 60.00th=[ 300],
00:26:49.659 | 70.00th=[ 393], 80.00th=[ 430], 90.00th=[ 701], 95.00th=[ 776],
00:26:49.659 | 99.00th=[ 852], 99.50th=[ 869], 99.90th=[ 877], 99.95th=[ 877],
00:26:49.659 | 99.99th=[ 877]
00:26:49.659 bw ( KiB/s): min=18432, max=133120, per=6.91%, avg=52019.80, stdev=34486.40, samples=20
00:26:49.659 iops : min= 72, max= 520, avg=203.15, stdev=134.76, samples=20
00:26:49.659 lat (msec) : 10=0.05%, 20=0.48%, 50=0.91%, 100=12.46%, 250=36.61%
00:26:49.659 lat (msec) : 500=33.79%, 750=9.69%, 1000=6.01%
00:26:49.659 cpu : usr=0.55%, sys=0.76%, ctx=1075, majf=0, minf=1
00:26:49.659 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0%
00:26:49.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:49.659 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:49.659 issued rwts: total=0,2095,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:49.659 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:49.659 job8: (groupid=0, jobs=1): err= 0: pid=3220214: Sun Sep 29 16:34:49 2024
00:26:49.659 write: IOPS=178, BW=44.7MiB/s (46.9MB/s)(454MiB/10155msec); 0 zone resets
00:26:49.659 slat (usec): min=22, max=69006, avg=4892.40, stdev=10808.56
00:26:49.659 clat (msec): min=30, max=903, avg=352.61, stdev=193.93
00:26:49.659 lat (msec): min=30, max=903, avg=357.50, stdev=195.94
00:26:49.659 clat percentiles (msec):
00:26:49.659 | 1.00th=[ 69], 5.00th=[ 123], 10.00th=[ 163], 20.00th=[ 209],
00:26:49.659 | 30.00th=[ 236], 40.00th=[ 271], 50.00th=[ 305], 60.00th=[ 338],
00:26:49.659 | 70.00th=[ 393], 80.00th=[ 443], 90.00th=[ 701], 95.00th=[ 793],
00:26:49.659 | 99.00th=[ 869], 99.50th=[ 885], 99.90th=[ 894], 99.95th=[ 902],
00:26:49.659 | 99.99th=[ 902]
00:26:49.659 bw ( KiB/s): min=18432, max=84480, per=5.96%, avg=44900.40, stdev=20780.89, samples=20
00:26:49.659 iops : min= 72, max= 330, avg=175.35, stdev=81.23, samples=20
00:26:49.659 lat (msec) : 50=0.44%, 100=2.59%, 250=32.42%, 500=48.60%, 750=8.42%
00:26:49.659 lat (msec) : 1000=7.54%
00:26:49.659 cpu : usr=0.47%, sys=0.65%, ctx=592, majf=0, minf=1
00:26:49.659 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.8%, >=64=96.5%
00:26:49.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:49.659 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:49.659 issued rwts: total=0,1817,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:49.659 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:49.659 job9: (groupid=0, jobs=1): err= 0: pid=3220256: Sun Sep 29 16:34:49 2024
00:26:49.659 write: IOPS=270, BW=67.5MiB/s (70.8MB/s)(688MiB/10191msec); 0 zone resets
00:26:49.659 slat (usec): min=14, max=208393, avg=1966.00, stdev=8693.83
00:26:49.659 clat (usec): min=1244, max=767398, avg=234788.84, stdev=169723.98
00:26:49.659 lat (usec): min=1261, max=767431, avg=236754.84, stdev=171037.42
00:26:49.659 clat percentiles (msec):
00:26:49.659 | 1.00th=[ 4], 5.00th=[ 20], 10.00th=[ 42], 20.00th=[ 69],
00:26:49.659 | 30.00th=[ 113], 40.00th=[ 157], 50.00th=[ 205], 60.00th=[ 266],
00:26:49.659 | 70.00th=[ 309], 80.00th=[ 388], 90.00th=[ 472], 95.00th=[ 558],
00:26:49.659 | 99.00th=[ 709], 99.50th=[ 735], 99.90th=[ 760], 99.95th=[ 768],
00:26:49.659 | 99.99th=[ 768]
00:26:49.659 bw ( KiB/s): min=23552, max=166400, per=9.15%, avg=68867.05, stdev=33671.11, samples=20
00:26:49.659 iops : min= 92, max= 650, avg=268.95, stdev=131.53, samples=20
00:26:49.659 lat (msec) : 2=0.33%, 4=0.76%, 10=1.60%, 20=2.36%, 50=7.95%
00:26:49.659 lat (msec) : 100=13.69%, 250=31.49%, 500=33.27%, 750=8.32%, 1000=0.22%
00:26:49.659 cpu : usr=0.91%, sys=0.96%, ctx=1779, majf=0, minf=1
00:26:49.659 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7%
00:26:49.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:49.659 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:49.659 issued rwts: total=0,2753,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:49.659 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:49.659 job10: (groupid=0, jobs=1): err= 0: pid=3220283: Sun Sep 29 16:34:49 2024
00:26:49.659 write: IOPS=326, BW=81.6MiB/s (85.6MB/s)(830MiB/10171msec); 0 zone resets
00:26:49.659 slat (usec): min=16, max=241020, avg=2229.15, stdev=7122.50
00:26:49.659 clat (usec): min=1595, max=931785, avg=193725.05, stdev=135700.73
00:26:49.659 lat (usec): min=1666, max=939186, avg=195954.20, stdev=136811.77
00:26:49.659 clat percentiles (msec):
00:26:49.659 | 1.00th=[ 18], 5.00th=[ 44], 10.00th=[ 55], 20.00th=[ 109],
00:26:49.659 | 30.00th=[ 132], 40.00th=[ 148], 50.00th=[ 163], 60.00th=[ 178],
00:26:49.659 | 70.00th=[ 201], 80.00th=[ 262], 90.00th=[ 359], 95.00th=[ 523],
00:26:49.659 | 99.00th=[ 684], 99.50th=[ 785], 99.90th=[ 911], 99.95th=[ 927],
00:26:49.659 | 99.99th=[ 936]
00:26:49.659 bw ( KiB/s): min=29184, max=196608, per=11.07%, avg=83369.15, stdev=39017.64, samples=20
00:26:49.659 iops : min= 114, max= 768, avg=325.65, stdev=152.41, samples=20
00:26:49.659 lat (msec) : 2=0.09%, 4=0.24%, 10=0.48%, 20=0.45%, 50=6.72%
00:26:49.659 lat (msec) : 100=9.79%, 250=61.23%, 500=15.36%, 750=5.03%, 1000=0.60%
00:26:49.659 cpu : usr=1.11%, sys=0.96%, ctx=1596, majf=0, minf=2
00:26:49.659 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1%
00:26:49.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:49.659 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:49.659 issued rwts: total=0,3320,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:49.659 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:49.659
00:26:49.659 Run status group 0 (all jobs):
00:26:49.659 WRITE: bw=735MiB/s (771MB/s), 44.7MiB/s-97.5MiB/s (46.9MB/s-102MB/s), io=7496MiB (7860MB), run=10111-10194msec
00:26:49.659
00:26:49.659 Disk stats (read/write):
00:26:49.659 nvme0n1: ios=49/7725, merge=0/0, ticks=101/1230615, in_queue=1230716, util=95.86%
00:26:49.659 nvme10n1: ios=34/4952, merge=0/0, ticks=55/1224395, in_queue=1224450, util=95.80%
00:26:49.659 nvme1n1: ios=42/5362, merge=0/0, ticks=490/1213512, in_queue=1214002, util=100.00%
00:26:49.659 nvme2n1: ios=25/4175, merge=0/0, ticks=248/1210605, in_queue=1210853, util=98.35%
00:26:49.659 nvme3n1: ios=0/7408, merge=0/0, ticks=0/1224930, in_queue=1224930, util=96.54%
00:26:49.659 nvme4n1: ios=0/5709, merge=0/0, ticks=0/1233739, in_queue=1233739, util=97.16%
00:26:49.659 nvme5n1: ios=39/3634, merge=0/0, ticks=2646/1212747, in_queue=1215393, util=100.00%
00:26:49.659 nvme6n1: ios=0/4048, merge=0/0, ticks=0/1212013, in_queue=1212013, util=97.64%
00:26:49.660 nvme7n1: ios=0/3491, merge=0/0, ticks=0/1213169, in_queue=1213169, util=98.50%
00:26:49.660 nvme8n1: ios=36/5363, merge=0/0, ticks=2319/1191641, in_queue=1193960, util=100.00%
00:26:49.660 nvme9n1: ios=0/6499, merge=0/0, ticks=0/1216715, in_queue=1216715, util=99.13%
00:26:49.660 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync
00:26:49.660 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11
00:26:49.660 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:49.660 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:26:49.660 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:26:49.660 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1
00:26:49.660 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:26:49.660 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:26:49.660 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1
00:26:49.660 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:26:49.660 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1
00:26:49.660 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:26:49.660 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:49.660 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:49.660 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:49.660 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:49.660 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:49.660 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2
00:26:49.660 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s)
00:26:49.660 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2
00:26:49.660 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:26:49.660 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:26:49.660 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2
00:26:49.660 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:26:49.660 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2
00:26:49.660 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:26:49.660 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:26:49.660 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:49.660 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:49.660 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:49.660 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:49.660 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3
00:26:50.225 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s)
00:26:50.225 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3
00:26:50.225 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:26:50.225 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:26:50.225 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3
00:26:50.225 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:26:50.225 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3
00:26:50.225 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:26:50.225 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
00:26:50.225 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:50.225 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:50.225 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:50.225 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:50.225 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4
00:26:50.225 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s)
00:26:50.225 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4
00:26:50.225 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:26:50.225 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:26:50.225 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4
00:26:50.225 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:26:50.225 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4
00:26:50.225 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:26:50.225 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4
00:26:50.225 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:50.225 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:50.225 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:50.225 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:50.225 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5
00:26:50.790 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s)
00:26:50.790 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5
00:26:50.790 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:26:50.790 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:26:50.790 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5
00:26:50.790 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:26:50.790 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5
00:26:50.790 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:26:50.790 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5
00:26:50.790 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:50.790 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:50.790 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:50.790 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:50.790 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6
00:26:51.047 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s)
00:26:51.047 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6
00:26:51.047 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:26:51.047 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:26:51.047 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6
00:26:51.047 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:26:51.047 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6
00:26:51.047 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:26:51.047 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6
00:26:51.047 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:51.047 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:51.047 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:51.047 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:51.047 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7
00:26:51.306 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s)
00:26:51.306 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7
00:26:51.306 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:26:51.306 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:26:51.306 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7
00:26:51.306 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:26:51.306 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7
00:26:51.306 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:26:51.306 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7
00:26:51.306 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:51.306 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:51.306 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:51.306 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:51.306 16:34:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8
00:26:51.563 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s)
00:26:51.563 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8
00:26:51.563 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:26:51.563 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:26:51.563 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8
00:26:51.564 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:26:51.564 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8
00:26:51.564 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:26:51.564 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8
00:26:51.564 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:51.564 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:51.564 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:51.564 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:51.564 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9
00:26:51.821 NQN:nqn.2016-06.io.spdk:cnode9 disconnected
1 controller(s) 00:26:51.821 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:51.821 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:51.821 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:51.822 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:26:51.822 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:51.822 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:26:51.822 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:51.822 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:51.822 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.822 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:51.822 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.822 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:51.822 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:52.080 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:52.080 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:52.080 16:34:52 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:52.080 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:52.080 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:26:52.080 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:52.080 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:26:52.080 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:52.080 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:52.080 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.080 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:52.080 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.080 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:52.080 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:52.080 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:52.080 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:52.080 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:52.080 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o 
NAME,SERIAL 00:26:52.080 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:26:52.080 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:52.080 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:26:52.080 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:52.080 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:52.080 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.080 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:52.080 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.080 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:52.080 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:52.080 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:52.080 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # nvmfcleanup 00:26:52.080 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:26:52.080 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:52.080 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:26:52.080 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:52.080 
16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:52.080 rmmod nvme_tcp 00:26:52.080 rmmod nvme_fabrics 00:26:52.338 rmmod nvme_keyring 00:26:52.338 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:52.339 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:26:52.339 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:26:52.339 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@513 -- # '[' -n 3215181 ']' 00:26:52.339 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@514 -- # killprocess 3215181 00:26:52.339 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 3215181 ']' 00:26:52.339 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 3215181 00:26:52.339 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:26:52.339 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:52.339 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3215181 00:26:52.339 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:52.339 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:52.339 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3215181' 00:26:52.339 killing process with pid 3215181 00:26:52.339 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 3215181 00:26:52.339 
16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 3215181 00:26:55.623 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:26:55.623 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:26:55.623 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:26:55.623 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:26:55.623 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # iptables-save 00:26:55.623 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:26:55.623 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # iptables-restore 00:26:55.623 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:55.623 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:55.623 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:55.623 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:55.623 16:34:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:57.525 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:57.525 00:26:57.525 real 1m5.729s 00:26:57.525 user 3m50.022s 00:26:57.525 sys 0m16.098s 00:26:57.525 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:57.525 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:26:57.525 ************************************ 00:26:57.525 END TEST nvmf_multiconnection 00:26:57.525 ************************************ 00:26:57.525 16:34:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:57.525 16:34:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:57.525 16:34:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:57.525 16:34:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:57.525 ************************************ 00:26:57.525 START TEST nvmf_initiator_timeout 00:26:57.525 ************************************ 00:26:57.525 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:57.525 * Looking for test storage... 
00:26:57.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:57.525 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:57.525 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lcov --version 00:26:57.525 16:34:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:57.525 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:57.525 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:57.525 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:57.525 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:57.525 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:26:57.525 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:26:57.525 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:26:57.525 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:26:57.525 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:26:57.525 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:26:57.525 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:26:57.525 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:57.525 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 
00:26:57.525 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:26:57.525 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:57.525 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:57.525 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:57.525 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:26:57.525 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:57.525 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:57.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.526 --rc genhtml_branch_coverage=1 00:26:57.526 --rc genhtml_function_coverage=1 00:26:57.526 --rc genhtml_legend=1 00:26:57.526 --rc geninfo_all_blocks=1 00:26:57.526 --rc geninfo_unexecuted_blocks=1 00:26:57.526 00:26:57.526 ' 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:57.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.526 --rc genhtml_branch_coverage=1 00:26:57.526 --rc genhtml_function_coverage=1 00:26:57.526 --rc genhtml_legend=1 00:26:57.526 --rc geninfo_all_blocks=1 00:26:57.526 --rc geninfo_unexecuted_blocks=1 00:26:57.526 00:26:57.526 ' 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:57.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.526 --rc genhtml_branch_coverage=1 00:26:57.526 --rc genhtml_function_coverage=1 00:26:57.526 --rc genhtml_legend=1 00:26:57.526 --rc geninfo_all_blocks=1 00:26:57.526 --rc geninfo_unexecuted_blocks=1 00:26:57.526 00:26:57.526 ' 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:57.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.526 --rc genhtml_branch_coverage=1 00:26:57.526 --rc genhtml_function_coverage=1 00:26:57.526 --rc genhtml_legend=1 00:26:57.526 --rc geninfo_all_blocks=1 00:26:57.526 --rc geninfo_unexecuted_blocks=1 00:26:57.526 00:26:57.526 ' 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:57.526 
16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:57.526 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@472 -- # prepare_net_devs 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@434 -- # local -g is_hw=no 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@436 -- # remove_spdk_ns 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:26:57.526 16:34:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:00.061 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:00.061 16:35:00 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:27:00.061 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:00.061 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:00.061 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:00.061 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:00.061 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:00.061 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:27:00.061 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:00.061 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:27:00.061 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:27:00.061 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:27:00.062 
16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:00.062 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:00.062 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:27:00.062 16:35:00 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ up == up ]] 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:00.062 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ up == up ]] 00:27:00.062 16:35:00 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:00.062 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # is_hw=yes 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- 
# NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:00.062 16:35:00 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:00.062 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:00.062 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:27:00.062 00:27:00.062 --- 10.0.0.2 ping statistics --- 00:27:00.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:00.062 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:00.062 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:00.062 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:27:00.062 00:27:00.062 --- 10.0.0.1 ping statistics --- 00:27:00.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:00.062 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # return 0 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@505 -- # nvmfpid=3223905 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@506 -- # waitforlisten 3223905 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 3223905 ']' 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:00.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:00.062 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:00.062 [2024-09-29 16:35:00.331302] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:27:00.062 [2024-09-29 16:35:00.331469] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:00.062 [2024-09-29 16:35:00.488099] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:00.320 [2024-09-29 16:35:00.754758] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:00.320 [2024-09-29 16:35:00.754840] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:00.320 [2024-09-29 16:35:00.754864] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:00.320 [2024-09-29 16:35:00.754888] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:00.320 [2024-09-29 16:35:00.754907] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:00.320 [2024-09-29 16:35:00.755026] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:00.320 [2024-09-29 16:35:00.755086] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:27:00.320 [2024-09-29 16:35:00.755156] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:00.320 [2024-09-29 16:35:00.755162] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:27:00.887 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:00.887 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:27:00.887 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:27:00.887 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:00.887 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:00.887 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:00.887 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:27:00.887 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:00.887 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.887 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:00.887 Malloc0 00:27:00.887 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.887 16:35:01 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:27:00.887 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.887 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:00.887 Delay0 00:27:00.887 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.887 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:00.887 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.887 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:00.887 [2024-09-29 16:35:01.440473] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:01.145 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.145 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:27:01.145 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.145 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:01.145 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.145 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:01.145 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.145 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:01.145 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.145 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:01.145 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.145 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:01.145 [2024-09-29 16:35:01.470600] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:01.145 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.145 16:35:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:01.711 16:35:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:27:01.711 16:35:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:27:01.711 16:35:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:01.711 16:35:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:01.711 16:35:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:27:03.609 16:35:04 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:03.610 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:03.610 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:27:03.610 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:03.610 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:03.610 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:27:03.610 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=3224339 00:27:03.610 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:27:03.610 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:27:03.610 [global] 00:27:03.610 thread=1 00:27:03.610 invalidate=1 00:27:03.610 rw=write 00:27:03.610 time_based=1 00:27:03.610 runtime=60 00:27:03.610 ioengine=libaio 00:27:03.610 direct=1 00:27:03.610 bs=4096 00:27:03.610 iodepth=1 00:27:03.610 norandommap=0 00:27:03.610 numjobs=1 00:27:03.610 00:27:03.610 verify_dump=1 00:27:03.610 verify_backlog=512 00:27:03.610 verify_state_save=0 00:27:03.610 do_verify=1 00:27:03.610 verify=crc32c-intel 00:27:03.610 [job0] 00:27:03.610 filename=/dev/nvme0n1 00:27:03.610 Could not set queue depth (nvme0n1) 00:27:03.868 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:03.868 fio-3.35 00:27:03.868 Starting 1 thread 00:27:07.148 16:35:07 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:27:07.148 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.148 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:07.148 true 00:27:07.148 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.148 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:27:07.148 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.148 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:07.148 true 00:27:07.148 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.148 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:27:07.148 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.148 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:07.148 true 00:27:07.148 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.148 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:27:07.148 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.148 16:35:07 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:07.148 true 00:27:07.148 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.148 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:27:09.675 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:27:09.675 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.675 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:09.675 true 00:27:09.675 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.675 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:27:09.675 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.675 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:09.675 true 00:27:09.675 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.675 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:27:09.675 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.675 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:09.675 true 00:27:09.675 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:27:09.675 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:27:09.675 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.675 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:09.675 true 00:27:09.675 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.675 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:27:09.675 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 3224339 00:28:05.886 00:28:05.886 job0: (groupid=0, jobs=1): err= 0: pid=3224408: Sun Sep 29 16:36:04 2024 00:28:05.886 read: IOPS=126, BW=506KiB/s (519kB/s)(29.7MiB/60026msec) 00:28:05.886 slat (nsec): min=4613, max=74709, avg=12022.64, stdev=8339.81 00:28:05.886 clat (usec): min=273, max=45058, avg=2187.27, stdev=8429.09 00:28:05.886 lat (usec): min=279, max=45078, avg=2199.29, stdev=8431.14 00:28:05.886 clat percentiles (usec): 00:28:05.886 | 1.00th=[ 293], 5.00th=[ 302], 10.00th=[ 306], 20.00th=[ 322], 00:28:05.886 | 30.00th=[ 334], 40.00th=[ 338], 50.00th=[ 347], 60.00th=[ 359], 00:28:05.886 | 70.00th=[ 383], 80.00th=[ 433], 90.00th=[ 486], 95.00th=[ 553], 00:28:05.886 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[43779], 00:28:05.886 | 99.99th=[44827] 00:28:05.886 write: IOPS=127, BW=512KiB/s (524kB/s)(30.0MiB/60026msec); 0 zone resets 00:28:05.886 slat (usec): min=6, max=7946, avg=17.10, stdev=104.87 00:28:05.886 clat (usec): min=213, max=41039k, avg=5614.56, stdev=468283.63 00:28:05.886 lat (usec): min=222, max=41039k, avg=5631.66, stdev=468283.54 00:28:05.886 clat percentiles (usec): 00:28:05.886 | 1.00th=[ 223], 5.00th=[ 227], 10.00th=[ 231], 00:28:05.886 | 20.00th=[ 
237], 30.00th=[ 241], 40.00th=[ 247], 00:28:05.886 | 50.00th=[ 255], 60.00th=[ 265], 70.00th=[ 281], 00:28:05.886 | 80.00th=[ 297], 90.00th=[ 326], 95.00th=[ 363], 00:28:05.886 | 99.00th=[ 404], 99.50th=[ 494], 99.90th=[ 873], 00:28:05.886 | 99.95th=[ 1205], 99.99th=[17112761] 00:28:05.886 bw ( KiB/s): min= 680, max= 8192, per=100.00%, avg=4726.15, stdev=2476.96, samples=13 00:28:05.886 iops : min= 170, max= 2048, avg=1181.54, stdev=619.24, samples=13 00:28:05.886 lat (usec) : 250=22.85%, 500=73.00%, 750=1.77%, 1000=0.12% 00:28:05.886 lat (msec) : 2=0.02%, 10=0.01%, 50=2.22%, >=2000=0.01% 00:28:05.886 cpu : usr=0.24%, sys=0.48%, ctx=15284, majf=0, minf=1 00:28:05.886 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:05.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.886 issued rwts: total=7600,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.886 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:05.886 00:28:05.886 Run status group 0 (all jobs): 00:28:05.886 READ: bw=506KiB/s (519kB/s), 506KiB/s-506KiB/s (519kB/s-519kB/s), io=29.7MiB (31.1MB), run=60026-60026msec 00:28:05.886 WRITE: bw=512KiB/s (524kB/s), 512KiB/s-512KiB/s (524kB/s-524kB/s), io=30.0MiB (31.5MB), run=60026-60026msec 00:28:05.886 00:28:05.886 Disk stats (read/write): 00:28:05.886 nvme0n1: ios=7695/7680, merge=0/0, ticks=17526/1961, in_queue=19487, util=99.79% 00:28:05.886 16:36:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:05.886 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:05.886 16:36:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:05.886 16:36:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@1219 -- # local i=0 00:28:05.886 16:36:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:05.886 16:36:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:05.886 16:36:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:05.886 16:36:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:05.886 16:36:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:28:05.887 16:36:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:28:05.887 16:36:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:28:05.887 nvmf hotplug test: fio successful as expected 00:28:05.887 16:36:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:05.887 16:36:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.887 16:36:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:05.887 16:36:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.887 16:36:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:28:05.887 16:36:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:28:05.887 16:36:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # 
nvmftestfini 00:28:05.887 16:36:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # nvmfcleanup 00:28:05.887 16:36:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:28:05.887 16:36:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:05.887 16:36:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:28:05.887 16:36:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:05.887 16:36:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:05.887 rmmod nvme_tcp 00:28:05.887 rmmod nvme_fabrics 00:28:05.887 rmmod nvme_keyring 00:28:05.887 16:36:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:05.887 16:36:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:28:05.887 16:36:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:28:05.887 16:36:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@513 -- # '[' -n 3223905 ']' 00:28:05.887 16:36:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@514 -- # killprocess 3223905 00:28:05.887 16:36:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 3223905 ']' 00:28:05.887 16:36:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 3223905 00:28:05.887 16:36:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:28:05.887 16:36:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:05.887 16:36:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
3223905 00:28:05.887 16:36:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:05.887 16:36:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:05.887 16:36:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3223905' 00:28:05.887 killing process with pid 3223905 00:28:05.887 16:36:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 3223905 00:28:05.887 16:36:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 3223905 00:28:05.887 16:36:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:28:05.887 16:36:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:28:05.887 16:36:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:28:05.887 16:36:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:28:05.887 16:36:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # iptables-save 00:28:05.887 16:36:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:28:05.887 16:36:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # iptables-restore 00:28:05.887 16:36:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:05.887 16:36:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:05.887 16:36:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:05.887 16:36:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout 
-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:05.887 16:36:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:07.791 16:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:07.791 00:28:07.791 real 1m10.392s 00:28:07.791 user 4m15.969s 00:28:07.791 sys 0m7.427s 00:28:07.791 16:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:07.791 16:36:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:07.791 ************************************ 00:28:07.791 END TEST nvmf_initiator_timeout 00:28:07.791 ************************************ 00:28:07.791 16:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:28:07.791 16:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:28:07.791 16:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:28:07.791 16:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:28:07.791 16:36:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:10.321 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:10.321 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:28:10.321 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:10.321 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:10.321 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:10.321 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:10.322 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:10.322 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:10.322 16:36:10 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:10.322 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ up 
== up ]] 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:10.322 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:10.322 ************************************ 00:28:10.322 START TEST nvmf_perf_adq 00:28:10.322 ************************************ 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:10.322 * Looking for test storage... 
00:28:10.322 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lcov --version 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:28:10.322 16:36:10 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:10.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:10.322 --rc 
genhtml_branch_coverage=1 00:28:10.322 --rc genhtml_function_coverage=1 00:28:10.322 --rc genhtml_legend=1 00:28:10.322 --rc geninfo_all_blocks=1 00:28:10.322 --rc geninfo_unexecuted_blocks=1 00:28:10.322 00:28:10.322 ' 00:28:10.322 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:10.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:10.323 --rc genhtml_branch_coverage=1 00:28:10.323 --rc genhtml_function_coverage=1 00:28:10.323 --rc genhtml_legend=1 00:28:10.323 --rc geninfo_all_blocks=1 00:28:10.323 --rc geninfo_unexecuted_blocks=1 00:28:10.323 00:28:10.323 ' 00:28:10.323 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:10.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:10.323 --rc genhtml_branch_coverage=1 00:28:10.323 --rc genhtml_function_coverage=1 00:28:10.323 --rc genhtml_legend=1 00:28:10.323 --rc geninfo_all_blocks=1 00:28:10.323 --rc geninfo_unexecuted_blocks=1 00:28:10.323 00:28:10.323 ' 00:28:10.323 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:10.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:10.323 --rc genhtml_branch_coverage=1 00:28:10.323 --rc genhtml_function_coverage=1 00:28:10.323 --rc genhtml_legend=1 00:28:10.323 --rc geninfo_all_blocks=1 00:28:10.323 --rc geninfo_unexecuted_blocks=1 00:28:10.323 00:28:10.323 ' 00:28:10.323 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:10.323 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:28:10.323 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:10.323 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:10.323 16:36:10 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:10.323 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:10.323 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:10.323 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:10.323 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:10.323 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:10.323 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:10.323 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:10.323 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:10.323 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:10.323 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:10.323 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:10.323 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:10.323 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:10.323 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:10.323 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:28:10.323 16:36:10 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:10.323 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:10.323 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:10.323 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:10.323 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:10.323 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:10.323 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:28:10.323 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:10.323 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:28:10.323 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:10.323 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:10.323 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:10.323 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:10.323 16:36:10 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:10.323 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:10.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:10.323 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:10.323 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:10.323 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:10.323 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:28:10.323 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:10.323 16:36:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.297 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:12.297 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:12.297 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:12.297 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:12.297 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:12.297 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:12.297 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:12.297 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:12.297 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:12.297 16:36:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:12.297 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:12.297 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:12.297 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:12.297 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:12.297 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:12.297 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:12.297 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:12.297 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:12.297 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:12.297 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:12.297 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:12.297 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:12.297 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:12.297 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:12.297 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:12.297 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:12.297 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:28:12.297 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:28:12.297 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:28:12.297 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:28:12.297 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:28:12.297 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:28:12.297 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:12.297 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:12.297 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:12.297 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:12.297 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:12.297 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:12.297 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:12.297 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:12.297 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:12.297 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:12.297 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:12.297 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == 
unknown ]] 00:28:12.297 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:12.298 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:12.298 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:12.298 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:12.298 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:28:12.298 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:28:12.298 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:28:12.298 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:12.298 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:12.298 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:12.298 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:12.298 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:12.298 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:12.298 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:12.298 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:12.298 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:12.298 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:12.298 16:36:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:12.298 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:12.298 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:12.298 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:12.298 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:12.298 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:12.298 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:12.298 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:12.298 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:12.298 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:12.298 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:28:12.298 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:12.298 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:28:12.298 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:28:12.298 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:28:12.298 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:12.298 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:12.870 16:36:13 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:15.401 16:36:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # prepare_net_devs 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@434 -- # local -g is_hw=no 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # remove_spdk_ns 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
local -a pci_devs 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:20.674 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:20.674 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:20.675 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:20.675 16:36:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:20.675 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:20.675 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # is_hw=yes 
00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:20.675 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:20.675 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:28:20.675 00:28:20.675 --- 10.0.0.2 ping statistics --- 00:28:20.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:20.675 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:20.675 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:20.675 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:28:20.675 00:28:20.675 --- 10.0.0.1 ping statistics --- 00:28:20.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:20.675 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # return 0 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # nvmfpid=3236806 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # waitforlisten 3236806 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 3236806 ']' 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:20.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:20.675 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:20.675 [2024-09-29 16:36:20.962195] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:28:20.675 [2024-09-29 16:36:20.962343] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:20.675 [2024-09-29 16:36:21.106427] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:20.934 [2024-09-29 16:36:21.374474] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:20.934 [2024-09-29 16:36:21.374556] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:20.934 [2024-09-29 16:36:21.374581] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:20.934 [2024-09-29 16:36:21.374605] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:20.934 [2024-09-29 16:36:21.374625] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:20.934 [2024-09-29 16:36:21.374755] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:20.934 [2024-09-29 16:36:21.374797] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:28:20.934 [2024-09-29 16:36:21.374817] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.934 [2024-09-29 16:36:21.374819] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:28:21.499 16:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:21.499 16:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:28:21.499 16:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:28:21.499 16:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:21.499 16:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:21.499 16:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:21.499 16:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:28:21.499 16:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:21.499 16:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:21.499 16:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.499 16:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:21.499 16:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.757 16:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:21.757 16:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:28:21.757 16:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.757 16:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:21.757 16:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.757 16:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:21.757 16:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.757 16:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:22.015 16:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.015 16:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:28:22.015 16:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.015 16:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:22.015 [2024-09-29 16:36:22.474991] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:22.015 16:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.015 
16:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:22.015 16:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.015 16:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:22.015 Malloc1 00:28:22.015 16:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.015 16:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:22.015 16:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.015 16:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:22.015 16:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.015 16:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:22.015 16:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.015 16:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:22.015 16:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.015 16:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:22.015 16:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.015 16:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:22.273 [2024-09-29 16:36:22.581085] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:28:22.273 16:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.273 16:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3237089 00:28:22.273 16:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:28:22.273 16:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:24.173 16:36:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:28:24.173 16:36:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.173 16:36:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:24.173 16:36:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.173 16:36:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:28:24.173 "tick_rate": 2700000000, 00:28:24.173 "poll_groups": [ 00:28:24.173 { 00:28:24.173 "name": "nvmf_tgt_poll_group_000", 00:28:24.173 "admin_qpairs": 1, 00:28:24.173 "io_qpairs": 1, 00:28:24.173 "current_admin_qpairs": 1, 00:28:24.173 "current_io_qpairs": 1, 00:28:24.173 "pending_bdev_io": 0, 00:28:24.173 "completed_nvme_io": 16865, 00:28:24.173 "transports": [ 00:28:24.173 { 00:28:24.173 "trtype": "TCP" 00:28:24.173 } 00:28:24.173 ] 00:28:24.173 }, 00:28:24.173 { 00:28:24.173 "name": "nvmf_tgt_poll_group_001", 00:28:24.173 "admin_qpairs": 0, 00:28:24.173 "io_qpairs": 1, 00:28:24.173 "current_admin_qpairs": 0, 00:28:24.173 "current_io_qpairs": 1, 00:28:24.173 "pending_bdev_io": 0, 00:28:24.173 "completed_nvme_io": 16885, 00:28:24.173 "transports": [ 
00:28:24.173 { 00:28:24.173 "trtype": "TCP" 00:28:24.173 } 00:28:24.173 ] 00:28:24.173 }, 00:28:24.173 { 00:28:24.173 "name": "nvmf_tgt_poll_group_002", 00:28:24.173 "admin_qpairs": 0, 00:28:24.173 "io_qpairs": 1, 00:28:24.173 "current_admin_qpairs": 0, 00:28:24.173 "current_io_qpairs": 1, 00:28:24.173 "pending_bdev_io": 0, 00:28:24.173 "completed_nvme_io": 17120, 00:28:24.173 "transports": [ 00:28:24.173 { 00:28:24.173 "trtype": "TCP" 00:28:24.173 } 00:28:24.173 ] 00:28:24.173 }, 00:28:24.173 { 00:28:24.173 "name": "nvmf_tgt_poll_group_003", 00:28:24.173 "admin_qpairs": 0, 00:28:24.173 "io_qpairs": 1, 00:28:24.173 "current_admin_qpairs": 0, 00:28:24.173 "current_io_qpairs": 1, 00:28:24.173 "pending_bdev_io": 0, 00:28:24.173 "completed_nvme_io": 16521, 00:28:24.173 "transports": [ 00:28:24.173 { 00:28:24.173 "trtype": "TCP" 00:28:24.173 } 00:28:24.173 ] 00:28:24.173 } 00:28:24.173 ] 00:28:24.173 }' 00:28:24.173 16:36:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:28:24.173 16:36:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:28:24.173 16:36:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:28:24.173 16:36:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:28:24.173 16:36:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3237089 00:28:32.303 Initializing NVMe Controllers 00:28:32.303 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:32.303 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:32.303 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:32.303 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:32.303 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:32.303 Initialization complete. Launching workers. 00:28:32.303 ======================================================== 00:28:32.303 Latency(us) 00:28:32.303 Device Information : IOPS MiB/s Average min max 00:28:32.303 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9140.58 35.71 7002.39 2450.35 11925.75 00:28:32.303 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9230.88 36.06 6936.59 2536.36 14139.10 00:28:32.303 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9368.78 36.60 6833.18 3252.28 10782.68 00:28:32.303 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8989.88 35.12 7118.46 2467.30 10435.92 00:28:32.303 ======================================================== 00:28:32.303 Total : 36730.12 143.48 6971.10 2450.35 14139.10 00:28:32.303 00:28:32.303 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:28:32.303 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # nvmfcleanup 00:28:32.303 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:32.303 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:32.303 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:32.303 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:32.303 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:32.303 rmmod nvme_tcp 00:28:32.303 rmmod nvme_fabrics 00:28:32.303 rmmod nvme_keyring 00:28:32.561 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:32.561 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:32.561 16:36:32 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:32.561 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@513 -- # '[' -n 3236806 ']' 00:28:32.561 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # killprocess 3236806 00:28:32.561 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 3236806 ']' 00:28:32.561 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 3236806 00:28:32.561 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:28:32.561 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:32.561 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3236806 00:28:32.561 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:32.561 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:32.561 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3236806' 00:28:32.561 killing process with pid 3236806 00:28:32.561 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 3236806 00:28:32.561 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 3236806 00:28:33.936 16:36:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:28:33.936 16:36:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:28:33.936 16:36:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:28:33.936 16:36:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:33.936 
16:36:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-save 00:28:33.936 16:36:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:28:33.936 16:36:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-restore 00:28:33.936 16:36:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:33.936 16:36:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:33.936 16:36:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:33.936 16:36:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:33.936 16:36:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:36.470 16:36:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:36.470 16:36:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:28:36.470 16:36:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:36.470 16:36:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:36.727 16:36:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:39.264 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@472 -- # prepare_net_devs 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@434 -- # local -g is_hw=no 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # remove_spdk_ns 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:44.532 16:36:44 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:44.532 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:44.532 Found 0000:0a:00.1 (0x8086 - 
0x159b) 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:44.532 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:44.532 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # is_hw=yes 00:28:44.532 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:44.533 16:36:44 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:44.533 16:36:44 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:44.533 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:44.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms 00:28:44.533 00:28:44.533 --- 10.0.0.2 ping statistics --- 00:28:44.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:44.533 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:44.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:44.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:28:44.533 00:28:44.533 --- 10.0.0.1 ping statistics --- 00:28:44.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:44.533 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # return 0 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:44.533 net.core.busy_poll = 1 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:28:44.533 net.core.busy_read = 1 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # nvmfpid=3239839 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # waitforlisten 
3239839 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 3239839 ']' 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:44.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:44.533 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:44.533 [2024-09-29 16:36:44.887165] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:28:44.533 [2024-09-29 16:36:44.887312] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:44.533 [2024-09-29 16:36:45.024307] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:44.791 [2024-09-29 16:36:45.279318] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:44.791 [2024-09-29 16:36:45.279392] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:44.791 [2024-09-29 16:36:45.279427] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:44.791 [2024-09-29 16:36:45.279452] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:44.791 [2024-09-29 16:36:45.279479] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:44.791 [2024-09-29 16:36:45.279610] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:44.791 [2024-09-29 16:36:45.279682] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:28:44.791 [2024-09-29 16:36:45.279730] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:28:44.791 [2024-09-29 16:36:45.279731] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:45.357 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:45.357 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:28:45.357 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:28:45.357 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:45.357 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:45.357 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:45.357 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:28:45.357 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:45.357 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:45.357 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.357 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:45.357 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:28:45.357 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:45.357 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:45.357 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.357 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:45.357 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.357 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:45.357 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.357 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:45.924 16:36:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.924 16:36:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:45.924 16:36:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.924 16:36:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:45.924 [2024-09-29 16:36:46.265856] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:45.924 16:36:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.924 16:36:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:45.924 16:36:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.924 16:36:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:45.924 Malloc1 00:28:45.924 16:36:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.924 16:36:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:45.924 16:36:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.924 16:36:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:45.924 16:36:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.924 16:36:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:45.924 16:36:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.924 16:36:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:45.924 16:36:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.924 16:36:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:45.924 16:36:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.924 16:36:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:45.924 [2024-09-29 16:36:46.372467] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:45.924 16:36:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.924 16:36:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3240008 
00:28:45.924 16:36:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:28:45.924 16:36:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:47.823 16:36:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:28:47.823 16:36:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.823 16:36:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:48.081 16:36:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.081 16:36:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:28:48.081 "tick_rate": 2700000000, 00:28:48.081 "poll_groups": [ 00:28:48.081 { 00:28:48.081 "name": "nvmf_tgt_poll_group_000", 00:28:48.081 "admin_qpairs": 1, 00:28:48.081 "io_qpairs": 1, 00:28:48.081 "current_admin_qpairs": 1, 00:28:48.081 "current_io_qpairs": 1, 00:28:48.081 "pending_bdev_io": 0, 00:28:48.081 "completed_nvme_io": 17365, 00:28:48.081 "transports": [ 00:28:48.081 { 00:28:48.081 "trtype": "TCP" 00:28:48.081 } 00:28:48.081 ] 00:28:48.081 }, 00:28:48.081 { 00:28:48.081 "name": "nvmf_tgt_poll_group_001", 00:28:48.081 "admin_qpairs": 0, 00:28:48.081 "io_qpairs": 3, 00:28:48.081 "current_admin_qpairs": 0, 00:28:48.081 "current_io_qpairs": 3, 00:28:48.081 "pending_bdev_io": 0, 00:28:48.081 "completed_nvme_io": 19666, 00:28:48.081 "transports": [ 00:28:48.081 { 00:28:48.081 "trtype": "TCP" 00:28:48.081 } 00:28:48.081 ] 00:28:48.081 }, 00:28:48.081 { 00:28:48.081 "name": "nvmf_tgt_poll_group_002", 00:28:48.081 "admin_qpairs": 0, 00:28:48.081 "io_qpairs": 0, 00:28:48.081 "current_admin_qpairs": 0, 
00:28:48.081 "current_io_qpairs": 0, 00:28:48.081 "pending_bdev_io": 0, 00:28:48.081 "completed_nvme_io": 0, 00:28:48.081 "transports": [ 00:28:48.081 { 00:28:48.081 "trtype": "TCP" 00:28:48.081 } 00:28:48.081 ] 00:28:48.081 }, 00:28:48.081 { 00:28:48.081 "name": "nvmf_tgt_poll_group_003", 00:28:48.081 "admin_qpairs": 0, 00:28:48.081 "io_qpairs": 0, 00:28:48.081 "current_admin_qpairs": 0, 00:28:48.081 "current_io_qpairs": 0, 00:28:48.081 "pending_bdev_io": 0, 00:28:48.081 "completed_nvme_io": 0, 00:28:48.081 "transports": [ 00:28:48.081 { 00:28:48.081 "trtype": "TCP" 00:28:48.081 } 00:28:48.081 ] 00:28:48.081 } 00:28:48.081 ] 00:28:48.081 }' 00:28:48.081 16:36:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:48.081 16:36:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:28:48.081 16:36:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:28:48.082 16:36:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:28:48.082 16:36:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3240008 00:28:56.252 Initializing NVMe Controllers 00:28:56.252 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:56.252 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:56.252 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:56.252 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:56.252 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:56.252 Initialization complete. Launching workers. 
00:28:56.252 ======================================================== 00:28:56.252 Latency(us) 00:28:56.252 Device Information : IOPS MiB/s Average min max 00:28:56.252 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9390.80 36.68 6817.27 3274.69 9342.31 00:28:56.252 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 3689.80 14.41 17353.18 2404.59 67612.50 00:28:56.252 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 3310.00 12.93 19342.40 3138.06 70266.40 00:28:56.252 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 3729.00 14.57 17218.60 2421.48 67602.38 00:28:56.252 ======================================================== 00:28:56.252 Total : 20119.60 78.59 12737.87 2404.59 70266.40 00:28:56.252 00:28:56.252 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:28:56.252 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # nvmfcleanup 00:28:56.252 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:56.252 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:56.253 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:56.253 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:56.253 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:56.253 rmmod nvme_tcp 00:28:56.253 rmmod nvme_fabrics 00:28:56.253 rmmod nvme_keyring 00:28:56.253 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:56.253 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:56.253 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:56.253 16:36:56 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@513 -- # '[' -n 3239839 ']' 00:28:56.253 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # killprocess 3239839 00:28:56.253 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 3239839 ']' 00:28:56.253 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 3239839 00:28:56.253 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:28:56.253 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:56.253 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3239839 00:28:56.253 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:56.253 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:56.253 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3239839' 00:28:56.253 killing process with pid 3239839 00:28:56.253 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 3239839 00:28:56.253 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 3239839 00:28:58.150 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:28:58.150 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:28:58.150 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:28:58.150 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:58.150 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-save 00:28:58.150 
16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:28:58.150 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-restore 00:28:58.150 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:58.150 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:58.150 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.150 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:58.150 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:00.050 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:00.050 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:29:00.050 00:29:00.050 real 0m49.821s 00:29:00.050 user 2m52.592s 00:29:00.050 sys 0m10.459s 00:29:00.050 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:00.050 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:00.050 ************************************ 00:29:00.050 END TEST nvmf_perf_adq 00:29:00.050 ************************************ 00:29:00.050 16:37:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:00.050 16:37:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:00.050 16:37:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:00.050 16:37:00 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:29:00.050 ************************************ 00:29:00.050 START TEST nvmf_shutdown 00:29:00.050 ************************************ 00:29:00.050 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:00.050 * Looking for test storage... 00:29:00.050 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:00.050 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:00.050 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:29:00.050 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:00.050 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:00.050 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:00.050 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:00.050 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:00.050 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:00.050 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:00.050 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:00.050 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:00.050 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:00.050 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:00.050 16:37:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:00.050 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:00.050 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:00.050 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:29:00.050 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:00.050 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:00.050 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:00.050 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:00.050 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:00.050 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:00.050 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:00.050 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:00.050 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:00.050 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:00.050 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:00.050 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:00.050 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:00.050 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:00.050 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:29:00.050 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:00.050 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:00.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.050 --rc genhtml_branch_coverage=1 00:29:00.050 --rc genhtml_function_coverage=1 00:29:00.050 --rc genhtml_legend=1 00:29:00.050 --rc geninfo_all_blocks=1 00:29:00.050 --rc geninfo_unexecuted_blocks=1 00:29:00.050 00:29:00.050 ' 00:29:00.050 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:00.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.050 --rc genhtml_branch_coverage=1 00:29:00.050 --rc genhtml_function_coverage=1 00:29:00.050 --rc genhtml_legend=1 00:29:00.050 --rc geninfo_all_blocks=1 00:29:00.050 --rc geninfo_unexecuted_blocks=1 00:29:00.050 00:29:00.050 ' 00:29:00.050 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:00.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.050 --rc genhtml_branch_coverage=1 00:29:00.050 --rc genhtml_function_coverage=1 00:29:00.050 --rc genhtml_legend=1 00:29:00.050 --rc geninfo_all_blocks=1 00:29:00.050 --rc geninfo_unexecuted_blocks=1 00:29:00.050 00:29:00.050 ' 00:29:00.050 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:00.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.050 --rc genhtml_branch_coverage=1 00:29:00.050 --rc genhtml_function_coverage=1 00:29:00.050 --rc genhtml_legend=1 00:29:00.050 --rc geninfo_all_blocks=1 00:29:00.050 --rc geninfo_unexecuted_blocks=1 00:29:00.050 00:29:00.050 ' 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:00.051 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:00.051 ************************************ 00:29:00.051 START TEST nvmf_shutdown_tc1 00:29:00.051 ************************************ 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:00.051 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:01.949 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:01.949 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:01.949 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:01.949 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:29:01.950 16:37:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:01.950 16:37:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:01.950 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:01.950 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:01.950 16:37:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:01.950 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:01.950 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # is_hw=yes 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:01.950 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:02.209 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:02.209 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:02.209 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:02.209 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:02.209 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:02.209 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:02.210 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:02.210 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:02.210 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:02.210 16:37:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:29:02.210 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:29:02.210 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms
00:29:02.210
00:29:02.210 --- 10.0.0.2 ping statistics ---
00:29:02.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:02.210 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms
00:29:02.210 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:29:02.210 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:02.210 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms
00:29:02.210
00:29:02.210 --- 10.0.0.1 ping statistics ---
00:29:02.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:02.210 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms
00:29:02.210 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:02.210 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # return 0
00:29:02.210 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:29:02.210 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:02.210 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:29:02.210 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:29:02.210 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:29:02.210 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:29:02.210 16:37:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:02.210 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:02.210 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:02.210 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:02.210 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:02.210 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # nvmfpid=3243424 00:29:02.210 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:02.210 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # waitforlisten 3243424 00:29:02.210 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 3243424 ']' 00:29:02.210 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:02.210 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:02.210 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:02.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:02.210 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:02.210 16:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:02.210 [2024-09-29 16:37:02.750118] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:29:02.210 [2024-09-29 16:37:02.750264] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:02.468 [2024-09-29 16:37:02.893178] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:02.726 [2024-09-29 16:37:03.158375] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:02.726 [2024-09-29 16:37:03.158445] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:02.726 [2024-09-29 16:37:03.158479] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:02.726 [2024-09-29 16:37:03.158503] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:02.726 [2024-09-29 16:37:03.158523] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:02.726 [2024-09-29 16:37:03.158667] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:02.726 [2024-09-29 16:37:03.158774] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:29:02.726 [2024-09-29 16:37:03.158812] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:02.726 [2024-09-29 16:37:03.158817] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:29:03.291 16:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:03.291 16:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:29:03.291 16:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:03.291 16:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:03.292 16:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:03.292 16:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:03.292 16:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:03.292 16:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.292 16:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:03.292 [2024-09-29 16:37:03.729314] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:03.292 16:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.292 16:37:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:03.292 16:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:03.292 16:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:03.292 16:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:03.292 16:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:03.292 16:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:03.292 16:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:03.292 16:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:03.292 16:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:03.292 16:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:03.292 16:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:03.292 16:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:03.292 16:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:03.292 16:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:03.292 16:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:29:03.292 16:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:03.292 16:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:03.292 16:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:03.292 16:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:03.292 16:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:03.292 16:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:03.292 16:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:03.292 16:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:03.292 16:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:03.292 16:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:03.292 16:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:03.292 16:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.292 16:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:03.292 Malloc1 00:29:03.550 [2024-09-29 16:37:03.858039] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:03.550 Malloc2 00:29:03.550 Malloc3 00:29:03.808 Malloc4 00:29:03.808 Malloc5 00:29:03.808 Malloc6 00:29:04.066 Malloc7 00:29:04.066 Malloc8 00:29:04.324 Malloc9 
00:29:04.324 Malloc10 00:29:04.324 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.324 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:04.324 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:04.324 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:04.324 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3243732 00:29:04.324 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3243732 /var/tmp/bdevperf.sock 00:29:04.324 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 3243732 ']' 00:29:04.324 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:29:04.324 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:04.324 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:04.324 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:04.324 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # config=() 00:29:04.324 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:29:04.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:04.324 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # local subsystem config 00:29:04.324 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:04.325 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:04.325 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:04.325 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:04.325 { 00:29:04.325 "params": { 00:29:04.325 "name": "Nvme$subsystem", 00:29:04.325 "trtype": "$TEST_TRANSPORT", 00:29:04.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.325 "adrfam": "ipv4", 00:29:04.325 "trsvcid": "$NVMF_PORT", 00:29:04.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.325 "hdgst": ${hdgst:-false}, 00:29:04.325 "ddgst": ${ddgst:-false} 00:29:04.325 }, 00:29:04.325 "method": "bdev_nvme_attach_controller" 00:29:04.325 } 00:29:04.325 EOF 00:29:04.325 )") 00:29:04.325 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:04.325 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:04.325 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:04.325 { 00:29:04.325 "params": { 00:29:04.325 "name": "Nvme$subsystem", 00:29:04.325 "trtype": "$TEST_TRANSPORT", 00:29:04.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.325 "adrfam": "ipv4", 00:29:04.325 "trsvcid": "$NVMF_PORT", 00:29:04.325 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.325 "hdgst": ${hdgst:-false}, 00:29:04.325 "ddgst": ${ddgst:-false} 00:29:04.325 }, 00:29:04.325 "method": "bdev_nvme_attach_controller" 00:29:04.325 } 00:29:04.325 EOF 00:29:04.325 )") 00:29:04.325 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:04.325 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:04.325 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:04.325 { 00:29:04.325 "params": { 00:29:04.325 "name": "Nvme$subsystem", 00:29:04.325 "trtype": "$TEST_TRANSPORT", 00:29:04.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.325 "adrfam": "ipv4", 00:29:04.325 "trsvcid": "$NVMF_PORT", 00:29:04.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.325 "hdgst": ${hdgst:-false}, 00:29:04.325 "ddgst": ${ddgst:-false} 00:29:04.325 }, 00:29:04.325 "method": "bdev_nvme_attach_controller" 00:29:04.325 } 00:29:04.325 EOF 00:29:04.325 )") 00:29:04.325 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:04.325 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:04.325 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:04.325 { 00:29:04.325 "params": { 00:29:04.325 "name": "Nvme$subsystem", 00:29:04.325 "trtype": "$TEST_TRANSPORT", 00:29:04.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.325 "adrfam": "ipv4", 00:29:04.325 "trsvcid": "$NVMF_PORT", 00:29:04.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.325 "hdgst": 
${hdgst:-false}, 00:29:04.325 "ddgst": ${ddgst:-false} 00:29:04.325 }, 00:29:04.325 "method": "bdev_nvme_attach_controller" 00:29:04.325 } 00:29:04.325 EOF 00:29:04.325 )") 00:29:04.325 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:04.325 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:04.325 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:04.325 { 00:29:04.325 "params": { 00:29:04.325 "name": "Nvme$subsystem", 00:29:04.325 "trtype": "$TEST_TRANSPORT", 00:29:04.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.325 "adrfam": "ipv4", 00:29:04.325 "trsvcid": "$NVMF_PORT", 00:29:04.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.325 "hdgst": ${hdgst:-false}, 00:29:04.325 "ddgst": ${ddgst:-false} 00:29:04.325 }, 00:29:04.325 "method": "bdev_nvme_attach_controller" 00:29:04.325 } 00:29:04.325 EOF 00:29:04.325 )") 00:29:04.325 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:04.325 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:04.325 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:04.325 { 00:29:04.325 "params": { 00:29:04.325 "name": "Nvme$subsystem", 00:29:04.325 "trtype": "$TEST_TRANSPORT", 00:29:04.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.325 "adrfam": "ipv4", 00:29:04.325 "trsvcid": "$NVMF_PORT", 00:29:04.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.325 "hdgst": ${hdgst:-false}, 00:29:04.325 "ddgst": ${ddgst:-false} 00:29:04.325 }, 00:29:04.325 "method": "bdev_nvme_attach_controller" 
00:29:04.325 } 00:29:04.325 EOF 00:29:04.325 )") 00:29:04.325 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:04.325 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:04.325 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:04.325 { 00:29:04.325 "params": { 00:29:04.325 "name": "Nvme$subsystem", 00:29:04.325 "trtype": "$TEST_TRANSPORT", 00:29:04.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.325 "adrfam": "ipv4", 00:29:04.325 "trsvcid": "$NVMF_PORT", 00:29:04.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.325 "hdgst": ${hdgst:-false}, 00:29:04.325 "ddgst": ${ddgst:-false} 00:29:04.325 }, 00:29:04.325 "method": "bdev_nvme_attach_controller" 00:29:04.325 } 00:29:04.325 EOF 00:29:04.325 )") 00:29:04.325 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:04.325 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:04.325 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:04.325 { 00:29:04.325 "params": { 00:29:04.325 "name": "Nvme$subsystem", 00:29:04.325 "trtype": "$TEST_TRANSPORT", 00:29:04.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.325 "adrfam": "ipv4", 00:29:04.325 "trsvcid": "$NVMF_PORT", 00:29:04.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.325 "hdgst": ${hdgst:-false}, 00:29:04.325 "ddgst": ${ddgst:-false} 00:29:04.325 }, 00:29:04.325 "method": "bdev_nvme_attach_controller" 00:29:04.325 } 00:29:04.325 EOF 00:29:04.325 )") 00:29:04.325 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@578 -- # cat 00:29:04.325 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:04.325 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:04.325 { 00:29:04.325 "params": { 00:29:04.325 "name": "Nvme$subsystem", 00:29:04.325 "trtype": "$TEST_TRANSPORT", 00:29:04.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.325 "adrfam": "ipv4", 00:29:04.325 "trsvcid": "$NVMF_PORT", 00:29:04.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.325 "hdgst": ${hdgst:-false}, 00:29:04.325 "ddgst": ${ddgst:-false} 00:29:04.325 }, 00:29:04.325 "method": "bdev_nvme_attach_controller" 00:29:04.325 } 00:29:04.325 EOF 00:29:04.325 )") 00:29:04.325 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:04.325 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:04.325 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:04.325 { 00:29:04.325 "params": { 00:29:04.325 "name": "Nvme$subsystem", 00:29:04.325 "trtype": "$TEST_TRANSPORT", 00:29:04.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.325 "adrfam": "ipv4", 00:29:04.325 "trsvcid": "$NVMF_PORT", 00:29:04.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.325 "hdgst": ${hdgst:-false}, 00:29:04.325 "ddgst": ${ddgst:-false} 00:29:04.325 }, 00:29:04.325 "method": "bdev_nvme_attach_controller" 00:29:04.325 } 00:29:04.325 EOF 00:29:04.325 )") 00:29:04.325 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:04.325 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@580 -- # jq . 00:29:04.325 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@581 -- # IFS=, 00:29:04.325 16:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:29:04.325 "params": { 00:29:04.325 "name": "Nvme1", 00:29:04.325 "trtype": "tcp", 00:29:04.325 "traddr": "10.0.0.2", 00:29:04.325 "adrfam": "ipv4", 00:29:04.325 "trsvcid": "4420", 00:29:04.325 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:04.325 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:04.325 "hdgst": false, 00:29:04.325 "ddgst": false 00:29:04.325 }, 00:29:04.325 "method": "bdev_nvme_attach_controller" 00:29:04.325 },{ 00:29:04.325 "params": { 00:29:04.325 "name": "Nvme2", 00:29:04.325 "trtype": "tcp", 00:29:04.326 "traddr": "10.0.0.2", 00:29:04.326 "adrfam": "ipv4", 00:29:04.326 "trsvcid": "4420", 00:29:04.326 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:04.326 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:04.326 "hdgst": false, 00:29:04.326 "ddgst": false 00:29:04.326 }, 00:29:04.326 "method": "bdev_nvme_attach_controller" 00:29:04.326 },{ 00:29:04.326 "params": { 00:29:04.326 "name": "Nvme3", 00:29:04.326 "trtype": "tcp", 00:29:04.326 "traddr": "10.0.0.2", 00:29:04.326 "adrfam": "ipv4", 00:29:04.326 "trsvcid": "4420", 00:29:04.326 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:04.326 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:04.326 "hdgst": false, 00:29:04.326 "ddgst": false 00:29:04.326 }, 00:29:04.326 "method": "bdev_nvme_attach_controller" 00:29:04.326 },{ 00:29:04.326 "params": { 00:29:04.326 "name": "Nvme4", 00:29:04.326 "trtype": "tcp", 00:29:04.326 "traddr": "10.0.0.2", 00:29:04.326 "adrfam": "ipv4", 00:29:04.326 "trsvcid": "4420", 00:29:04.326 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:04.326 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:04.326 "hdgst": false, 00:29:04.326 "ddgst": false 00:29:04.326 }, 00:29:04.326 "method": "bdev_nvme_attach_controller" 00:29:04.326 },{ 
00:29:04.326 "params": { 00:29:04.326 "name": "Nvme5", 00:29:04.326 "trtype": "tcp", 00:29:04.326 "traddr": "10.0.0.2", 00:29:04.326 "adrfam": "ipv4", 00:29:04.326 "trsvcid": "4420", 00:29:04.326 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:04.326 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:04.326 "hdgst": false, 00:29:04.326 "ddgst": false 00:29:04.326 }, 00:29:04.326 "method": "bdev_nvme_attach_controller" 00:29:04.326 },{ 00:29:04.326 "params": { 00:29:04.326 "name": "Nvme6", 00:29:04.326 "trtype": "tcp", 00:29:04.326 "traddr": "10.0.0.2", 00:29:04.326 "adrfam": "ipv4", 00:29:04.326 "trsvcid": "4420", 00:29:04.326 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:04.326 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:04.326 "hdgst": false, 00:29:04.326 "ddgst": false 00:29:04.326 }, 00:29:04.326 "method": "bdev_nvme_attach_controller" 00:29:04.326 },{ 00:29:04.326 "params": { 00:29:04.326 "name": "Nvme7", 00:29:04.326 "trtype": "tcp", 00:29:04.326 "traddr": "10.0.0.2", 00:29:04.326 "adrfam": "ipv4", 00:29:04.326 "trsvcid": "4420", 00:29:04.326 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:04.326 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:04.326 "hdgst": false, 00:29:04.326 "ddgst": false 00:29:04.326 }, 00:29:04.326 "method": "bdev_nvme_attach_controller" 00:29:04.326 },{ 00:29:04.326 "params": { 00:29:04.326 "name": "Nvme8", 00:29:04.326 "trtype": "tcp", 00:29:04.326 "traddr": "10.0.0.2", 00:29:04.326 "adrfam": "ipv4", 00:29:04.326 "trsvcid": "4420", 00:29:04.326 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:04.326 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:04.326 "hdgst": false, 00:29:04.326 "ddgst": false 00:29:04.326 }, 00:29:04.326 "method": "bdev_nvme_attach_controller" 00:29:04.326 },{ 00:29:04.326 "params": { 00:29:04.326 "name": "Nvme9", 00:29:04.326 "trtype": "tcp", 00:29:04.326 "traddr": "10.0.0.2", 00:29:04.326 "adrfam": "ipv4", 00:29:04.326 "trsvcid": "4420", 00:29:04.326 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:04.326 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:29:04.326 "hdgst": false, 00:29:04.326 "ddgst": false 00:29:04.326 }, 00:29:04.326 "method": "bdev_nvme_attach_controller" 00:29:04.326 },{ 00:29:04.326 "params": { 00:29:04.326 "name": "Nvme10", 00:29:04.326 "trtype": "tcp", 00:29:04.326 "traddr": "10.0.0.2", 00:29:04.326 "adrfam": "ipv4", 00:29:04.326 "trsvcid": "4420", 00:29:04.326 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:04.326 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:04.326 "hdgst": false, 00:29:04.326 "ddgst": false 00:29:04.326 }, 00:29:04.326 "method": "bdev_nvme_attach_controller" 00:29:04.326 }' 00:29:04.584 [2024-09-29 16:37:04.894648] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:29:04.584 [2024-09-29 16:37:04.894821] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:29:04.584 [2024-09-29 16:37:05.029130] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.842 [2024-09-29 16:37:05.269481] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:07.371 16:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:07.371 16:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:29:07.371 16:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:07.371 16:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.371 16:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:07.371 16:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.371 16:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3243732 00:29:07.371 16:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:29:07.371 16:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:29:08.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3243732 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:29:08.304 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3243424 00:29:08.304 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:08.304 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:08.304 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # config=() 00:29:08.304 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # local subsystem config 00:29:08.304 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:08.304 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:08.304 { 00:29:08.304 "params": { 00:29:08.304 "name": "Nvme$subsystem", 00:29:08.304 "trtype": "$TEST_TRANSPORT", 00:29:08.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.304 "adrfam": "ipv4", 00:29:08.304 "trsvcid": "$NVMF_PORT", 00:29:08.304 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.304 "hdgst": ${hdgst:-false}, 00:29:08.304 "ddgst": ${ddgst:-false} 00:29:08.304 }, 00:29:08.304 "method": "bdev_nvme_attach_controller" 00:29:08.304 } 00:29:08.304 EOF 00:29:08.304 )") 00:29:08.304 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:08.304 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:08.304 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:08.304 { 00:29:08.304 "params": { 00:29:08.304 "name": "Nvme$subsystem", 00:29:08.304 "trtype": "$TEST_TRANSPORT", 00:29:08.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.304 "adrfam": "ipv4", 00:29:08.304 "trsvcid": "$NVMF_PORT", 00:29:08.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.304 "hdgst": ${hdgst:-false}, 00:29:08.304 "ddgst": ${ddgst:-false} 00:29:08.304 }, 00:29:08.304 "method": "bdev_nvme_attach_controller" 00:29:08.305 } 00:29:08.305 EOF 00:29:08.305 )") 00:29:08.305 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:08.305 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:08.305 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:08.305 { 00:29:08.305 "params": { 00:29:08.305 "name": "Nvme$subsystem", 00:29:08.305 "trtype": "$TEST_TRANSPORT", 00:29:08.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.305 "adrfam": "ipv4", 00:29:08.305 "trsvcid": "$NVMF_PORT", 00:29:08.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.305 "hdgst": 
${hdgst:-false}, 00:29:08.305 "ddgst": ${ddgst:-false} 00:29:08.305 }, 00:29:08.305 "method": "bdev_nvme_attach_controller" 00:29:08.305 } 00:29:08.305 EOF 00:29:08.305 )") 00:29:08.305 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:08.305 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:08.305 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:08.305 { 00:29:08.305 "params": { 00:29:08.305 "name": "Nvme$subsystem", 00:29:08.305 "trtype": "$TEST_TRANSPORT", 00:29:08.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.305 "adrfam": "ipv4", 00:29:08.305 "trsvcid": "$NVMF_PORT", 00:29:08.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.305 "hdgst": ${hdgst:-false}, 00:29:08.305 "ddgst": ${ddgst:-false} 00:29:08.305 }, 00:29:08.305 "method": "bdev_nvme_attach_controller" 00:29:08.305 } 00:29:08.305 EOF 00:29:08.305 )") 00:29:08.305 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:08.305 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:08.305 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:08.305 { 00:29:08.305 "params": { 00:29:08.305 "name": "Nvme$subsystem", 00:29:08.305 "trtype": "$TEST_TRANSPORT", 00:29:08.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.305 "adrfam": "ipv4", 00:29:08.305 "trsvcid": "$NVMF_PORT", 00:29:08.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.305 "hdgst": ${hdgst:-false}, 00:29:08.305 "ddgst": ${ddgst:-false} 00:29:08.305 }, 00:29:08.305 "method": "bdev_nvme_attach_controller" 
00:29:08.305 } 00:29:08.305 EOF 00:29:08.305 )") 00:29:08.305 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:08.305 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:08.305 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:08.305 { 00:29:08.305 "params": { 00:29:08.305 "name": "Nvme$subsystem", 00:29:08.305 "trtype": "$TEST_TRANSPORT", 00:29:08.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.305 "adrfam": "ipv4", 00:29:08.305 "trsvcid": "$NVMF_PORT", 00:29:08.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.305 "hdgst": ${hdgst:-false}, 00:29:08.305 "ddgst": ${ddgst:-false} 00:29:08.305 }, 00:29:08.305 "method": "bdev_nvme_attach_controller" 00:29:08.305 } 00:29:08.305 EOF 00:29:08.305 )") 00:29:08.305 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:08.305 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:08.305 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:08.305 { 00:29:08.305 "params": { 00:29:08.305 "name": "Nvme$subsystem", 00:29:08.305 "trtype": "$TEST_TRANSPORT", 00:29:08.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.305 "adrfam": "ipv4", 00:29:08.305 "trsvcid": "$NVMF_PORT", 00:29:08.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.305 "hdgst": ${hdgst:-false}, 00:29:08.305 "ddgst": ${ddgst:-false} 00:29:08.305 }, 00:29:08.305 "method": "bdev_nvme_attach_controller" 00:29:08.305 } 00:29:08.305 EOF 00:29:08.305 )") 00:29:08.305 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@578 -- # cat 00:29:08.305 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:08.305 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:08.305 { 00:29:08.305 "params": { 00:29:08.305 "name": "Nvme$subsystem", 00:29:08.305 "trtype": "$TEST_TRANSPORT", 00:29:08.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.305 "adrfam": "ipv4", 00:29:08.305 "trsvcid": "$NVMF_PORT", 00:29:08.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.305 "hdgst": ${hdgst:-false}, 00:29:08.305 "ddgst": ${ddgst:-false} 00:29:08.305 }, 00:29:08.305 "method": "bdev_nvme_attach_controller" 00:29:08.305 } 00:29:08.305 EOF 00:29:08.305 )") 00:29:08.305 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:08.305 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:08.305 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:08.305 { 00:29:08.305 "params": { 00:29:08.305 "name": "Nvme$subsystem", 00:29:08.305 "trtype": "$TEST_TRANSPORT", 00:29:08.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.305 "adrfam": "ipv4", 00:29:08.305 "trsvcid": "$NVMF_PORT", 00:29:08.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.305 "hdgst": ${hdgst:-false}, 00:29:08.305 "ddgst": ${ddgst:-false} 00:29:08.305 }, 00:29:08.305 "method": "bdev_nvme_attach_controller" 00:29:08.305 } 00:29:08.305 EOF 00:29:08.305 )") 00:29:08.305 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:08.305 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:08.305 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:08.305 { 00:29:08.305 "params": { 00:29:08.305 "name": "Nvme$subsystem", 00:29:08.305 "trtype": "$TEST_TRANSPORT", 00:29:08.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.305 "adrfam": "ipv4", 00:29:08.305 "trsvcid": "$NVMF_PORT", 00:29:08.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.305 "hdgst": ${hdgst:-false}, 00:29:08.305 "ddgst": ${ddgst:-false} 00:29:08.305 }, 00:29:08.305 "method": "bdev_nvme_attach_controller" 00:29:08.305 } 00:29:08.305 EOF 00:29:08.305 )") 00:29:08.305 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:08.305 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # jq . 00:29:08.305 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@581 -- # IFS=, 00:29:08.305 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:29:08.305 "params": { 00:29:08.305 "name": "Nvme1", 00:29:08.305 "trtype": "tcp", 00:29:08.305 "traddr": "10.0.0.2", 00:29:08.305 "adrfam": "ipv4", 00:29:08.305 "trsvcid": "4420", 00:29:08.305 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:08.305 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:08.305 "hdgst": false, 00:29:08.305 "ddgst": false 00:29:08.305 }, 00:29:08.305 "method": "bdev_nvme_attach_controller" 00:29:08.305 },{ 00:29:08.305 "params": { 00:29:08.305 "name": "Nvme2", 00:29:08.305 "trtype": "tcp", 00:29:08.305 "traddr": "10.0.0.2", 00:29:08.305 "adrfam": "ipv4", 00:29:08.305 "trsvcid": "4420", 00:29:08.305 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:08.305 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:08.305 "hdgst": false, 00:29:08.305 "ddgst": false 00:29:08.305 }, 
00:29:08.305 "method": "bdev_nvme_attach_controller" 00:29:08.305 },{ 00:29:08.305 "params": { 00:29:08.305 "name": "Nvme3", 00:29:08.305 "trtype": "tcp", 00:29:08.305 "traddr": "10.0.0.2", 00:29:08.305 "adrfam": "ipv4", 00:29:08.305 "trsvcid": "4420", 00:29:08.305 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:08.305 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:08.305 "hdgst": false, 00:29:08.305 "ddgst": false 00:29:08.305 }, 00:29:08.305 "method": "bdev_nvme_attach_controller" 00:29:08.305 },{ 00:29:08.305 "params": { 00:29:08.305 "name": "Nvme4", 00:29:08.305 "trtype": "tcp", 00:29:08.305 "traddr": "10.0.0.2", 00:29:08.305 "adrfam": "ipv4", 00:29:08.305 "trsvcid": "4420", 00:29:08.305 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:08.305 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:08.305 "hdgst": false, 00:29:08.305 "ddgst": false 00:29:08.305 }, 00:29:08.305 "method": "bdev_nvme_attach_controller" 00:29:08.305 },{ 00:29:08.305 "params": { 00:29:08.305 "name": "Nvme5", 00:29:08.305 "trtype": "tcp", 00:29:08.305 "traddr": "10.0.0.2", 00:29:08.305 "adrfam": "ipv4", 00:29:08.305 "trsvcid": "4420", 00:29:08.305 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:08.305 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:08.305 "hdgst": false, 00:29:08.305 "ddgst": false 00:29:08.306 }, 00:29:08.306 "method": "bdev_nvme_attach_controller" 00:29:08.306 },{ 00:29:08.306 "params": { 00:29:08.306 "name": "Nvme6", 00:29:08.306 "trtype": "tcp", 00:29:08.306 "traddr": "10.0.0.2", 00:29:08.306 "adrfam": "ipv4", 00:29:08.306 "trsvcid": "4420", 00:29:08.306 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:08.306 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:08.306 "hdgst": false, 00:29:08.306 "ddgst": false 00:29:08.306 }, 00:29:08.306 "method": "bdev_nvme_attach_controller" 00:29:08.306 },{ 00:29:08.306 "params": { 00:29:08.306 "name": "Nvme7", 00:29:08.306 "trtype": "tcp", 00:29:08.306 "traddr": "10.0.0.2", 00:29:08.306 "adrfam": "ipv4", 00:29:08.306 "trsvcid": "4420", 00:29:08.306 
"subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:08.306 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:08.306 "hdgst": false, 00:29:08.306 "ddgst": false 00:29:08.306 }, 00:29:08.306 "method": "bdev_nvme_attach_controller" 00:29:08.306 },{ 00:29:08.306 "params": { 00:29:08.306 "name": "Nvme8", 00:29:08.306 "trtype": "tcp", 00:29:08.306 "traddr": "10.0.0.2", 00:29:08.306 "adrfam": "ipv4", 00:29:08.306 "trsvcid": "4420", 00:29:08.306 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:08.306 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:08.306 "hdgst": false, 00:29:08.306 "ddgst": false 00:29:08.306 }, 00:29:08.306 "method": "bdev_nvme_attach_controller" 00:29:08.306 },{ 00:29:08.306 "params": { 00:29:08.306 "name": "Nvme9", 00:29:08.306 "trtype": "tcp", 00:29:08.306 "traddr": "10.0.0.2", 00:29:08.306 "adrfam": "ipv4", 00:29:08.306 "trsvcid": "4420", 00:29:08.306 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:08.306 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:08.306 "hdgst": false, 00:29:08.306 "ddgst": false 00:29:08.306 }, 00:29:08.306 "method": "bdev_nvme_attach_controller" 00:29:08.306 },{ 00:29:08.306 "params": { 00:29:08.306 "name": "Nvme10", 00:29:08.306 "trtype": "tcp", 00:29:08.306 "traddr": "10.0.0.2", 00:29:08.306 "adrfam": "ipv4", 00:29:08.306 "trsvcid": "4420", 00:29:08.306 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:08.306 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:08.306 "hdgst": false, 00:29:08.306 "ddgst": false 00:29:08.306 }, 00:29:08.306 "method": "bdev_nvme_attach_controller" 00:29:08.306 }' 00:29:08.306 [2024-09-29 16:37:08.736873] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:29:08.306 [2024-09-29 16:37:08.737040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3244167 ]
00:29:08.306 [2024-09-29 16:37:08.866166] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:08.568 [2024-09-29 16:37:09.103759] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:29:10.468 Running I/O for 1 seconds...
00:29:11.659 1298.00 IOPS, 81.12 MiB/s
00:29:11.659 Latency(us)
00:29:11.659 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:11.659 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:11.659 Verification LBA range: start 0x0 length 0x400
00:29:11.659 Nvme1n1 : 1.21 161.92 10.12 0.00 0.00 389991.46 3640.89 379040.81
00:29:11.659 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:11.659 Verification LBA range: start 0x0 length 0x400
00:29:11.659 Nvme2n1 : 1.14 172.87 10.80 0.00 0.00 340432.60 24758.04 306028.85
00:29:11.659 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:11.659 Verification LBA range: start 0x0 length 0x400
00:29:11.659 Nvme3n1 : 1.16 165.72 10.36 0.00 0.00 368790.44 23884.23 320009.86
00:29:11.659 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:11.659 Verification LBA range: start 0x0 length 0x400
00:29:11.659 Nvme4n1 : 1.23 208.91 13.06 0.00 0.00 288282.93 23495.87 340204.66
00:29:11.659 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:11.659 Verification LBA range: start 0x0 length 0x400
00:29:11.659 Nvme5n1 : 1.18 162.56 10.16 0.00 0.00 363034.80 23981.32 320009.86
00:29:11.659 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:11.659 Verification LBA range: start 0x0 length 0x400
00:29:11.659 Nvme6n1 : 1.17 163.59 10.22 0.00 0.00 353914.63 26020.22 327777.09
00:29:11.659 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:11.659 Verification LBA range: start 0x0 length 0x400
00:29:11.659 Nvme7n1 : 1.22 210.04 13.13 0.00 0.00 271735.47 22622.06 313796.08
00:29:11.660 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:11.660 Verification LBA range: start 0x0 length 0x400
00:29:11.660 Nvme8n1 : 1.24 206.93 12.93 0.00 0.00 271218.54 41360.50 323116.75
00:29:11.660 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:11.660 Verification LBA range: start 0x0 length 0x400
00:29:11.660 Nvme9n1 : 1.23 207.74 12.98 0.00 0.00 264936.49 22622.06 315349.52
00:29:11.660 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:11.660 Verification LBA range: start 0x0 length 0x400
00:29:11.660 Nvme10n1 : 1.20 160.50 10.03 0.00 0.00 335054.82 24855.13 343311.55
00:29:11.660 ===================================================================================================================
00:29:11.660 Total : 1820.79 113.80 0.00 0.00 318954.93 3640.89 379040.81
00:29:13.032 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
00:29:13.032 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:29:13.032 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:29:13.032 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:29:13.032 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini
00:29:13.032 16:37:13
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:13.032 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:29:13.032 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:13.032 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:29:13.032 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:13.032 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:13.032 rmmod nvme_tcp 00:29:13.032 rmmod nvme_fabrics 00:29:13.032 rmmod nvme_keyring 00:29:13.032 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:13.032 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:29:13.032 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:29:13.032 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@513 -- # '[' -n 3243424 ']' 00:29:13.032 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # killprocess 3243424 00:29:13.032 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 3243424 ']' 00:29:13.032 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 3243424 00:29:13.032 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:29:13.032 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:13.032 16:37:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3243424 00:29:13.032 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:13.032 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:13.032 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3243424' 00:29:13.032 killing process with pid 3243424 00:29:13.032 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 3243424 00:29:13.032 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 3243424 00:29:16.311 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:16.311 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:16.311 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:16.311 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:29:16.311 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@787 -- # iptables-save 00:29:16.311 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:16.311 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@787 -- # iptables-restore 00:29:16.311 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:16.311 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:29:16.311 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:16.311 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:16.311 16:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:17.687 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:17.687 00:29:17.687 real 0m17.722s 00:29:17.687 user 0m57.275s 00:29:17.687 sys 0m3.962s 00:29:17.687 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:17.687 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:17.687 ************************************ 00:29:17.687 END TEST nvmf_shutdown_tc1 00:29:17.687 ************************************ 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:17.946 ************************************ 00:29:17.946 START TEST nvmf_shutdown_tc2 00:29:17.946 ************************************ 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 
00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 
00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- 
# x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:17.946 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:17.946 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:17.946 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:17.947 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@407 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:17.947 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # is_hw=yes 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:17.947 16:37:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:17.947 16:37:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:17.947 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:17.947 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:29:17.947 00:29:17.947 --- 10.0.0.2 ping statistics --- 00:29:17.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:17.947 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:17.947 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:17.947 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:29:17.947 00:29:17.947 --- 10.0.0.1 ping statistics --- 00:29:17.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:17.947 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # return 0 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:17.947 
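The `nvmf_tcp_init` trace above boils down to one pattern: move the target NIC into a private network namespace, address both ends of the link, open TCP port 4420, and ping across. A minimal sketch of that sequence follows; the interface names, namespace name, and addresses are copied from this log, and the `run()` recorder is our addition so the sketch is safe to execute without root (drop it to run the commands for real).

```shell
#!/usr/bin/env bash
# Sketch of the namespace plumbing nvmf_tcp_init performs in the trace above.
# run() only records each command; replace it with direct execution (as root)
# to actually build the topology.
TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NETNS=cvl_0_0_ns_spdk
CMDS=""
run() { CMDS+="$*"$'\n'; }

run ip -4 addr flush "$TARGET_IF"                                     # start clean
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NETNS"                                             # target-side namespace
run ip link set "$TARGET_IF" netns "$NETNS"                           # move target NIC into it
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"                       # initiator IP (host side)
run ip netns exec "$NETNS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"   # target IP (inside netns)
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NETNS" ip link set "$TARGET_IF" up
run ip netns exec "$NETNS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
printf '%s' "$CMDS"
```

The two pings in the log (10.0.0.2 from the host, 10.0.0.1 from inside the namespace) then confirm the link works in both directions before `nvmf_tgt` is started under `ip netns exec`.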
16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # nvmfpid=3245448 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # waitforlisten 3245448 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3245448 ']' 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:17.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:17.947 16:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:18.205 [2024-09-29 16:37:18.565872] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:29:18.205 [2024-09-29 16:37:18.566043] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:18.205 [2024-09-29 16:37:18.708620] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:18.463 [2024-09-29 16:37:18.970623] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:18.463 [2024-09-29 16:37:18.970724] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:18.463 [2024-09-29 16:37:18.970751] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:18.463 [2024-09-29 16:37:18.970777] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:18.463 [2024-09-29 16:37:18.970797] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:18.463 [2024-09-29 16:37:18.970945] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:18.463 [2024-09-29 16:37:18.971055] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:29:18.463 [2024-09-29 16:37:18.971098] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:18.463 [2024-09-29 16:37:18.971106] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:29:19.027 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:19.027 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:29:19.027 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:19.027 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:19.027 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:19.027 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:19.027 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:19.027 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.027 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:19.027 [2024-09-29 16:37:19.543480] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:19.027 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.027 16:37:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:19.027 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:19.027 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:19.027 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:19.027 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:19.027 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.027 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:19.028 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.028 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:19.028 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.028 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:19.028 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.028 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:19.028 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.028 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:29:19.028 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.028 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:19.028 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.028 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:19.028 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.028 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:19.028 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.028 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:19.028 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.028 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:19.028 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:19.028 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.028 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:19.286 Malloc1 00:29:19.286 [2024-09-29 16:37:19.673092] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:19.286 Malloc2 00:29:19.286 Malloc3 00:29:19.543 Malloc4 00:29:19.543 Malloc5 00:29:19.801 Malloc6 00:29:19.801 Malloc7 00:29:20.059 Malloc8 00:29:20.059 Malloc9 
00:29:20.059 Malloc10 00:29:20.059 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.059 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:20.059 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:20.059 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:20.059 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3245673 00:29:20.059 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3245673 /var/tmp/bdevperf.sock 00:29:20.059 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3245673 ']' 00:29:20.059 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:20.059 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:20.059 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:20.059 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:20.059 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # config=() 00:29:20.059 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:29:20.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:20.059 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # local subsystem config 00:29:20.059 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:20.059 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:20.059 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:20.059 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:20.059 { 00:29:20.059 "params": { 00:29:20.059 "name": "Nvme$subsystem", 00:29:20.059 "trtype": "$TEST_TRANSPORT", 00:29:20.059 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:20.059 "adrfam": "ipv4", 00:29:20.059 "trsvcid": "$NVMF_PORT", 00:29:20.059 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:20.059 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:20.059 "hdgst": ${hdgst:-false}, 00:29:20.059 "ddgst": ${ddgst:-false} 00:29:20.059 }, 00:29:20.059 "method": "bdev_nvme_attach_controller" 00:29:20.059 } 00:29:20.059 EOF 00:29:20.059 )") 00:29:20.059 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:29:20.059 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:20.059 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:20.059 { 00:29:20.059 "params": { 00:29:20.059 "name": "Nvme$subsystem", 00:29:20.059 "trtype": "$TEST_TRANSPORT", 00:29:20.059 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:20.059 "adrfam": "ipv4", 00:29:20.059 "trsvcid": "$NVMF_PORT", 00:29:20.059 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:29:20.059 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:20.059 "hdgst": ${hdgst:-false}, 00:29:20.059 "ddgst": ${ddgst:-false} 00:29:20.059 }, 00:29:20.060 "method": "bdev_nvme_attach_controller" 00:29:20.060 } 00:29:20.060 EOF 00:29:20.060 )") 00:29:20.060 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:29:20.319 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:20.319 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:20.319 { 00:29:20.319 "params": { 00:29:20.319 "name": "Nvme$subsystem", 00:29:20.319 "trtype": "$TEST_TRANSPORT", 00:29:20.319 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:20.319 "adrfam": "ipv4", 00:29:20.319 "trsvcid": "$NVMF_PORT", 00:29:20.319 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:20.319 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:20.319 "hdgst": ${hdgst:-false}, 00:29:20.319 "ddgst": ${ddgst:-false} 00:29:20.319 }, 00:29:20.319 "method": "bdev_nvme_attach_controller" 00:29:20.319 } 00:29:20.319 EOF 00:29:20.319 )") 00:29:20.319 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:29:20.319 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:20.319 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:20.319 { 00:29:20.319 "params": { 00:29:20.319 "name": "Nvme$subsystem", 00:29:20.319 "trtype": "$TEST_TRANSPORT", 00:29:20.319 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:20.319 "adrfam": "ipv4", 00:29:20.319 "trsvcid": "$NVMF_PORT", 00:29:20.319 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:20.319 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:20.319 "hdgst": 
${hdgst:-false}, 00:29:20.319 "ddgst": ${ddgst:-false} 00:29:20.319 }, 00:29:20.319 "method": "bdev_nvme_attach_controller" 00:29:20.319 } 00:29:20.319 EOF 00:29:20.319 )") 00:29:20.319 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:29:20.319 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:20.319 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:20.319 { 00:29:20.319 "params": { 00:29:20.319 "name": "Nvme$subsystem", 00:29:20.319 "trtype": "$TEST_TRANSPORT", 00:29:20.319 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:20.319 "adrfam": "ipv4", 00:29:20.319 "trsvcid": "$NVMF_PORT", 00:29:20.319 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:20.319 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:20.319 "hdgst": ${hdgst:-false}, 00:29:20.319 "ddgst": ${ddgst:-false} 00:29:20.319 }, 00:29:20.319 "method": "bdev_nvme_attach_controller" 00:29:20.319 } 00:29:20.319 EOF 00:29:20.319 )") 00:29:20.319 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:29:20.319 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:20.319 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:20.319 { 00:29:20.319 "params": { 00:29:20.319 "name": "Nvme$subsystem", 00:29:20.319 "trtype": "$TEST_TRANSPORT", 00:29:20.319 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:20.319 "adrfam": "ipv4", 00:29:20.319 "trsvcid": "$NVMF_PORT", 00:29:20.319 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:20.319 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:20.319 "hdgst": ${hdgst:-false}, 00:29:20.319 "ddgst": ${ddgst:-false} 00:29:20.319 }, 00:29:20.319 "method": "bdev_nvme_attach_controller" 
00:29:20.319 } 00:29:20.319 EOF 00:29:20.319 )") 00:29:20.319 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:29:20.319 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:20.319 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:20.319 { 00:29:20.319 "params": { 00:29:20.319 "name": "Nvme$subsystem", 00:29:20.319 "trtype": "$TEST_TRANSPORT", 00:29:20.319 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:20.319 "adrfam": "ipv4", 00:29:20.319 "trsvcid": "$NVMF_PORT", 00:29:20.319 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:20.319 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:20.319 "hdgst": ${hdgst:-false}, 00:29:20.319 "ddgst": ${ddgst:-false} 00:29:20.319 }, 00:29:20.319 "method": "bdev_nvme_attach_controller" 00:29:20.319 } 00:29:20.319 EOF 00:29:20.319 )") 00:29:20.319 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:29:20.319 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:20.319 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:20.319 { 00:29:20.319 "params": { 00:29:20.319 "name": "Nvme$subsystem", 00:29:20.319 "trtype": "$TEST_TRANSPORT", 00:29:20.319 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:20.319 "adrfam": "ipv4", 00:29:20.319 "trsvcid": "$NVMF_PORT", 00:29:20.319 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:20.319 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:20.319 "hdgst": ${hdgst:-false}, 00:29:20.319 "ddgst": ${ddgst:-false} 00:29:20.319 }, 00:29:20.319 "method": "bdev_nvme_attach_controller" 00:29:20.319 } 00:29:20.319 EOF 00:29:20.319 )") 00:29:20.319 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@578 -- # cat 00:29:20.319 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:20.319 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:20.319 { 00:29:20.319 "params": { 00:29:20.319 "name": "Nvme$subsystem", 00:29:20.319 "trtype": "$TEST_TRANSPORT", 00:29:20.319 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:20.319 "adrfam": "ipv4", 00:29:20.319 "trsvcid": "$NVMF_PORT", 00:29:20.319 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:20.319 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:20.319 "hdgst": ${hdgst:-false}, 00:29:20.319 "ddgst": ${ddgst:-false} 00:29:20.319 }, 00:29:20.319 "method": "bdev_nvme_attach_controller" 00:29:20.319 } 00:29:20.319 EOF 00:29:20.319 )") 00:29:20.319 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:29:20.319 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:20.319 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:20.319 { 00:29:20.319 "params": { 00:29:20.319 "name": "Nvme$subsystem", 00:29:20.319 "trtype": "$TEST_TRANSPORT", 00:29:20.319 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:20.319 "adrfam": "ipv4", 00:29:20.319 "trsvcid": "$NVMF_PORT", 00:29:20.319 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:20.319 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:20.319 "hdgst": ${hdgst:-false}, 00:29:20.319 "ddgst": ${ddgst:-false} 00:29:20.319 }, 00:29:20.320 "method": "bdev_nvme_attach_controller" 00:29:20.320 } 00:29:20.320 EOF 00:29:20.320 )") 00:29:20.320 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:29:20.320 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@580 -- # jq . 00:29:20.320 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@581 -- # IFS=, 00:29:20.320 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:29:20.320 "params": { 00:29:20.320 "name": "Nvme1", 00:29:20.320 "trtype": "tcp", 00:29:20.320 "traddr": "10.0.0.2", 00:29:20.320 "adrfam": "ipv4", 00:29:20.320 "trsvcid": "4420", 00:29:20.320 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:20.320 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:20.320 "hdgst": false, 00:29:20.320 "ddgst": false 00:29:20.320 }, 00:29:20.320 "method": "bdev_nvme_attach_controller" 00:29:20.320 },{ 00:29:20.320 "params": { 00:29:20.320 "name": "Nvme2", 00:29:20.320 "trtype": "tcp", 00:29:20.320 "traddr": "10.0.0.2", 00:29:20.320 "adrfam": "ipv4", 00:29:20.320 "trsvcid": "4420", 00:29:20.320 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:20.320 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:20.320 "hdgst": false, 00:29:20.320 "ddgst": false 00:29:20.320 }, 00:29:20.320 "method": "bdev_nvme_attach_controller" 00:29:20.320 },{ 00:29:20.320 "params": { 00:29:20.320 "name": "Nvme3", 00:29:20.320 "trtype": "tcp", 00:29:20.320 "traddr": "10.0.0.2", 00:29:20.320 "adrfam": "ipv4", 00:29:20.320 "trsvcid": "4420", 00:29:20.320 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:20.320 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:20.320 "hdgst": false, 00:29:20.320 "ddgst": false 00:29:20.320 }, 00:29:20.320 "method": "bdev_nvme_attach_controller" 00:29:20.320 },{ 00:29:20.320 "params": { 00:29:20.320 "name": "Nvme4", 00:29:20.320 "trtype": "tcp", 00:29:20.320 "traddr": "10.0.0.2", 00:29:20.320 "adrfam": "ipv4", 00:29:20.320 "trsvcid": "4420", 00:29:20.320 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:20.320 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:20.320 "hdgst": false, 00:29:20.320 "ddgst": false 00:29:20.320 }, 00:29:20.320 "method": "bdev_nvme_attach_controller" 00:29:20.320 },{ 
00:29:20.320 "params": { 00:29:20.320 "name": "Nvme5", 00:29:20.320 "trtype": "tcp", 00:29:20.320 "traddr": "10.0.0.2", 00:29:20.320 "adrfam": "ipv4", 00:29:20.320 "trsvcid": "4420", 00:29:20.320 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:20.320 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:20.320 "hdgst": false, 00:29:20.320 "ddgst": false 00:29:20.320 }, 00:29:20.320 "method": "bdev_nvme_attach_controller" 00:29:20.320 },{ 00:29:20.320 "params": { 00:29:20.320 "name": "Nvme6", 00:29:20.320 "trtype": "tcp", 00:29:20.320 "traddr": "10.0.0.2", 00:29:20.320 "adrfam": "ipv4", 00:29:20.320 "trsvcid": "4420", 00:29:20.320 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:20.320 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:20.320 "hdgst": false, 00:29:20.320 "ddgst": false 00:29:20.320 }, 00:29:20.320 "method": "bdev_nvme_attach_controller" 00:29:20.320 },{ 00:29:20.320 "params": { 00:29:20.320 "name": "Nvme7", 00:29:20.320 "trtype": "tcp", 00:29:20.320 "traddr": "10.0.0.2", 00:29:20.320 "adrfam": "ipv4", 00:29:20.320 "trsvcid": "4420", 00:29:20.320 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:20.320 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:20.320 "hdgst": false, 00:29:20.320 "ddgst": false 00:29:20.320 }, 00:29:20.320 "method": "bdev_nvme_attach_controller" 00:29:20.320 },{ 00:29:20.320 "params": { 00:29:20.320 "name": "Nvme8", 00:29:20.320 "trtype": "tcp", 00:29:20.320 "traddr": "10.0.0.2", 00:29:20.320 "adrfam": "ipv4", 00:29:20.320 "trsvcid": "4420", 00:29:20.320 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:20.320 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:20.320 "hdgst": false, 00:29:20.320 "ddgst": false 00:29:20.320 }, 00:29:20.320 "method": "bdev_nvme_attach_controller" 00:29:20.320 },{ 00:29:20.320 "params": { 00:29:20.320 "name": "Nvme9", 00:29:20.320 "trtype": "tcp", 00:29:20.320 "traddr": "10.0.0.2", 00:29:20.320 "adrfam": "ipv4", 00:29:20.320 "trsvcid": "4420", 00:29:20.320 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:20.320 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:29:20.320 "hdgst": false, 00:29:20.320 "ddgst": false 00:29:20.320 }, 00:29:20.320 "method": "bdev_nvme_attach_controller" 00:29:20.320 },{ 00:29:20.320 "params": { 00:29:20.320 "name": "Nvme10", 00:29:20.320 "trtype": "tcp", 00:29:20.320 "traddr": "10.0.0.2", 00:29:20.320 "adrfam": "ipv4", 00:29:20.320 "trsvcid": "4420", 00:29:20.320 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:20.320 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:20.320 "hdgst": false, 00:29:20.320 "ddgst": false 00:29:20.320 }, 00:29:20.320 "method": "bdev_nvme_attach_controller" 00:29:20.320 }' 00:29:20.320 [2024-09-29 16:37:20.701803] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:29:20.320 [2024-09-29 16:37:20.701935] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3245673 ] 00:29:20.320 [2024-09-29 16:37:20.833769] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.579 [2024-09-29 16:37:21.073020] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:23.108 Running I/O for 10 seconds... 
00:29:23.108 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:23.108 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:29:23.108 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:23.108 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.108 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:23.108 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.108 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:23.108 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:23.108 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:23.108 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:29:23.108 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:29:23.108 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:23.108 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:23.108 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:23.108 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:23.108 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.108 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:23.108 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.108 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:29:23.108 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:29:23.108 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:23.367 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:23.367 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:23.367 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:23.367 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:23.367 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.367 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:23.367 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.367 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:29:23.367 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:29:23.367 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:23.625 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:23.625 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:23.625 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:23.625 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:23.625 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.625 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:23.625 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.625 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:29:23.625 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:29:23.625 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:29:23.625 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:29:23.625 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:29:23.625 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3245673 00:29:23.625 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 3245673 
']' 00:29:23.625 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 3245673 00:29:23.625 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:29:23.625 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:23.625 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3245673 00:29:23.625 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:23.625 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:23.625 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3245673' 00:29:23.625 killing process with pid 3245673 00:29:23.625 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 3245673 00:29:23.625 16:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 3245673 00:29:23.883 1819.00 IOPS, 113.69 MiB/s Received shutdown signal, test time was about 1.034839 seconds 00:29:23.883 00:29:23.883 Latency(us) 00:29:23.883 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:23.883 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:23.883 Verification LBA range: start 0x0 length 0x400 00:29:23.883 Nvme1n1 : 0.97 197.42 12.34 0.00 0.00 318129.75 23301.69 298261.62 00:29:23.883 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:23.883 Verification LBA range: start 0x0 length 0x400 00:29:23.883 Nvme2n1 : 0.99 194.27 12.14 0.00 0.00 318844.27 25243.50 299815.06 
00:29:23.883 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:23.883 Verification LBA range: start 0x0 length 0x400 00:29:23.883 Nvme3n1 : 0.96 205.58 12.85 0.00 0.00 290605.40 6747.78 282727.16 00:29:23.883 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:23.883 Verification LBA range: start 0x0 length 0x400 00:29:23.883 Nvme4n1 : 0.97 217.24 13.58 0.00 0.00 265724.38 21165.70 296708.17 00:29:23.883 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:23.883 Verification LBA range: start 0x0 length 0x400 00:29:23.883 Nvme5n1 : 1.01 189.43 11.84 0.00 0.00 307681.91 25437.68 310689.19 00:29:23.883 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:23.883 Verification LBA range: start 0x0 length 0x400 00:29:23.883 Nvme6n1 : 0.96 203.81 12.74 0.00 0.00 275550.89 7815.77 281173.71 00:29:23.883 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:23.883 Verification LBA range: start 0x0 length 0x400 00:29:23.883 Nvme7n1 : 1.00 192.91 12.06 0.00 0.00 288357.14 22622.06 301368.51 00:29:23.883 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:23.884 Verification LBA range: start 0x0 length 0x400 00:29:23.884 Nvme8n1 : 0.98 196.05 12.25 0.00 0.00 276338.92 23787.14 295154.73 00:29:23.884 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:23.884 Verification LBA range: start 0x0 length 0x400 00:29:23.884 Nvme9n1 : 1.03 185.69 11.61 0.00 0.00 288218.83 28350.39 341758.10 00:29:23.884 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:23.884 Verification LBA range: start 0x0 length 0x400 00:29:23.884 Nvme10n1 : 1.02 187.68 11.73 0.00 0.00 271539.01 22622.06 307582.29 00:29:23.884 =================================================================================================================== 00:29:23.884 Total : 1970.07 123.13 0.00 0.00 
289832.74 6747.78 341758.10 00:29:24.880 16:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:29:25.813 16:37:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3245448 00:29:25.813 16:37:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:29:25.813 16:37:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:25.813 16:37:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:25.813 16:37:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:25.813 16:37:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:25.813 16:37:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:25.813 16:37:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:29:25.813 16:37:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:25.813 16:37:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:29:25.813 16:37:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:25.813 16:37:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:25.813 rmmod nvme_tcp 00:29:26.071 rmmod nvme_fabrics 00:29:26.071 rmmod nvme_keyring 00:29:26.071 16:37:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:29:26.071 16:37:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:29:26.071 16:37:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:29:26.071 16:37:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@513 -- # '[' -n 3245448 ']' 00:29:26.071 16:37:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # killprocess 3245448 00:29:26.071 16:37:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 3245448 ']' 00:29:26.071 16:37:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 3245448 00:29:26.071 16:37:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:29:26.071 16:37:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:26.071 16:37:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3245448 00:29:26.071 16:37:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:26.071 16:37:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:26.071 16:37:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3245448' 00:29:26.071 killing process with pid 3245448 00:29:26.071 16:37:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 3245448 00:29:26.071 16:37:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 3245448 00:29:29.354 16:37:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:29.354 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:29.354 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:29.354 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:29:29.354 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@787 -- # iptables-save 00:29:29.354 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:29.354 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@787 -- # iptables-restore 00:29:29.354 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:29.354 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:29.354 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.354 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:29.354 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:31.261 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:31.261 00:29:31.261 real 0m13.126s 00:29:31.261 user 0m44.075s 00:29:31.261 sys 0m2.150s 00:29:31.261 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:31.261 16:37:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:31.261 ************************************ 00:29:31.261 END TEST nvmf_shutdown_tc2 00:29:31.261 ************************************ 00:29:31.261 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:29:31.261 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:31.261 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:31.261 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:31.261 ************************************ 00:29:31.261 START TEST nvmf_shutdown_tc3 00:29:31.261 ************************************ 00:29:31.261 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:29:31.261 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:29:31.261 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:31.261 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:31.261 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:31.261 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:31.261 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:31.261 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:31.261 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@652 
-- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:31.261 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:31.261 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:31.261 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:31.261 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:31.261 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:31.261 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:31.261 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:31.261 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:31.261 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:31.261 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:31.261 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:31.261 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:31.261 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:31.261 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:29:31.261 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 
00:29:31.261 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:31.262 16:37:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:31.262 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:31.262 16:37:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:31.262 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:31.262 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@424 -- 
# echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:31.262 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # is_hw=yes 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:31.262 16:37:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:31.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:31.262 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:29:31.262 00:29:31.262 --- 10.0.0.2 ping statistics --- 00:29:31.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:31.262 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:31.262 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:31.262 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.062 ms 00:29:31.262 00:29:31.262 --- 10.0.0.1 ping statistics --- 00:29:31.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:31.262 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # return 0 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:31.262 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:31.263 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:31.263 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:31.263 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:31.263 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:31.263 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:31.263 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:31.263 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:31.263 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:31.263 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:31.263 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # nvmfpid=3247083 00:29:31.263 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:31.263 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # waitforlisten 3247083 00:29:31.263 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 3247083 ']' 00:29:31.263 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:31.263 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:31.263 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:29:31.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:31.263 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:31.263 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:31.263 [2024-09-29 16:37:31.750002] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:29:31.263 [2024-09-29 16:37:31.750146] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:31.522 [2024-09-29 16:37:31.904840] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:31.780 [2024-09-29 16:37:32.263630] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:31.780 [2024-09-29 16:37:32.263746] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:31.780 [2024-09-29 16:37:32.263778] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:31.780 [2024-09-29 16:37:32.263825] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:31.780 [2024-09-29 16:37:32.263851] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:31.780 [2024-09-29 16:37:32.263955] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:31.780 [2024-09-29 16:37:32.264011] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:29:31.780 [2024-09-29 16:37:32.264061] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:31.780 [2024-09-29 16:37:32.264062] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:29:32.346 16:37:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:32.346 16:37:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:29:32.346 16:37:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:32.346 16:37:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:32.346 16:37:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:32.346 16:37:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:32.346 16:37:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:32.346 16:37:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.346 16:37:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:32.346 [2024-09-29 16:37:32.768433] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:32.346 16:37:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.346 16:37:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:32.346 16:37:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:32.346 16:37:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:32.346 16:37:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:32.346 16:37:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:32.346 16:37:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:32.346 16:37:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:32.346 16:37:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:32.346 16:37:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:32.346 16:37:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:32.346 16:37:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:32.346 16:37:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:32.346 16:37:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:32.346 16:37:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:32.346 16:37:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:29:32.346 16:37:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:32.346 16:37:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:32.346 16:37:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:32.346 16:37:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:32.347 16:37:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:32.347 16:37:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:32.347 16:37:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:32.347 16:37:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:32.347 16:37:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:32.347 16:37:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:32.347 16:37:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:32.347 16:37:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.347 16:37:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:32.347 Malloc1 00:29:32.605 [2024-09-29 16:37:32.912386] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:32.605 Malloc2 00:29:32.605 Malloc3 00:29:32.863 Malloc4 00:29:32.863 Malloc5 00:29:32.863 Malloc6 00:29:33.121 Malloc7 00:29:33.121 Malloc8 00:29:33.380 Malloc9 
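The trace above (`shutdown.sh@23`..`@36`) builds one batch of RPCs per subsystem index (`num_subsystems=({1..10})`, repeated `cat` into `rpcs.txt`), then replays the whole file through a single `rpc_cmd`, which is why Malloc1 through Malloc10 appear as a burst. A minimal standalone sketch of that batching pattern follows; the RPC names (`bdev_malloc_create`, `nvmf_create_subsystem`, ...) are real SPDK RPCs, but the exact commands shutdown.sh writes are not visible in this excerpt, so treat the file contents as an assumption:

```shell
#!/usr/bin/env bash
# Illustrative reconstruction of the batched-RPC pattern in the log:
# accumulate one block of RPC lines per subsystem index into rpcs.txt,
# then replay the whole file in a single rpc.py session.
rpcs=/tmp/rpcs.txt
rm -f "$rpcs"

num_subsystems=({1..10})          # same brace expansion as shutdown.sh@23
for i in "${num_subsystems[@]}"; do
    # Hypothetical per-subsystem block: a malloc bdev, a subsystem, a
    # namespace, and a TCP listener (addresses match the log's 10.0.0.2:4420).
    cat >> "$rpcs" <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done

# In the harness, rpc_cmd feeds this file to scripts/rpc.py in one shot:
#   scripts/rpc.py < "$rpcs"
```

Replaying the file once (instead of forking `rpc.py` forty times) is what keeps the creation of ten subsystems under a second in the timestamps above.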
00:29:33.380 Malloc10 00:29:33.380 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.380 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:33.380 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:33.380 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:33.380 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3247395 00:29:33.380 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3247395 /var/tmp/bdevperf.sock 00:29:33.380 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 3247395 ']' 00:29:33.380 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:33.380 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:33.380 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:33.380 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:33.380 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # config=() 00:29:33.380 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:29:33.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:33.380 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # local subsystem config 00:29:33.380 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:33.380 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:33.380 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:33.380 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:33.380 { 00:29:33.380 "params": { 00:29:33.380 "name": "Nvme$subsystem", 00:29:33.380 "trtype": "$TEST_TRANSPORT", 00:29:33.380 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:33.380 "adrfam": "ipv4", 00:29:33.380 "trsvcid": "$NVMF_PORT", 00:29:33.380 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:33.380 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:33.380 "hdgst": ${hdgst:-false}, 00:29:33.380 "ddgst": ${ddgst:-false} 00:29:33.380 }, 00:29:33.380 "method": "bdev_nvme_attach_controller" 00:29:33.380 } 00:29:33.380 EOF 00:29:33.380 )") 00:29:33.380 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:29:33.380 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:33.380 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:33.380 { 00:29:33.380 "params": { 00:29:33.380 "name": "Nvme$subsystem", 00:29:33.380 "trtype": "$TEST_TRANSPORT", 00:29:33.380 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:33.380 "adrfam": "ipv4", 00:29:33.380 "trsvcid": "$NVMF_PORT", 00:29:33.380 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:29:33.380 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:33.380 "hdgst": ${hdgst:-false}, 00:29:33.380 "ddgst": ${ddgst:-false} 00:29:33.380 }, 00:29:33.380 "method": "bdev_nvme_attach_controller" 00:29:33.380 } 00:29:33.380 EOF 00:29:33.380 )") 00:29:33.380 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:29:33.380 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:33.380 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:33.380 { 00:29:33.380 "params": { 00:29:33.381 "name": "Nvme$subsystem", 00:29:33.381 "trtype": "$TEST_TRANSPORT", 00:29:33.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:33.381 "adrfam": "ipv4", 00:29:33.381 "trsvcid": "$NVMF_PORT", 00:29:33.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:33.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:33.381 "hdgst": ${hdgst:-false}, 00:29:33.381 "ddgst": ${ddgst:-false} 00:29:33.381 }, 00:29:33.381 "method": "bdev_nvme_attach_controller" 00:29:33.381 } 00:29:33.381 EOF 00:29:33.381 )") 00:29:33.381 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:29:33.381 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:33.381 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:33.381 { 00:29:33.381 "params": { 00:29:33.381 "name": "Nvme$subsystem", 00:29:33.381 "trtype": "$TEST_TRANSPORT", 00:29:33.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:33.381 "adrfam": "ipv4", 00:29:33.381 "trsvcid": "$NVMF_PORT", 00:29:33.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:33.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:33.381 "hdgst": 
${hdgst:-false}, 00:29:33.381 "ddgst": ${ddgst:-false} 00:29:33.381 }, 00:29:33.381 "method": "bdev_nvme_attach_controller" 00:29:33.381 } 00:29:33.381 EOF 00:29:33.381 )") 00:29:33.381 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:29:33.381 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:33.381 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:33.381 { 00:29:33.381 "params": { 00:29:33.381 "name": "Nvme$subsystem", 00:29:33.381 "trtype": "$TEST_TRANSPORT", 00:29:33.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:33.381 "adrfam": "ipv4", 00:29:33.381 "trsvcid": "$NVMF_PORT", 00:29:33.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:33.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:33.381 "hdgst": ${hdgst:-false}, 00:29:33.381 "ddgst": ${ddgst:-false} 00:29:33.381 }, 00:29:33.381 "method": "bdev_nvme_attach_controller" 00:29:33.381 } 00:29:33.381 EOF 00:29:33.381 )") 00:29:33.381 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:29:33.381 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:33.381 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:33.381 { 00:29:33.381 "params": { 00:29:33.381 "name": "Nvme$subsystem", 00:29:33.381 "trtype": "$TEST_TRANSPORT", 00:29:33.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:33.381 "adrfam": "ipv4", 00:29:33.381 "trsvcid": "$NVMF_PORT", 00:29:33.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:33.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:33.381 "hdgst": ${hdgst:-false}, 00:29:33.381 "ddgst": ${ddgst:-false} 00:29:33.381 }, 00:29:33.381 "method": "bdev_nvme_attach_controller" 
00:29:33.381 } 00:29:33.381 EOF 00:29:33.381 )") 00:29:33.381 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:29:33.381 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:33.381 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:33.381 { 00:29:33.381 "params": { 00:29:33.381 "name": "Nvme$subsystem", 00:29:33.381 "trtype": "$TEST_TRANSPORT", 00:29:33.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:33.381 "adrfam": "ipv4", 00:29:33.381 "trsvcid": "$NVMF_PORT", 00:29:33.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:33.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:33.381 "hdgst": ${hdgst:-false}, 00:29:33.381 "ddgst": ${ddgst:-false} 00:29:33.381 }, 00:29:33.381 "method": "bdev_nvme_attach_controller" 00:29:33.381 } 00:29:33.381 EOF 00:29:33.381 )") 00:29:33.381 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:29:33.381 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:33.381 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:33.381 { 00:29:33.381 "params": { 00:29:33.381 "name": "Nvme$subsystem", 00:29:33.381 "trtype": "$TEST_TRANSPORT", 00:29:33.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:33.381 "adrfam": "ipv4", 00:29:33.381 "trsvcid": "$NVMF_PORT", 00:29:33.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:33.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:33.381 "hdgst": ${hdgst:-false}, 00:29:33.381 "ddgst": ${ddgst:-false} 00:29:33.381 }, 00:29:33.381 "method": "bdev_nvme_attach_controller" 00:29:33.381 } 00:29:33.381 EOF 00:29:33.381 )") 00:29:33.381 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@578 -- # cat 00:29:33.381 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:33.381 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:33.381 { 00:29:33.381 "params": { 00:29:33.381 "name": "Nvme$subsystem", 00:29:33.381 "trtype": "$TEST_TRANSPORT", 00:29:33.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:33.381 "adrfam": "ipv4", 00:29:33.381 "trsvcid": "$NVMF_PORT", 00:29:33.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:33.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:33.381 "hdgst": ${hdgst:-false}, 00:29:33.381 "ddgst": ${ddgst:-false} 00:29:33.381 }, 00:29:33.381 "method": "bdev_nvme_attach_controller" 00:29:33.381 } 00:29:33.381 EOF 00:29:33.381 )") 00:29:33.381 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:29:33.381 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:33.381 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:33.381 { 00:29:33.381 "params": { 00:29:33.381 "name": "Nvme$subsystem", 00:29:33.381 "trtype": "$TEST_TRANSPORT", 00:29:33.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:33.381 "adrfam": "ipv4", 00:29:33.381 "trsvcid": "$NVMF_PORT", 00:29:33.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:33.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:33.381 "hdgst": ${hdgst:-false}, 00:29:33.381 "ddgst": ${ddgst:-false} 00:29:33.381 }, 00:29:33.381 "method": "bdev_nvme_attach_controller" 00:29:33.381 } 00:29:33.381 EOF 00:29:33.381 )") 00:29:33.381 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:29:33.381 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@580 -- # jq . 00:29:33.381 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@581 -- # IFS=, 00:29:33.381 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:29:33.381 "params": { 00:29:33.381 "name": "Nvme1", 00:29:33.381 "trtype": "tcp", 00:29:33.381 "traddr": "10.0.0.2", 00:29:33.381 "adrfam": "ipv4", 00:29:33.381 "trsvcid": "4420", 00:29:33.381 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:33.381 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:33.381 "hdgst": false, 00:29:33.381 "ddgst": false 00:29:33.381 }, 00:29:33.381 "method": "bdev_nvme_attach_controller" 00:29:33.381 },{ 00:29:33.381 "params": { 00:29:33.381 "name": "Nvme2", 00:29:33.381 "trtype": "tcp", 00:29:33.381 "traddr": "10.0.0.2", 00:29:33.381 "adrfam": "ipv4", 00:29:33.381 "trsvcid": "4420", 00:29:33.381 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:33.381 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:33.381 "hdgst": false, 00:29:33.381 "ddgst": false 00:29:33.381 }, 00:29:33.381 "method": "bdev_nvme_attach_controller" 00:29:33.381 },{ 00:29:33.381 "params": { 00:29:33.381 "name": "Nvme3", 00:29:33.381 "trtype": "tcp", 00:29:33.381 "traddr": "10.0.0.2", 00:29:33.381 "adrfam": "ipv4", 00:29:33.381 "trsvcid": "4420", 00:29:33.381 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:33.381 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:33.381 "hdgst": false, 00:29:33.381 "ddgst": false 00:29:33.381 }, 00:29:33.381 "method": "bdev_nvme_attach_controller" 00:29:33.381 },{ 00:29:33.381 "params": { 00:29:33.381 "name": "Nvme4", 00:29:33.381 "trtype": "tcp", 00:29:33.381 "traddr": "10.0.0.2", 00:29:33.381 "adrfam": "ipv4", 00:29:33.381 "trsvcid": "4420", 00:29:33.381 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:33.381 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:33.381 "hdgst": false, 00:29:33.381 "ddgst": false 00:29:33.381 }, 00:29:33.381 "method": "bdev_nvme_attach_controller" 00:29:33.381 },{ 
00:29:33.381 "params": { 00:29:33.381 "name": "Nvme5", 00:29:33.381 "trtype": "tcp", 00:29:33.381 "traddr": "10.0.0.2", 00:29:33.381 "adrfam": "ipv4", 00:29:33.381 "trsvcid": "4420", 00:29:33.381 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:33.381 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:33.381 "hdgst": false, 00:29:33.381 "ddgst": false 00:29:33.381 }, 00:29:33.381 "method": "bdev_nvme_attach_controller" 00:29:33.381 },{ 00:29:33.381 "params": { 00:29:33.381 "name": "Nvme6", 00:29:33.381 "trtype": "tcp", 00:29:33.381 "traddr": "10.0.0.2", 00:29:33.381 "adrfam": "ipv4", 00:29:33.381 "trsvcid": "4420", 00:29:33.381 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:33.381 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:33.381 "hdgst": false, 00:29:33.381 "ddgst": false 00:29:33.381 }, 00:29:33.381 "method": "bdev_nvme_attach_controller" 00:29:33.381 },{ 00:29:33.382 "params": { 00:29:33.382 "name": "Nvme7", 00:29:33.382 "trtype": "tcp", 00:29:33.382 "traddr": "10.0.0.2", 00:29:33.382 "adrfam": "ipv4", 00:29:33.382 "trsvcid": "4420", 00:29:33.382 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:33.382 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:33.382 "hdgst": false, 00:29:33.382 "ddgst": false 00:29:33.382 }, 00:29:33.382 "method": "bdev_nvme_attach_controller" 00:29:33.382 },{ 00:29:33.382 "params": { 00:29:33.382 "name": "Nvme8", 00:29:33.382 "trtype": "tcp", 00:29:33.382 "traddr": "10.0.0.2", 00:29:33.382 "adrfam": "ipv4", 00:29:33.382 "trsvcid": "4420", 00:29:33.382 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:33.382 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:33.382 "hdgst": false, 00:29:33.382 "ddgst": false 00:29:33.382 }, 00:29:33.382 "method": "bdev_nvme_attach_controller" 00:29:33.382 },{ 00:29:33.382 "params": { 00:29:33.382 "name": "Nvme9", 00:29:33.382 "trtype": "tcp", 00:29:33.382 "traddr": "10.0.0.2", 00:29:33.382 "adrfam": "ipv4", 00:29:33.382 "trsvcid": "4420", 00:29:33.382 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:33.382 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:29:33.382 "hdgst": false, 00:29:33.382 "ddgst": false 00:29:33.382 }, 00:29:33.382 "method": "bdev_nvme_attach_controller" 00:29:33.382 },{ 00:29:33.382 "params": { 00:29:33.382 "name": "Nvme10", 00:29:33.382 "trtype": "tcp", 00:29:33.382 "traddr": "10.0.0.2", 00:29:33.382 "adrfam": "ipv4", 00:29:33.382 "trsvcid": "4420", 00:29:33.382 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:33.382 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:33.382 "hdgst": false, 00:29:33.382 "ddgst": false 00:29:33.382 }, 00:29:33.382 "method": "bdev_nvme_attach_controller" 00:29:33.382 }' 00:29:33.382 [2024-09-29 16:37:33.931786] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:29:33.382 [2024-09-29 16:37:33.931921] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3247395 ] 00:29:33.640 [2024-09-29 16:37:34.063062] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.898 [2024-09-29 16:37:34.300283] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:35.797 Running I/O for 10 seconds... 
00:29:36.363 16:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:36.363 16:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:29:36.363 16:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:36.363 16:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.363 16:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:36.363 16:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.363 16:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:36.363 16:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:36.363 16:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:36.363 16:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:36.363 16:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:29:36.363 16:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:29:36.363 16:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:36.363 16:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:36.363 16:37:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:36.363 16:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:36.363 16:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.363 16:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:36.363 16:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.363 16:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:29:36.363 16:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:29:36.363 16:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:36.634 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:36.634 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:36.634 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:36.634 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:36.634 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.634 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:36.634 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:29:36.634 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:29:36.634 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:29:36.634 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:29:36.634 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:29:36.634 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:29:36.634 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3247083 00:29:36.634 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 3247083 ']' 00:29:36.634 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 3247083 00:29:36.634 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:29:36.635 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:36.635 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3247083 00:29:36.635 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:36.635 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:36.635 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3247083' 00:29:36.635 killing process with pid 3247083 00:29:36.635 16:37:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 3247083 00:29:36.635 16:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 3247083 00:29:36.635 [2024-09-29 16:37:37.105172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:36.635 [2024-09-29 16:37:37.105269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:36.635 [2024-09-29 16:37:37.105291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:36.635 [2024-09-29 16:37:37.105311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:36.635 [2024-09-29 16:37:37.105330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:36.635 [2024-09-29 16:37:37.105349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:36.635 [2024-09-29 16:37:37.105367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:36.635 [2024-09-29 16:37:37.105387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:36.635 [2024-09-29 16:37:37.105404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:36.635 [2024-09-29 16:37:37.105424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:36.635 [2024-09-29 16:37:37.105442] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:36.635
[2024-09-29 16:37:37.105460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:36.635
[... same message repeated for tqpair=0x618000007480 through 2024-09-29 16:37:37.106441 ...]
[2024-09-29 16:37:37.111927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:36.635
[... same message repeated for tqpair=0x618000007880 through 2024-09-29 16:37:37.113116 ...]
[2024-09-29 16:37:37.116660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:36.636
[... same message repeated for tqpair=0x618000007c80 through 2024-09-29 16:37:37.117809 ...]
[2024-09-29 16:37:37.120421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.637
[... same message repeated for tqpair=0x618000008080 through 2024-09-29 16:37:37.121535 ...]
[2024-09-29 16:37:37.121552] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.121570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.121587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.123170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.123208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.123229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.123247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.123265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.123282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.123307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.123326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.123344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 
[2024-09-29 16:37:37.123362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.123380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.123398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.123415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.123433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.123450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.123468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.123486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.123503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.123521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.123539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.123557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the 
state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.123574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.123592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.123609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.123626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.123644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.123662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.123688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.123708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.123726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.123744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.123782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.123805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.123823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.123841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.123858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.123875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.123894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.123912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.123929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.123946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.123963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.123989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.124007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.124024] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.124041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.124059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.124077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.124094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.124112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.124129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.124146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.124164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.124181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.124199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.124216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 
[2024-09-29 16:37:37.124234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.124251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.124272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.124291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.124308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.124325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.124342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.126802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.126836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.126856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.126874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.126892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the 
state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.126911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.126929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.638 [2024-09-29 16:37:37.126946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.126964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.126981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127328] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 
[2024-09-29 16:37:37.127540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the 
state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.639 [2024-09-29 16:37:37.127882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.639 [2024-09-29 16:37:37.127918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.639 [2024-09-29 16:37:37.127936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.639 [2024-09-29 16:37:37.127955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.127982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.639 [2024-09-29 16:37:37.128003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.639 [2024-09-29 16:37:37.128026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.639 [2024-09-29 16:37:37.127979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.128046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.639 [2024-09-29 16:37:37.128066] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7f00 is same with the state(6) to be set 00:29:36.639 [2024-09-29 16:37:37.128170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.639 [2024-09-29 16:37:37.128204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.639 [2024-09-29 16:37:37.128228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.639 [2024-09-29 
16:37:37.128249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.640 [2024-09-29 16:37:37.128271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.640 [2024-09-29 16:37:37.128291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.640 [2024-09-29 16:37:37.128312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.640 [2024-09-29 16:37:37.128332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.640 [2024-09-29 16:37:37.128352] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2f00 is same with the state(6) to be set 00:29:36.640 [2024-09-29 16:37:37.128418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.640 [2024-09-29 16:37:37.128445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.640 [2024-09-29 16:37:37.128468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.640 [2024-09-29 16:37:37.128489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.640 [2024-09-29 16:37:37.128511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.640 [2024-09-29 16:37:37.128531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.640 [2024-09-29 16:37:37.128552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.640 [2024-09-29 16:37:37.128572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.640 [2024-09-29 16:37:37.128591] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3900 is same with the state(6) to be set 00:29:36.640 [2024-09-29 16:37:37.128659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.640 [2024-09-29 16:37:37.128695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.640 [2024-09-29 16:37:37.128719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.640 [2024-09-29 16:37:37.128770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.640 [2024-09-29 16:37:37.128793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.640 [2024-09-29 16:37:37.128813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.640 [2024-09-29 16:37:37.128834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.640 [2024-09-29 16:37:37.128855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.640 [2024-09-29 
16:37:37.128874] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(6) to be set 00:29:36.640 [2024-09-29 16:37:37.128947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.640 [2024-09-29 16:37:37.128984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.640 [2024-09-29 16:37:37.129006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.640 [2024-09-29 16:37:37.129027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.640 [2024-09-29 16:37:37.129049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.640 [2024-09-29 16:37:37.129069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.640 [2024-09-29 16:37:37.129090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.640 [2024-09-29 16:37:37.129109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.640 [2024-09-29 16:37:37.129128] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4d00 is same with the state(6) to be set 00:29:36.640 [2024-09-29 16:37:37.129195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.640 [2024-09-29 16:37:37.129222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.640 [2024-09-29 16:37:37.129245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.640 [2024-09-29 16:37:37.129265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.640 [2024-09-29 16:37:37.129286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.640 [2024-09-29 16:37:37.129306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.640 [2024-09-29 16:37:37.129328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.640 [2024-09-29 16:37:37.129347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.640 [2024-09-29 16:37:37.129367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:29:36.640 [2024-09-29 16:37:37.131398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.640 [2024-09-29 16:37:37.131437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.640 [2024-09-29 16:37:37.131478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.640 [2024-09-29 16:37:37.131502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.640 
[2024-09-29 16:37:37.131528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.640 [2024-09-29 16:37:37.131550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.640 [2024-09-29 16:37:37.131574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.640 [2024-09-29 16:37:37.131601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.640 [2024-09-29 16:37:37.131627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.640 [2024-09-29 16:37:37.131648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.640 [2024-09-29 16:37:37.131681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.640 [2024-09-29 16:37:37.131705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.640 [2024-09-29 16:37:37.131740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.640 [2024-09-29 16:37:37.131732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.640 [2024-09-29 16:37:37.131761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.640 [2024-09-29 16:37:37.131785] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.640 [2024-09-29 16:37:37.131776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.640 [2024-09-29 16:37:37.131809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.640 [2024-09-29 16:37:37.131812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.640 [2024-09-29 16:37:37.131833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.640 [2024-09-29 16:37:37.131840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.640 [2024-09-29 16:37:37.131854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.640 [2024-09-29 16:37:37.131872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.640 [2024-09-29 16:37:37.131878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.640 [2024-09-29 16:37:37.131892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.640 [2024-09-29 16:37:37.131899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.640 [2024-09-29 16:37:37.131910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the
state(6) to be set 00:29:36.640 [2024-09-29 16:37:37.131923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.640 [2024-09-29 16:37:37.131928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.640 [2024-09-29 16:37:37.131944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.640 [2024-09-29 16:37:37.131959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.640 [2024-09-29 16:37:37.131968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.640 [2024-09-29 16:37:37.131990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.640 [2024-09-29 16:37:37.131992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.640 [2024-09-29 16:37:37.132019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.640 [2024-09-29 16:37:37.132025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.640 [2024-09-29 16:37:37.132041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.640 [2024-09-29 16:37:37.132047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.640 [2024-09-29 16:37:37.132066] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.641 [2024-09-29 16:37:37.132079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.641 [2024-09-29 16:37:37.132087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.641 [2024-09-29 16:37:37.132108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.641 [2024-09-29 16:37:37.132114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.641 [2024-09-29 16:37:37.132127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.641 [2024-09-29 16:37:37.132136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.641 [2024-09-29 16:37:37.132160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.641 [2024-09-29 16:37:37.132155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.641 [2024-09-29 16:37:37.132182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.641 [2024-09-29 16:37:37.132192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.641 [2024-09-29 16:37:37.132206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.641 [2024-09-29 16:37:37.132227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.641 [2024-09-29 16:37:37.132226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.641 [2024-09-29 16:37:37.132253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.641 [2024-09-29 16:37:37.132249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.641 [2024-09-29 16:37:37.132277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.641 [2024-09-29 16:37:37.132283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.641 [2024-09-29 16:37:37.132302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.641 [2024-09-29 16:37:37.132308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.641 [2024-09-29 16:37:37.132324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.641 [2024-09-29 16:37:37.132334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.641 [2024-09-29 16:37:37.132348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.641 [2024-09-29 16:37:37.132370] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.641 [2024-09-29 16:37:37.132366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.641 [2024-09-29 16:37:37.132394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.641 [2024-09-29 16:37:37.132401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.641 [2024-09-29 16:37:37.132415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.641 [2024-09-29 16:37:37.132435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.641 [2024-09-29 16:37:37.132440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.641 [2024-09-29 16:37:37.132462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.641 [2024-09-29 16:37:37.132458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.641 [2024-09-29 16:37:37.132487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.641 [2024-09-29 16:37:37.132493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.641 [2024-09-29 16:37:37.132508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:36.641 [2024-09-29 16:37:37.132519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.641 [2024-09-29 16:37:37.132531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.641 [2024-09-29 16:37:37.132538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.641 [2024-09-29 16:37:37.132553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.641 [2024-09-29 16:37:37.132565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.641 [2024-09-29 16:37:37.132578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.641 [2024-09-29 16:37:37.132599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.641 [2024-09-29 16:37:37.132598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.641 [2024-09-29 16:37:37.132623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.641 [2024-09-29 16:37:37.132633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.641 [2024-09-29 16:37:37.132644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.641 [2024-09-29 16:37:37.132656] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.641 [2024-09-29 16:37:37.132679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.641 [2024-09-29 16:37:37.132697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.641 [2024-09-29 16:37:37.132703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.641 [2024-09-29 16:37:37.132725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.641 [2024-09-29 16:37:37.132741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.641 [2024-09-29 16:37:37.132749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.641 [2024-09-29 16:37:37.132763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.641 [2024-09-29 16:37:37.132769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.641 [2024-09-29 16:37:37.132787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.641 [2024-09-29 16:37:37.132808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.641 [2024-09-29 16:37:37.132802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 
is same with the state(6) to be set 00:29:36.641 [2024-09-29 16:37:37.132832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.641 [2024-09-29 16:37:37.132838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.641 [2024-09-29 16:37:37.132854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.641 [2024-09-29 16:37:37.132867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.641 [2024-09-29 16:37:37.132878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.641 [2024-09-29 16:37:37.132889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.641 [2024-09-29 16:37:37.132899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.641 [2024-09-29 16:37:37.132924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.641 [2024-09-29 16:37:37.132925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.641 [2024-09-29 16:37:37.132946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.641 [2024-09-29 16:37:37.132948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.641 [2024-09-29 16:37:37.132970] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.641 [2024-09-29 16:37:37.132967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.641 [2024-09-29 16:37:37.133008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.641 [2024-09-29 16:37:37.133013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.641 [2024-09-29 16:37:37.133033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.641 [2024-09-29 16:37:37.133045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.641 [2024-09-29 16:37:37.133055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.641 [2024-09-29 16:37:37.133072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.641 [2024-09-29 16:37:37.133079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.641 [2024-09-29 16:37:37.133091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.641 [2024-09-29 16:37:37.133101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.641 [2024-09-29 16:37:37.133123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.641 
[2024-09-29 16:37:37.133126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.641 [2024-09-29 16:37:37.133148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.642 [2024-09-29 16:37:37.133150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.642 [2024-09-29 16:37:37.133168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.642 [2024-09-29 16:37:37.133175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.642 [2024-09-29 16:37:37.133198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.642 [2024-09-29 16:37:37.133198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.642 [2024-09-29 16:37:37.133222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.642 [2024-09-29 16:37:37.133226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.642 [2024-09-29 16:37:37.133244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.642 [2024-09-29 16:37:37.133245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.642 [2024-09-29 16:37:37.133265] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.642 [2024-09-29 16:37:37.133270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.642 [2024-09-29 16:37:37.133283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.642 [2024-09-29 16:37:37.133291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.642 [2024-09-29 16:37:37.133302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.642 [2024-09-29 16:37:37.133314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.642 [2024-09-29 16:37:37.133327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.642 [2024-09-29 16:37:37.133345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.642 [2024-09-29 16:37:37.133347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.642 [2024-09-29 16:37:37.133369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.642 [2024-09-29 16:37:37.133378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.642 [2024-09-29 16:37:37.133391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.642 [2024-09-29 16:37:37.133404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.642 [2024-09-29 16:37:37.133415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.642 [2024-09-29 16:37:37.133423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.642 [2024-09-29 16:37:37.133436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.642 [2024-09-29 16:37:37.133442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.642 [2024-09-29 16:37:37.133461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.642 [2024-09-29 16:37:37.133482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.642 [2024-09-29 16:37:37.133506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.642 [2024-09-29 16:37:37.133527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.642 [2024-09-29 16:37:37.133551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.642 [2024-09-29 16:37:37.133572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.642 [2024-09-29 
16:37:37.133597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.642 [2024-09-29 16:37:37.133618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.642 [2024-09-29 16:37:37.133642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.642 [2024-09-29 16:37:37.133663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.642 [2024-09-29 16:37:37.133696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.642 [2024-09-29 16:37:37.133718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.642 [2024-09-29 16:37:37.133742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.642 [2024-09-29 16:37:37.133768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.642 [2024-09-29 16:37:37.133794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.642 [2024-09-29 16:37:37.133815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.642 [2024-09-29 16:37:37.133839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.642 [2024-09-29 16:37:37.133860] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.642 [2024-09-29 16:37:37.133884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.642 [2024-09-29 16:37:37.133906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.642 [2024-09-29 16:37:37.133948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.642 [2024-09-29 16:37:37.133970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.642 [2024-09-29 16:37:37.133995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.642 [2024-09-29 16:37:37.134016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.642 [2024-09-29 16:37:37.134040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.642 [2024-09-29 16:37:37.134060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.642 [2024-09-29 16:37:37.134085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.642 [2024-09-29 16:37:37.134106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.642 [2024-09-29 16:37:37.134129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.642 [2024-09-29 16:37:37.134150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.642 [2024-09-29 16:37:37.134174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.642 [2024-09-29 16:37:37.134198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.642 [2024-09-29 16:37:37.134223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.642 [2024-09-29 16:37:37.134244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.642 [2024-09-29 16:37:37.134269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.642 [2024-09-29 16:37:37.134290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.642 [2024-09-29 16:37:37.134314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.642 [2024-09-29 16:37:37.134335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.642 [2024-09-29 16:37:37.134365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.642 [2024-09-29 16:37:37.134387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:36.642 [2024-09-29 16:37:37.134412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.642 [2024-09-29 16:37:37.134434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.642 [2024-09-29 16:37:37.134458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.642 [2024-09-29 16:37:37.134480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.642 [2024-09-29 16:37:37.134501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fb600 is same with the state(6) to be set 00:29:36.642 [2024-09-29 16:37:37.135735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:36.642 [2024-09-29 16:37:37.135771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:36.642 [2024-09-29 16:37:37.135791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:36.642 [2024-09-29 16:37:37.135808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:36.642 [2024-09-29 16:37:37.135826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:36.642 [2024-09-29 16:37:37.135842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:36.642 [2024-09-29 16:37:37.135859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:36.643 [2024-09-29 16:37:37.138197] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001fb600 was disconnected and freed. reset controller. 
00:29:36.643 [2024-09-29 16:37:37.138249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.643 [2024-09-29 16:37:37.138287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.643 [2024-09-29 16:37:37.138307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.643 [2024-09-29 16:37:37.138325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.643 [2024-09-29 16:37:37.138368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.643 [2024-09-29 16:37:37.138400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.643 [2024-09-29 16:37:37.138424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.643 [2024-09-29 16:37:37.138445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.643 [2024-09-29 16:37:37.138472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.643 [2024-09-29 16:37:37.138493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.643 [2024-09-29 16:37:37.138514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.643 [2024-09-29 16:37:37.138534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.643 [2024-09-29 16:37:37.138552] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5700 is same with the state(6) to be set 00:29:36.643 [2024-09-29 16:37:37.138623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.643 [2024-09-29 16:37:37.138650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.643 [2024-09-29 16:37:37.138680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.643 [2024-09-29 16:37:37.138703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.643 [2024-09-29 16:37:37.138725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.643 [2024-09-29 16:37:37.138745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.644 [2024-09-29 16:37:37.138766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.644 [2024-09-29 16:37:37.138786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.644 [2024-09-29 16:37:37.138805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6100 is same with the state(6) to be set 00:29:36.644 [2024-09-29 16:37:37.138872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.644 
[2024-09-29 16:37:37.138899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.644 [2024-09-29 16:37:37.138922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.644 [2024-09-29 16:37:37.138943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.644 [2024-09-29 16:37:37.138965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.644 [2024-09-29 16:37:37.138986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.644 [2024-09-29 16:37:37.139009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.644 [2024-09-29 16:37:37.139029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.644 [2024-09-29 16:37:37.139048] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6b00 is same with the state(6) to be set 00:29:36.644 [2024-09-29 16:37:37.139104] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7f00 (9): Bad file descriptor 00:29:36.644 [2024-09-29 16:37:37.139171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.644 [2024-09-29 16:37:37.139203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.644 [2024-09-29 16:37:37.139226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.644 [2024-09-29 16:37:37.139246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.644 [2024-09-29 16:37:37.139268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.644 [2024-09-29 16:37:37.139288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.644 [2024-09-29 16:37:37.139309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.644 [2024-09-29 16:37:37.139329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.644 [2024-09-29 16:37:37.139348] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(6) to be set 00:29:36.644 [2024-09-29 16:37:37.139393] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:29:36.644 [2024-09-29 16:37:37.139441] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3900 (9): Bad file descriptor 00:29:36.644 [2024-09-29 16:37:37.139487] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor 00:29:36.644 [2024-09-29 16:37:37.139534] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4d00 (9): Bad file descriptor 00:29:36.644 [2024-09-29 16:37:37.139595] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:29:36.644 [2024-09-29 16:37:37.140803] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.644 [2024-09-29 16:37:37.140844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.644 [2024-09-29 16:37:37.140894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.644 [2024-09-29 16:37:37.140918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.644 [2024-09-29 16:37:37.140943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.644 [2024-09-29 16:37:37.140965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.644 [2024-09-29 16:37:37.140989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.644 [2024-09-29 16:37:37.141010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.644 [2024-09-29 16:37:37.141035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.644 [2024-09-29 16:37:37.141056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.644 [2024-09-29 16:37:37.141080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.644 [2024-09-29 16:37:37.141101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.644 [2024-09-29 16:37:37.141131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.644 [2024-09-29 16:37:37.141153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.644 [2024-09-29 16:37:37.141176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.644 [2024-09-29 16:37:37.141197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.644 [2024-09-29 16:37:37.141222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.644 [2024-09-29 16:37:37.141243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.644 [2024-09-29 16:37:37.141267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.644 [2024-09-29 16:37:37.141288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.644 [2024-09-29 16:37:37.141312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.644 [2024-09-29 16:37:37.141333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.644 [2024-09-29 16:37:37.141356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.644 [2024-09-29 16:37:37.141376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.644 [2024-09-29 16:37:37.141400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.644 [2024-09-29 16:37:37.141421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.644 [2024-09-29 16:37:37.141445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.644 [2024-09-29 16:37:37.141465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.644 [2024-09-29 16:37:37.141489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.644 [2024-09-29 16:37:37.141510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.644 [2024-09-29 16:37:37.141534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.644 [2024-09-29 16:37:37.141554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.644 [2024-09-29 16:37:37.141578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.644 [2024-09-29 16:37:37.141599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.644 
[2024-09-29 16:37:37.141623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.644 [2024-09-29 16:37:37.141645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.644 [2024-09-29 16:37:37.141668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.644 [2024-09-29 16:37:37.141704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.644 [2024-09-29 16:37:37.141730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.644 [2024-09-29 16:37:37.141752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.644 [2024-09-29 16:37:37.141777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.644 [2024-09-29 16:37:37.141799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.644 [2024-09-29 16:37:37.141822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.644 [2024-09-29 16:37:37.141843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.644 [2024-09-29 16:37:37.141868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.644 [2024-09-29 16:37:37.141889] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.644 [2024-09-29 16:37:37.141913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.644 [2024-09-29 16:37:37.141934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.644 [2024-09-29 16:37:37.141959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.644 [2024-09-29 16:37:37.141982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.644 [2024-09-29 16:37:37.142006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.644 [2024-09-29 16:37:37.142028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.644 [2024-09-29 16:37:37.142052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.645 [2024-09-29 16:37:37.142074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.645 [2024-09-29 16:37:37.142098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.645 [2024-09-29 16:37:37.142119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.645 [2024-09-29 16:37:37.142143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.645 [2024-09-29 16:37:37.142164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.645 [2024-09-29 16:37:37.142187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.645 [2024-09-29 16:37:37.142209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.645 [2024-09-29 16:37:37.142250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.645 [2024-09-29 16:37:37.142272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.645 [2024-09-29 16:37:37.142300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.645 [2024-09-29 16:37:37.142322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.645 [2024-09-29 16:37:37.142346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.645 [2024-09-29 16:37:37.142368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.645 [2024-09-29 16:37:37.142392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.645 [2024-09-29 16:37:37.142413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:36.645 [2024-09-29 16:37:37.142438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.645 [2024-09-29 16:37:37.142459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.645 [2024-09-29 16:37:37.142483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.645 [2024-09-29 16:37:37.142504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.645 [2024-09-29 16:37:37.142527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.645 [2024-09-29 16:37:37.142547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.645 [2024-09-29 16:37:37.142570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.645 [2024-09-29 16:37:37.142591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.645 [2024-09-29 16:37:37.142615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.645 [2024-09-29 16:37:37.142636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.645 [2024-09-29 16:37:37.142659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.645 [2024-09-29 
16:37:37.142688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.645 [2024-09-29 16:37:37.142713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.645 [2024-09-29 16:37:37.142735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.645 [2024-09-29 16:37:37.142758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.645 [2024-09-29 16:37:37.142779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.645 [2024-09-29 16:37:37.142802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.645 [2024-09-29 16:37:37.142823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.645 [2024-09-29 16:37:37.142846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.645 [2024-09-29 16:37:37.142870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.645 [2024-09-29 16:37:37.142894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.645 [2024-09-29 16:37:37.142915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.645 [2024-09-29 16:37:37.142938] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.645 [2024-09-29 16:37:37.142959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.645 [2024-09-29 16:37:37.142983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.645 [2024-09-29 16:37:37.143003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.645 [2024-09-29 16:37:37.143027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.645 [2024-09-29 16:37:37.143048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.645 [2024-09-29 16:37:37.143071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.645 [2024-09-29 16:37:37.143092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.645 [2024-09-29 16:37:37.143115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.645 [2024-09-29 16:37:37.143135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.645 [2024-09-29 16:37:37.143159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.645 [2024-09-29 16:37:37.143179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.645 [2024-09-29 16:37:37.143202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.645 [2024-09-29 16:37:37.143223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.645 [2024-09-29 16:37:37.143246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.645 [2024-09-29 16:37:37.143266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.645 [2024-09-29 16:37:37.143290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.645 [2024-09-29 16:37:37.143310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.645 [2024-09-29 16:37:37.143334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.645 [2024-09-29 16:37:37.143355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.645 [2024-09-29 16:37:37.143378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.645 [2024-09-29 16:37:37.143398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.645 [2024-09-29 16:37:37.143426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:36.645 [2024-09-29 16:37:37.143448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.645 [2024-09-29 16:37:37.143472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.645 [2024-09-29 16:37:37.143493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.645 [2024-09-29 16:37:37.143517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.645 [2024-09-29 16:37:37.143538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.645 [2024-09-29 16:37:37.143561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.645 [2024-09-29 16:37:37.143582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.645 [2024-09-29 16:37:37.143604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.645 [2024-09-29 16:37:37.143625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.645 [2024-09-29 16:37:37.143649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.645 [2024-09-29 16:37:37.143669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.645 [2024-09-29 16:37:37.143703] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.645 [2024-09-29 16:37:37.143724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.645 [2024-09-29 16:37:37.143747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.645 [2024-09-29 16:37:37.143768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.645 [2024-09-29 16:37:37.143837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.645 [2024-09-29 16:37:37.144133] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001fb380 was disconnected and freed. reset controller. 
00:29:36.645 [2024-09-29 16:37:37.147312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:29:36.645 [2024-09-29 16:37:37.148815] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:29:36.645 [2024-09-29 16:37:37.148873] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:29:36.646 [2024-09-29 16:37:37.149073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.646 [2024-09-29 16:37:37.149114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7f00 with addr=10.0.0.2, port=4420 00:29:36.646 [2024-09-29 16:37:37.149143] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7f00 is same with the state(6) to be set 00:29:36.646 [2024-09-29 16:37:37.149184] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5700 (9): Bad file descriptor 00:29:36.646 [2024-09-29 16:37:37.149238] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6100 (9): Bad file descriptor 00:29:36.646 [2024-09-29 16:37:37.149290] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6b00 (9): Bad file descriptor 00:29:36.646 [2024-09-29 16:37:37.149456] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:36.646 [2024-09-29 16:37:37.149627] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:36.646 [2024-09-29 16:37:37.149727] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:36.646 [2024-09-29 16:37:37.149817] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:36.646 [2024-09-29 16:37:37.149917] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 
00:29:36.646 [2024-09-29 16:37:37.150950] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7f00 (9): Bad file descriptor 00:29:36.646 [2024-09-29 16:37:37.151103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.646 [2024-09-29 16:37:37.151147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.646 [2024-09-29 16:37:37.151194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.646 [2024-09-29 16:37:37.151219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.646 [2024-09-29 16:37:37.151246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.646 [2024-09-29 16:37:37.151268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.646 [2024-09-29 16:37:37.151292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.646 [2024-09-29 16:37:37.151313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.646 [2024-09-29 16:37:37.151338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.646 [2024-09-29 16:37:37.151360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.646 [2024-09-29 16:37:37.151384] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.646 [2024-09-29 16:37:37.151405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.646 [2024-09-29 16:37:37.151430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.646 [2024-09-29 16:37:37.151451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.646 [2024-09-29 16:37:37.151476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.646 [2024-09-29 16:37:37.151496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.646 [2024-09-29 16:37:37.151521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.646 [2024-09-29 16:37:37.151542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.646 [2024-09-29 16:37:37.151566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.646 [2024-09-29 16:37:37.151587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.646 [2024-09-29 16:37:37.151612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.646 [2024-09-29 16:37:37.151638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.646 [2024-09-29 16:37:37.151663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.646 [2024-09-29 16:37:37.151696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.646 [2024-09-29 16:37:37.151722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.646 [2024-09-29 16:37:37.151743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.646 [2024-09-29 16:37:37.151768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.646 [2024-09-29 16:37:37.151789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.646 [2024-09-29 16:37:37.151813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.646 [2024-09-29 16:37:37.151834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.646 [2024-09-29 16:37:37.151859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.646 [2024-09-29 16:37:37.151880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.646 [2024-09-29 16:37:37.151903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:36.646 [2024-09-29 16:37:37.151924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.646 [2024-09-29 16:37:37.151948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.646 [2024-09-29 16:37:37.151970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.646 [2024-09-29 16:37:37.151993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.646 [2024-09-29 16:37:37.152014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.646 [2024-09-29 16:37:37.152038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.646 [2024-09-29 16:37:37.152058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.646 [2024-09-29 16:37:37.152083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.646 [2024-09-29 16:37:37.152104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.646 [2024-09-29 16:37:37.152127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.646 [2024-09-29 16:37:37.152163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.646 [2024-09-29 16:37:37.152190] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.646 [2024-09-29 16:37:37.152211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.646 [2024-09-29 16:37:37.152239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.646 [2024-09-29 16:37:37.152261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.646 [2024-09-29 16:37:37.152285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.646 [2024-09-29 16:37:37.152306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.646 [2024-09-29 16:37:37.152332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.646 [2024-09-29 16:37:37.152352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.646 [2024-09-29 16:37:37.152377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.646 [2024-09-29 16:37:37.152398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.646 [2024-09-29 16:37:37.152422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.646 [2024-09-29 16:37:37.152443] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.646 [2024-09-29 16:37:37.152467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.646 [2024-09-29 16:37:37.152488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.646 [2024-09-29 16:37:37.152512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.646 [2024-09-29 16:37:37.152533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.646 [2024-09-29 16:37:37.152557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.646 [2024-09-29 16:37:37.152578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.646 [2024-09-29 16:37:37.152601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.646 [2024-09-29 16:37:37.152622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.646 [2024-09-29 16:37:37.152646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.646 [2024-09-29 16:37:37.152666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.646 [2024-09-29 16:37:37.152699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.646 [2024-09-29 16:37:37.152721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.646 [2024-09-29 16:37:37.152745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.646 [2024-09-29 16:37:37.152766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.647 [2024-09-29 16:37:37.152790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.647 [2024-09-29 16:37:37.152816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.647 [2024-09-29 16:37:37.152841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.647 [2024-09-29 16:37:37.152862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.647 [2024-09-29 16:37:37.152886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.647 [2024-09-29 16:37:37.152906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.647 [2024-09-29 16:37:37.152931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.647 [2024-09-29 16:37:37.152952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.647 [2024-09-29 
16:37:37.152976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.647 [2024-09-29 16:37:37.152996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.647 [2024-09-29 16:37:37.153021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.647 [2024-09-29 16:37:37.153042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.647 [2024-09-29 16:37:37.153065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.647 [2024-09-29 16:37:37.153085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.647 [2024-09-29 16:37:37.153109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.647 [2024-09-29 16:37:37.153130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.647 [2024-09-29 16:37:37.153154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.647 [2024-09-29 16:37:37.153176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.647 [2024-09-29 16:37:37.153199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.647 [2024-09-29 16:37:37.153220] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.647 [2024-09-29 16:37:37.153245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.647 [2024-09-29 16:37:37.153266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.647 [2024-09-29 16:37:37.153290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.647 [2024-09-29 16:37:37.153311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.647 [2024-09-29 16:37:37.153334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.647 [2024-09-29 16:37:37.153355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.647 [2024-09-29 16:37:37.153387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.647 [2024-09-29 16:37:37.153409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.647 [2024-09-29 16:37:37.153434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.647 [2024-09-29 16:37:37.153455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.647 [2024-09-29 16:37:37.153479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 
nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.647 [2024-09-29 16:37:37.153499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.647 [2024-09-29 16:37:37.153523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.647 [2024-09-29 16:37:37.153545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.647 [2024-09-29 16:37:37.153568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.647 [2024-09-29 16:37:37.153589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.647 [2024-09-29 16:37:37.153613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.647 [2024-09-29 16:37:37.153634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.647 [2024-09-29 16:37:37.153657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.647 [2024-09-29 16:37:37.153685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.647 [2024-09-29 16:37:37.153710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.647 [2024-09-29 16:37:37.153731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:36.647 [2024-09-29 16:37:37.153755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.647 [2024-09-29 16:37:37.153776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.647 [2024-09-29 16:37:37.153800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.647 [2024-09-29 16:37:37.153820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.647 [2024-09-29 16:37:37.153844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.647 [2024-09-29 16:37:37.153864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.647 [2024-09-29 16:37:37.153888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.647 [2024-09-29 16:37:37.153909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.647 [2024-09-29 16:37:37.153932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.647 [2024-09-29 16:37:37.153957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.647 [2024-09-29 16:37:37.153982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.647 [2024-09-29 16:37:37.154003] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.647 [2024-09-29 16:37:37.154027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.647 [2024-09-29 16:37:37.154048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.647 [2024-09-29 16:37:37.154072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.647 [2024-09-29 16:37:37.154092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.647 [2024-09-29 16:37:37.154114] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9f80 is same with the state(6) to be set 00:29:36.647 [2024-09-29 16:37:37.155851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.647 [2024-09-29 16:37:37.155885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.647 [2024-09-29 16:37:37.155918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.647 [2024-09-29 16:37:37.155941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.647 [2024-09-29 16:37:37.155966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.647 [2024-09-29 16:37:37.155988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.647 [2024-09-29 16:37:37.156013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.647 [2024-09-29 16:37:37.156033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.647 [2024-09-29 16:37:37.156058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.647 [2024-09-29 16:37:37.156079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.647 [2024-09-29 16:37:37.156103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.647 [2024-09-29 16:37:37.156124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.647 [2024-09-29 16:37:37.156147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.647 [2024-09-29 16:37:37.156168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.647 [2024-09-29 16:37:37.156192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.647 [2024-09-29 16:37:37.156213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.647 [2024-09-29 16:37:37.156237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:36.648 [2024-09-29 16:37:37.156263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.648 [2024-09-29 16:37:37.156288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.648 [2024-09-29 16:37:37.156309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.648 [2024-09-29 16:37:37.156334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.648 [2024-09-29 16:37:37.156354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.648 [2024-09-29 16:37:37.156378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.648 [2024-09-29 16:37:37.156398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.648 [2024-09-29 16:37:37.156422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.648 [2024-09-29 16:37:37.156443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.648 [2024-09-29 16:37:37.156467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.648 [2024-09-29 16:37:37.156487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.648 [2024-09-29 16:37:37.156511] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.648 [2024-09-29 16:37:37.156532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.648 [2024-09-29 16:37:37.156555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.648 [2024-09-29 16:37:37.156576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.648 [2024-09-29 16:37:37.156599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.648 [2024-09-29 16:37:37.156620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.648 [2024-09-29 16:37:37.156644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.648 [2024-09-29 16:37:37.156664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.648 [2024-09-29 16:37:37.156698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.648 [2024-09-29 16:37:37.156720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.648 [2024-09-29 16:37:37.156743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.648 [2024-09-29 16:37:37.156764] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.648 [2024-09-29 16:37:37.156786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.648 [2024-09-29 16:37:37.156808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.648 [2024-09-29 16:37:37.156857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.648 [2024-09-29 16:37:37.156879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.648 [2024-09-29 16:37:37.156903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.648 [2024-09-29 16:37:37.156924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.648 [2024-09-29 16:37:37.156948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.648 [2024-09-29 16:37:37.156969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.648 [2024-09-29 16:37:37.156993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.648 [2024-09-29 16:37:37.157013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.648 [2024-09-29 16:37:37.157036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.648 [2024-09-29 16:37:37.157057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.648 [2024-09-29 16:37:37.157082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.648 [2024-09-29 16:37:37.157103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.648 [2024-09-29 16:37:37.157127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.648 [2024-09-29 16:37:37.157147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.648 [2024-09-29 16:37:37.157172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.648 [2024-09-29 16:37:37.157193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.648 [2024-09-29 16:37:37.157218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.648 [2024-09-29 16:37:37.157238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.648 [2024-09-29 16:37:37.157261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.648 [2024-09-29 16:37:37.157282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.648 [2024-09-29 
16:37:37.157306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.648 [2024-09-29 16:37:37.157326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.648 [2024-09-29 16:37:37.157350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.648 [2024-09-29 16:37:37.157371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.648 [2024-09-29 16:37:37.157395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.648 [2024-09-29 16:37:37.157419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.648 [2024-09-29 16:37:37.157444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.648 [2024-09-29 16:37:37.157465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.648 [2024-09-29 16:37:37.157489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.648 [2024-09-29 16:37:37.157509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.648 [2024-09-29 16:37:37.157534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.648 [2024-09-29 16:37:37.157555] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.648 [2024-09-29 16:37:37.157578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.648 [2024-09-29 16:37:37.157599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.648 [2024-09-29 16:37:37.157622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.648 [2024-09-29 16:37:37.157643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.648 [2024-09-29 16:37:37.157667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.648 [2024-09-29 16:37:37.157696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.648 [2024-09-29 16:37:37.157721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.648 [2024-09-29 16:37:37.157742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.648 [2024-09-29 16:37:37.157765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.648 [2024-09-29 16:37:37.157787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.648 [2024-09-29 16:37:37.157811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 
nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.648 [2024-09-29 16:37:37.157831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.648 [2024-09-29 16:37:37.157856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.648 [2024-09-29 16:37:37.157876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.649 [2024-09-29 16:37:37.157901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.649 [2024-09-29 16:37:37.157921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.649 [2024-09-29 16:37:37.157945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.649 [2024-09-29 16:37:37.157966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.649 [2024-09-29 16:37:37.157994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.649 [2024-09-29 16:37:37.158015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.649 [2024-09-29 16:37:37.158039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.649 [2024-09-29 16:37:37.158060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:36.649 [2024-09-29 16:37:37.158084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.649 [2024-09-29 16:37:37.158104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.649 [2024-09-29 16:37:37.158128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.649 [2024-09-29 16:37:37.158149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.649 [2024-09-29 16:37:37.158172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.649 [2024-09-29 16:37:37.158193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.649 [2024-09-29 16:37:37.158217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.649 [2024-09-29 16:37:37.158238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.649 [2024-09-29 16:37:37.158261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.649 [2024-09-29 16:37:37.158282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.649 [2024-09-29 16:37:37.158306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.649 [2024-09-29 16:37:37.158326] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.649 [2024-09-29 16:37:37.158350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.649 [2024-09-29 16:37:37.158371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.649 [2024-09-29 16:37:37.158394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.649 [2024-09-29 16:37:37.158415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.649 [2024-09-29 16:37:37.158438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.649 [2024-09-29 16:37:37.158459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.649 [2024-09-29 16:37:37.158482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.649 [2024-09-29 16:37:37.158504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.649 [2024-09-29 16:37:37.158528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.649 [2024-09-29 16:37:37.158552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.649 [2024-09-29 16:37:37.158576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.649 [2024-09-29 16:37:37.158597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.649 [2024-09-29 16:37:37.158621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.649 [2024-09-29 16:37:37.158642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.649 [2024-09-29 16:37:37.158665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.649 [2024-09-29 16:37:37.158702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.649 [2024-09-29 16:37:37.158728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.649 [2024-09-29 16:37:37.158749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.649 [2024-09-29 16:37:37.158773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.649 [2024-09-29 16:37:37.158794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.649 [2024-09-29 16:37:37.160400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.649 [2024-09-29 16:37:37.160432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:36.649 [2024-09-29 16:37:37.160475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.649 [2024-09-29 16:37:37.160500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.649 [2024-09-29 16:37:37.160526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.649 [2024-09-29 16:37:37.160548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.649 [2024-09-29 16:37:37.160573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.649 [2024-09-29 16:37:37.160594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.649 [2024-09-29 16:37:37.160617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.649 [2024-09-29 16:37:37.160639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.649 [2024-09-29 16:37:37.160664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.649 [2024-09-29 16:37:37.160712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.649 [2024-09-29 16:37:37.160739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.649 [2024-09-29 16:37:37.160760] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.649 [2024-09-29 16:37:37.160789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.649 [2024-09-29 16:37:37.160812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.649 [2024-09-29 16:37:37.160836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.649 [2024-09-29 16:37:37.160858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.649 [2024-09-29 16:37:37.160881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.649 [2024-09-29 16:37:37.160902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.649 [2024-09-29 16:37:37.160926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.649 [2024-09-29 16:37:37.160947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.649 [2024-09-29 16:37:37.160972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.649 [2024-09-29 16:37:37.160994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.649 [2024-09-29 16:37:37.161018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.649 [2024-09-29 16:37:37.161039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.649 [2024-09-29 16:37:37.161063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.649 [2024-09-29 16:37:37.161084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.649 [2024-09-29 16:37:37.161108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.649 [2024-09-29 16:37:37.161129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.649 [2024-09-29 16:37:37.161153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.649 [2024-09-29 16:37:37.161174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.649 [2024-09-29 16:37:37.161198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.649 [2024-09-29 16:37:37.161219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.649 [2024-09-29 16:37:37.161243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.649 [2024-09-29 16:37:37.161265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:36.649 [2024-09-29 16:37:37.161288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.649 [2024-09-29 16:37:37.161310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.649 [2024-09-29 16:37:37.161336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.649 [2024-09-29 16:37:37.161361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.650 [2024-09-29 16:37:37.161385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.650 [2024-09-29 16:37:37.161406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.650 [2024-09-29 16:37:37.161491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.650 [2024-09-29 16:37:37.161514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.650 [2024-09-29 16:37:37.161539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.650 [2024-09-29 16:37:37.161560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.650 [2024-09-29 16:37:37.161584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.650 [2024-09-29 
16:37:37.161606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.650 [2024-09-29 16:37:37.161630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.650 [2024-09-29 16:37:37.161651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.650 [2024-09-29 16:37:37.161698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.650 [2024-09-29 16:37:37.161736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.650 [2024-09-29 16:37:37.161767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.650 [2024-09-29 16:37:37.161790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.650 [2024-09-29 16:37:37.161815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.650 [2024-09-29 16:37:37.161837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.650 [2024-09-29 16:37:37.161861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.650 [2024-09-29 16:37:37.161882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.650 [2024-09-29 16:37:37.161906] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.650 [2024-09-29 16:37:37.161928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.650 [2024-09-29 16:37:37.161952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.650 [2024-09-29 16:37:37.161973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.650 [2024-09-29 16:37:37.161997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.650 [2024-09-29 16:37:37.162019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.650 [2024-09-29 16:37:37.162048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.650 [2024-09-29 16:37:37.162071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.650 [2024-09-29 16:37:37.162095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.650 [2024-09-29 16:37:37.162116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.650 [2024-09-29 16:37:37.162140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.650 [2024-09-29 16:37:37.162161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.650 [2024-09-29 16:37:37.162185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.650 [2024-09-29 16:37:37.162206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.650 [2024-09-29 16:37:37.162230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.650 [2024-09-29 16:37:37.162252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.650 [2024-09-29 16:37:37.162276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.650 [2024-09-29 16:37:37.162298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.650 [2024-09-29 16:37:37.162322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.650 [2024-09-29 16:37:37.162342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.650 [2024-09-29 16:37:37.162366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.650 [2024-09-29 16:37:37.162387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.650 [2024-09-29 16:37:37.162411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.650 
[2024-09-29 16:37:37.162432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.650 [2024-09-29 16:37:37.162456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.650 [2024-09-29 16:37:37.162478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.650 [2024-09-29 16:37:37.162501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.650 [2024-09-29 16:37:37.162523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.650 [2024-09-29 16:37:37.162547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.650 [2024-09-29 16:37:37.162569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.650 [2024-09-29 16:37:37.162592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.650 [2024-09-29 16:37:37.162618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.650 [2024-09-29 16:37:37.162643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.650 [2024-09-29 16:37:37.162664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.650 [2024-09-29 16:37:37.162705] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.650 [2024-09-29 16:37:37.162728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.650 [2024-09-29 16:37:37.162752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.650 [2024-09-29 16:37:37.162773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.650 [2024-09-29 16:37:37.162797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.650 [2024-09-29 16:37:37.162818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.650 [2024-09-29 16:37:37.162842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.650 [2024-09-29 16:37:37.162863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.650 [2024-09-29 16:37:37.162887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.650 [2024-09-29 16:37:37.162909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.650 [2024-09-29 16:37:37.162932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.650 [2024-09-29 16:37:37.162953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.650 [2024-09-29 16:37:37.162978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.650 [2024-09-29 16:37:37.162999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.650 [2024-09-29 16:37:37.163023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.650 [2024-09-29 16:37:37.163045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.650 [2024-09-29 16:37:37.163069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.650 [2024-09-29 16:37:37.163090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.650 [2024-09-29 16:37:37.163114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.650 [2024-09-29 16:37:37.163135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.650 [2024-09-29 16:37:37.163159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.650 [2024-09-29 16:37:37.163180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.650 [2024-09-29 16:37:37.163203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:36.650 [2024-09-29 16:37:37.163230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.650 [2024-09-29 16:37:37.163255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.650 [2024-09-29 16:37:37.163276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.650 [2024-09-29 16:37:37.163299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.650 [2024-09-29 16:37:37.163321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.650 [2024-09-29 16:37:37.163345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.651 [2024-09-29 16:37:37.163367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.651 [2024-09-29 16:37:37.163392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.651 [2024-09-29 16:37:37.163413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.651 [2024-09-29 16:37:37.163438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.651 [2024-09-29 16:37:37.163460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.651 [2024-09-29 16:37:37.163484] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.651 [2024-09-29 16:37:37.163505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.651 [2024-09-29 16:37:37.163527] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fa480 is same with the state(6) to be set 00:29:36.651 [2024-09-29 16:37:37.165111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.651 [2024-09-29 16:37:37.165144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.651 [2024-09-29 16:37:37.165177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.651 [2024-09-29 16:37:37.165201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.651 [2024-09-29 16:37:37.165226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.651 [2024-09-29 16:37:37.165249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.651 [2024-09-29 16:37:37.165273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.651 [2024-09-29 16:37:37.165295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.651 [2024-09-29 16:37:37.165319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.651 [2024-09-29 16:37:37.165341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.651 [2024-09-29 16:37:37.165365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.651 [2024-09-29 16:37:37.165392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.651 [2024-09-29 16:37:37.165417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.651 [2024-09-29 16:37:37.165438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.651 [2024-09-29 16:37:37.165463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.651 [2024-09-29 16:37:37.165485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.651 [2024-09-29 16:37:37.165508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.651 [2024-09-29 16:37:37.165529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.651 [2024-09-29 16:37:37.165553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.651 [2024-09-29 16:37:37.165575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:36.651 [2024-09-29 16:37:37.165599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.651 [2024-09-29 16:37:37.165620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.651 [2024-09-29 16:37:37.165644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.651 [2024-09-29 16:37:37.165665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.651 [2024-09-29 16:37:37.165699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.651 [2024-09-29 16:37:37.165722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.651 [2024-09-29 16:37:37.165746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.651 [2024-09-29 16:37:37.165767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.651 [2024-09-29 16:37:37.165790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.651 [2024-09-29 16:37:37.165811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.651 [2024-09-29 16:37:37.165836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.651 [2024-09-29 
16:37:37.165857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.651 [2024-09-29 16:37:37.165881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.651 [2024-09-29 16:37:37.165902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.651 [2024-09-29 16:37:37.165927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.651 [2024-09-29 16:37:37.165948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.651 [2024-09-29 16:37:37.165977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.651 [2024-09-29 16:37:37.165999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.651 [2024-09-29 16:37:37.166023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.651 [2024-09-29 16:37:37.166044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.651 [2024-09-29 16:37:37.166069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.651 [2024-09-29 16:37:37.166103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.651 [2024-09-29 16:37:37.166129] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.651 [2024-09-29 16:37:37.166150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.651 [2024-09-29 16:37:37.166175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.651 [2024-09-29 16:37:37.166196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.651 [2024-09-29 16:37:37.166221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.651 [2024-09-29 16:37:37.166242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.651 [2024-09-29 16:37:37.166265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.651 [2024-09-29 16:37:37.166286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.651 [2024-09-29 16:37:37.166310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.651 [2024-09-29 16:37:37.166331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.651 [2024-09-29 16:37:37.166356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.651 [2024-09-29 16:37:37.166377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.651 [2024-09-29 16:37:37.166400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.651 [2024-09-29 16:37:37.166421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.651 [2024-09-29 16:37:37.166445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.651 [2024-09-29 16:37:37.166466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.651 [2024-09-29 16:37:37.166490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.651 [2024-09-29 16:37:37.166511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.651 [2024-09-29 16:37:37.166535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.651 [2024-09-29 16:37:37.166560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.651 [2024-09-29 16:37:37.166585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.651 [2024-09-29 16:37:37.166605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.651 [2024-09-29 16:37:37.166630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.651 
[2024-09-29 16:37:37.166651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.651 [2024-09-29 16:37:37.166681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.651 [2024-09-29 16:37:37.166704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.651 [2024-09-29 16:37:37.166729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.651 [2024-09-29 16:37:37.166750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.651 [2024-09-29 16:37:37.166773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.651 [2024-09-29 16:37:37.166793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.652 [2024-09-29 16:37:37.166817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.652 [2024-09-29 16:37:37.166838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.652 [2024-09-29 16:37:37.166862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.652 [2024-09-29 16:37:37.166883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.652 [2024-09-29 16:37:37.166906] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.652 [2024-09-29 16:37:37.166927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.652 [2024-09-29 16:37:37.166951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.652 [2024-09-29 16:37:37.166971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.652 [2024-09-29 16:37:37.166996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.652 [2024-09-29 16:37:37.167017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.652 [2024-09-29 16:37:37.167041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.652 [2024-09-29 16:37:37.167061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.652 [2024-09-29 16:37:37.167085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.652 [2024-09-29 16:37:37.167106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.652 [2024-09-29 16:37:37.167134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.652 [2024-09-29 16:37:37.167155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.652 [2024-09-29 16:37:37.167188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.652 [2024-09-29 16:37:37.167209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.652 [2024-09-29 16:37:37.167233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.652 [2024-09-29 16:37:37.167254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.652 [2024-09-29 16:37:37.167279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.652 [2024-09-29 16:37:37.167300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.652 [2024-09-29 16:37:37.167324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.652 [2024-09-29 16:37:37.167344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.652 [2024-09-29 16:37:37.167369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.652 [2024-09-29 16:37:37.167389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.652 [2024-09-29 16:37:37.167414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:36.652 [2024-09-29 16:37:37.167434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.652 [2024-09-29 16:37:37.167458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.652 [2024-09-29 16:37:37.167479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.652 [2024-09-29 16:37:37.167502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.652 [2024-09-29 16:37:37.167523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.652 [2024-09-29 16:37:37.167547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.652 [2024-09-29 16:37:37.167568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.652 [2024-09-29 16:37:37.167592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.652 [2024-09-29 16:37:37.167612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.652 [2024-09-29 16:37:37.167637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.652 [2024-09-29 16:37:37.167658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.652 [2024-09-29 16:37:37.167689] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.652 [2024-09-29 16:37:37.167716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.652 [2024-09-29 16:37:37.167740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.652 [2024-09-29 16:37:37.167761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.652 [2024-09-29 16:37:37.167786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.652 [2024-09-29 16:37:37.167807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.652 [2024-09-29 16:37:37.167831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.652 [2024-09-29 16:37:37.167852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.652 [2024-09-29 16:37:37.167875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.652 [2024-09-29 16:37:37.167896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.652 [2024-09-29 16:37:37.167920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.652 [2024-09-29 16:37:37.167941] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.652 [2024-09-29 16:37:37.167964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.652 [2024-09-29 16:37:37.167985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.652 [2024-09-29 16:37:37.168008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.652 [2024-09-29 16:37:37.168029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.652 [2024-09-29 16:37:37.168053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.652 [2024-09-29 16:37:37.168074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.652 [2024-09-29 16:37:37.168095] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fa700 is same with the state(6) to be set 00:29:36.652 [2024-09-29 16:37:37.169682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.652 [2024-09-29 16:37:37.169713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.652 [2024-09-29 16:37:37.169744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.652 [2024-09-29 16:37:37.169767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:36.652 [2024-09-29 16:37:37.169792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.652 [2024-09-29 16:37:37.169813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.652 [2024-09-29 16:37:37.169837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.652 [2024-09-29 16:37:37.169863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.652 [2024-09-29 16:37:37.169889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.652 [2024-09-29 16:37:37.169910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.652 [2024-09-29 16:37:37.169934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.652 [2024-09-29 16:37:37.169955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.652 [2024-09-29 16:37:37.169979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.653 [2024-09-29 16:37:37.170000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.653 [2024-09-29 16:37:37.170025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.653 [2024-09-29 
16:37:37.170046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.653 [2024-09-29 16:37:37.170070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.653 [2024-09-29 16:37:37.170091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.653 [2024-09-29 16:37:37.170116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.653 [2024-09-29 16:37:37.170136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.653 [2024-09-29 16:37:37.170161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.653 [2024-09-29 16:37:37.170182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.653 [2024-09-29 16:37:37.170206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.653 [2024-09-29 16:37:37.170226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.653 [2024-09-29 16:37:37.170251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.653 [2024-09-29 16:37:37.170272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.653 [2024-09-29 16:37:37.170296] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.653 [2024-09-29 16:37:37.170317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.653 [2024-09-29 16:37:37.170341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.653 [2024-09-29 16:37:37.170362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.653 [2024-09-29 16:37:37.170387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.653 [2024-09-29 16:37:37.170407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.653 [2024-09-29 16:37:37.170438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.653 [2024-09-29 16:37:37.170460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.653 [2024-09-29 16:37:37.170484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.653 [2024-09-29 16:37:37.170505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.653 [2024-09-29 16:37:37.170530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.653 [2024-09-29 16:37:37.170551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.653 [2024-09-29 16:37:37.170574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.653 [2024-09-29 16:37:37.170595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.653 [2024-09-29 16:37:37.170635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.653 [2024-09-29 16:37:37.170657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.653 [2024-09-29 16:37:37.170688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.653 [2024-09-29 16:37:37.170711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.653 [2024-09-29 16:37:37.170736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.653 [2024-09-29 16:37:37.170757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.653 [2024-09-29 16:37:37.170781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.653 [2024-09-29 16:37:37.170802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.653 [2024-09-29 16:37:37.170825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.653 
[2024-09-29 16:37:37.170847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.653 [2024-09-29 16:37:37.170871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.653 [2024-09-29 16:37:37.170893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.653 [2024-09-29 16:37:37.170917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.653 [2024-09-29 16:37:37.170938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.653 [2024-09-29 16:37:37.170962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.653 [2024-09-29 16:37:37.170983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.653 [2024-09-29 16:37:37.171007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.653 [2024-09-29 16:37:37.171032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.653 [2024-09-29 16:37:37.171058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.653 [2024-09-29 16:37:37.171079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.653 [2024-09-29 16:37:37.171103] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.653 [2024-09-29 16:37:37.171124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.653 [2024-09-29 16:37:37.171148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.653 [2024-09-29 16:37:37.171169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.653 [2024-09-29 16:37:37.171194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.653 [2024-09-29 16:37:37.171215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.653 [2024-09-29 16:37:37.171239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.653 [2024-09-29 16:37:37.171259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.653 [2024-09-29 16:37:37.171284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.653 [2024-09-29 16:37:37.171305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.653 [2024-09-29 16:37:37.171329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.653 [2024-09-29 16:37:37.171349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.653 [2024-09-29 16:37:37.171375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.653 [2024-09-29 16:37:37.171396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.653 [2024-09-29 16:37:37.171420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.653 [2024-09-29 16:37:37.171441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.653 [2024-09-29 16:37:37.171464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.653 [2024-09-29 16:37:37.171486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.653 [2024-09-29 16:37:37.171510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.653 [2024-09-29 16:37:37.171531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.653 [2024-09-29 16:37:37.171554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.653 [2024-09-29 16:37:37.171575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.653 [2024-09-29 16:37:37.171606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:36.653 [2024-09-29 16:37:37.171628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.653 [2024-09-29 16:37:37.171652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.653 [2024-09-29 16:37:37.171679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.653 [2024-09-29 16:37:37.171706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.653 [2024-09-29 16:37:37.171727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.653 [2024-09-29 16:37:37.171751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.653 [2024-09-29 16:37:37.171771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.653 [2024-09-29 16:37:37.171796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.653 [2024-09-29 16:37:37.171817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.653 [2024-09-29 16:37:37.171840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.654 [2024-09-29 16:37:37.171861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.654 [2024-09-29 16:37:37.171885] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.654 [2024-09-29 16:37:37.171906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.654 [2024-09-29 16:37:37.171930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.654 [2024-09-29 16:37:37.171951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.654 [2024-09-29 16:37:37.171975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.654 [2024-09-29 16:37:37.171995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.654 [2024-09-29 16:37:37.172019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.654 [2024-09-29 16:37:37.172040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.654 [2024-09-29 16:37:37.172064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.654 [2024-09-29 16:37:37.172085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.654 [2024-09-29 16:37:37.172108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.654 [2024-09-29 16:37:37.172129] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.654 [2024-09-29 16:37:37.172153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.654 [2024-09-29 16:37:37.172178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.654 [2024-09-29 16:37:37.172203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.654 [2024-09-29 16:37:37.172223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.654 [2024-09-29 16:37:37.172247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.654 [2024-09-29 16:37:37.172267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.654 [2024-09-29 16:37:37.172291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.654 [2024-09-29 16:37:37.172312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.654 [2024-09-29 16:37:37.172336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.654 [2024-09-29 16:37:37.172356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.654 [2024-09-29 16:37:37.172379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.654 [2024-09-29 16:37:37.172400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.654 [2024-09-29 16:37:37.172424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.654 [2024-09-29 16:37:37.172444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.654 [2024-09-29 16:37:37.172467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.654 [2024-09-29 16:37:37.172487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.654 [2024-09-29 16:37:37.172511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.654 [2024-09-29 16:37:37.172532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.654 [2024-09-29 16:37:37.172556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.654 [2024-09-29 16:37:37.172576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.654 [2024-09-29 16:37:37.172599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.654 [2024-09-29 16:37:37.172620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.654 [2024-09-29 
16:37:37.172642] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fa980 is same with the state(6) to be set 00:29:36.654 [2024-09-29 16:37:37.174336] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:36.654 [2024-09-29 16:37:37.174457] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:36.654 [2024-09-29 16:37:37.174699] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.654 [2024-09-29 16:37:37.174743] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:29:36.654 [2024-09-29 16:37:37.174776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:29:36.654 [2024-09-29 16:37:37.174802] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:29:36.654 [2024-09-29 16:37:37.175085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.654 [2024-09-29 16:37:37.175125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:29:36.654 [2024-09-29 16:37:37.175150] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(6) to be set 00:29:36.654 [2024-09-29 16:37:37.175176] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:29:36.654 [2024-09-29 16:37:37.175198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:29:36.654 [2024-09-29 16:37:37.175223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:29:36.654 [2024-09-29 16:37:37.175307] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:36.654 [2024-09-29 16:37:37.175382] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:36.654 [2024-09-29 16:37:37.175423] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:29:36.654 [2024-09-29 16:37:37.175621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:29:36.654 [2024-09-29 16:37:37.175662] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:36.654 [2024-09-29 16:37:37.175910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.654 [2024-09-29 16:37:37.175947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:29:36.654 [2024-09-29 16:37:37.175971] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:29:36.654 [2024-09-29 16:37:37.176089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.654 [2024-09-29 16:37:37.176123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:29:36.654 [2024-09-29 16:37:37.176146] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2f00 is same with the state(6) to be set 00:29:36.654 [2024-09-29 16:37:37.176286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.654 [2024-09-29 16:37:37.176319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3900 with addr=10.0.0.2, port=4420 00:29:36.654 [2024-09-29 16:37:37.176343] 
nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3900 is same with the state(6) to be set 00:29:36.654 [2024-09-29 16:37:37.176454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.654 [2024-09-29 16:37:37.176486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4300 with addr=10.0.0.2, port=4420 00:29:36.654 [2024-09-29 16:37:37.176509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(6) to be set 00:29:36.654 [2024-09-29 16:37:37.179020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.654 [2024-09-29 16:37:37.179052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.654 [2024-09-29 16:37:37.179100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.654 [2024-09-29 16:37:37.179123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.654 [2024-09-29 16:37:37.179153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.654 [2024-09-29 16:37:37.179175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.654 [2024-09-29 16:37:37.179198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.654 [2024-09-29 16:37:37.179219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:36.654 [2024-09-29 16:37:37.179242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.654 [2024-09-29 16:37:37.179262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.654 [2024-09-29 16:37:37.179300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.654 [2024-09-29 16:37:37.179322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.654 [2024-09-29 16:37:37.179346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.654 [2024-09-29 16:37:37.179366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.654 [2024-09-29 16:37:37.179389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.654 [2024-09-29 16:37:37.179409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.654 [2024-09-29 16:37:37.179432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.654 [2024-09-29 16:37:37.179452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.654 [2024-09-29 16:37:37.179475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.655 [2024-09-29 16:37:37.179496] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.655 [2024-09-29 16:37:37.179519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.655 [2024-09-29 16:37:37.179540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.655 [2024-09-29 16:37:37.179563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.655 [2024-09-29 16:37:37.179583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.655 [2024-09-29 16:37:37.179605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.655 [2024-09-29 16:37:37.179626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.655 [2024-09-29 16:37:37.179649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.655 [2024-09-29 16:37:37.179670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.655 [2024-09-29 16:37:37.179703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.655 [2024-09-29 16:37:37.179729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.655 [2024-09-29 16:37:37.179752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.655 [2024-09-29 16:37:37.179774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.655 [2024-09-29 16:37:37.179796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.655 [2024-09-29 16:37:37.179817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.655 [2024-09-29 16:37:37.179840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.655 [2024-09-29 16:37:37.179860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.655 [2024-09-29 16:37:37.179883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.655 [2024-09-29 16:37:37.179904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.655 [2024-09-29 16:37:37.179926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.655 [2024-09-29 16:37:37.179947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.655 [2024-09-29 16:37:37.179969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.655 [2024-09-29 16:37:37.179989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:36.655 [2024-09-29 16:37:37.180012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.655 [2024-09-29 16:37:37.180032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.655 [2024-09-29 16:37:37.180055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.655 [2024-09-29 16:37:37.180075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.655 [2024-09-29 16:37:37.180098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.655 [2024-09-29 16:37:37.180119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.655 [2024-09-29 16:37:37.180151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.655 [2024-09-29 16:37:37.180188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.655 [2024-09-29 16:37:37.180231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.655 [2024-09-29 16:37:37.180266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.655 [2024-09-29 16:37:37.180309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.655 [2024-09-29 16:37:37.180348] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.655 [2024-09-29 16:37:37.180381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.655 [2024-09-29 16:37:37.180425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.655 [2024-09-29 16:37:37.180468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.655 [2024-09-29 16:37:37.180495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.655 [2024-09-29 16:37:37.180519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.655 [2024-09-29 16:37:37.180540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.655 [2024-09-29 16:37:37.180563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.655 [2024-09-29 16:37:37.180583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.655 [2024-09-29 16:37:37.180606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.655 [2024-09-29 16:37:37.180627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.655 [2024-09-29 16:37:37.180650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.655 [2024-09-29 16:37:37.180670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.655 [2024-09-29 16:37:37.180702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.655 [2024-09-29 16:37:37.180724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.655 [2024-09-29 16:37:37.180747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.655 [2024-09-29 16:37:37.180767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.655 [2024-09-29 16:37:37.180790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.655 [2024-09-29 16:37:37.180810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.655 [2024-09-29 16:37:37.180833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.655 [2024-09-29 16:37:37.180854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.655 [2024-09-29 16:37:37.180876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.655 [2024-09-29 16:37:37.180897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:36.655 [2024-09-29 16:37:37.180921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.655 [2024-09-29 16:37:37.180941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.655 [2024-09-29 16:37:37.180965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.655 [2024-09-29 16:37:37.180985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.655 [2024-09-29 16:37:37.181013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.655 [2024-09-29 16:37:37.181035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.655 [2024-09-29 16:37:37.181057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.655 [2024-09-29 16:37:37.181079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.655 [2024-09-29 16:37:37.181102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.655 [2024-09-29 16:37:37.181123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.655 [2024-09-29 16:37:37.181146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.655 [2024-09-29 
16:37:37.181170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.655 [2024-09-29 16:37:37.181213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.655 [2024-09-29 16:37:37.181249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.655 [2024-09-29 16:37:37.181277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.655 [2024-09-29 16:37:37.181297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.655 [2024-09-29 16:37:37.181320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.655 [2024-09-29 16:37:37.181340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.655 [2024-09-29 16:37:37.181364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.655 [2024-09-29 16:37:37.181385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.655 [2024-09-29 16:37:37.181407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.655 [2024-09-29 16:37:37.181428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.655 [2024-09-29 16:37:37.181449] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.656 [2024-09-29 16:37:37.181470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.656 [2024-09-29 16:37:37.181493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.656 [2024-09-29 16:37:37.181525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.656 [2024-09-29 16:37:37.181555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.656 [2024-09-29 16:37:37.181577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.656 [2024-09-29 16:37:37.181600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.656 [2024-09-29 16:37:37.181626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.656 [2024-09-29 16:37:37.181650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.656 [2024-09-29 16:37:37.181670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.656 [2024-09-29 16:37:37.181711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.656 [2024-09-29 16:37:37.181733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.656 [2024-09-29 16:37:37.181756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.656 [2024-09-29 16:37:37.181777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.656 [2024-09-29 16:37:37.181799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.656 [2024-09-29 16:37:37.181820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.656 [2024-09-29 16:37:37.181842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.656 [2024-09-29 16:37:37.181862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.656 [2024-09-29 16:37:37.181886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.656 [2024-09-29 16:37:37.181907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.656 [2024-09-29 16:37:37.181929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.656 [2024-09-29 16:37:37.181949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.656 [2024-09-29 16:37:37.181972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.656 
[2024-09-29 16:37:37.181993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.656 [2024-09-29 16:37:37.182016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.656 [2024-09-29 16:37:37.182036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.656 [2024-09-29 16:37:37.182059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.656 [2024-09-29 16:37:37.182079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.656 [2024-09-29 16:37:37.182102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.656 [2024-09-29 16:37:37.182123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.656 [2024-09-29 16:37:37.182144] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fac00 is same with the state(6) to be set 00:29:36.656 [2024-09-29 16:37:37.183780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.656 [2024-09-29 16:37:37.183822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.656 [2024-09-29 16:37:37.183857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.656 [2024-09-29 16:37:37.183880] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.656 [2024-09-29 16:37:37.183905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.656 [2024-09-29 16:37:37.183926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.656 [2024-09-29 16:37:37.183951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.656 [2024-09-29 16:37:37.183973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.656 [2024-09-29 16:37:37.183996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.656 [2024-09-29 16:37:37.184031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.656 [2024-09-29 16:37:37.184056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.656 [2024-09-29 16:37:37.184078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.656 [2024-09-29 16:37:37.184101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.656 [2024-09-29 16:37:37.184123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.656 [2024-09-29 16:37:37.184145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.656 [2024-09-29 16:37:37.184166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.656 [2024-09-29 16:37:37.184189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.656 [2024-09-29 16:37:37.184211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.656 [2024-09-29 16:37:37.184234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.656 [2024-09-29 16:37:37.184255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.656 [2024-09-29 16:37:37.184278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.656 [2024-09-29 16:37:37.184303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.656 [2024-09-29 16:37:37.184350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.656 [2024-09-29 16:37:37.184381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.656 [2024-09-29 16:37:37.184406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.656 [2024-09-29 16:37:37.184426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:36.656 [2024-09-29 16:37:37.184454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.656 [2024-09-29 16:37:37.184490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.656 [2024-09-29 16:37:37.184534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.656 [2024-09-29 16:37:37.184572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.656 [2024-09-29 16:37:37.184618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.656 [2024-09-29 16:37:37.184649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.656 [2024-09-29 16:37:37.184685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.656 [2024-09-29 16:37:37.184710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.656 [2024-09-29 16:37:37.184736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.656 [2024-09-29 16:37:37.184757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.656 [2024-09-29 16:37:37.184779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.656 [2024-09-29 16:37:37.184800] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.656 [2024-09-29 16:37:37.184823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.656 [2024-09-29 16:37:37.184843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.656 [2024-09-29 16:37:37.184866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.656 [2024-09-29 16:37:37.184888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.656 [2024-09-29 16:37:37.184911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.656 [2024-09-29 16:37:37.184931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.656 [2024-09-29 16:37:37.184954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.656 [2024-09-29 16:37:37.184975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.656 [2024-09-29 16:37:37.184998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.917 [2024-09-29 16:37:37.185018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.917 [2024-09-29 16:37:37.185041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.917 [2024-09-29 16:37:37.185062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.917 [2024-09-29 16:37:37.185085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.917 [2024-09-29 16:37:37.185112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.917 [2024-09-29 16:37:37.185136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.917 [2024-09-29 16:37:37.185157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.917 [2024-09-29 16:37:37.185181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.917 [2024-09-29 16:37:37.185201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.918 [2024-09-29 16:37:37.185224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-09-29 16:37:37.185245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.918 [2024-09-29 16:37:37.185269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-09-29 16:37:37.185289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:36.918 [2024-09-29 16:37:37.185312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-09-29 16:37:37.185332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.918 [2024-09-29 16:37:37.185355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-09-29 16:37:37.185376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.918 [2024-09-29 16:37:37.185400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-09-29 16:37:37.185420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.918 [2024-09-29 16:37:37.185442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-09-29 16:37:37.185477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.918 [2024-09-29 16:37:37.185517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-09-29 16:37:37.185555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.918 [2024-09-29 16:37:37.185603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-09-29 
16:37:37.185636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.918 [2024-09-29 16:37:37.185677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-09-29 16:37:37.185712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.918 [2024-09-29 16:37:37.185749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-09-29 16:37:37.185786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.918 [2024-09-29 16:37:37.185838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-09-29 16:37:37.185877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.918 [2024-09-29 16:37:37.185914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-09-29 16:37:37.185945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.918 [2024-09-29 16:37:37.185979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-09-29 16:37:37.186013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.918 [2024-09-29 16:37:37.186060] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-09-29 16:37:37.186095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.918 [2024-09-29 16:37:37.186131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-09-29 16:37:37.186162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.918 [2024-09-29 16:37:37.186203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-09-29 16:37:37.186239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.918 [2024-09-29 16:37:37.186285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-09-29 16:37:37.186321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.918 [2024-09-29 16:37:37.186358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-09-29 16:37:37.186389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.918 [2024-09-29 16:37:37.186425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-09-29 16:37:37.186459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.918 [2024-09-29 16:37:37.186505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-09-29 16:37:37.186540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.918 [2024-09-29 16:37:37.186575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-09-29 16:37:37.186612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.918 [2024-09-29 16:37:37.186652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-09-29 16:37:37.186694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.918 [2024-09-29 16:37:37.186741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-09-29 16:37:37.186782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.918 [2024-09-29 16:37:37.186819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-09-29 16:37:37.186850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.918 [2024-09-29 16:37:37.186885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 
[2024-09-29 16:37:37.186924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.918 [2024-09-29 16:37:37.186970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-09-29 16:37:37.187006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.918 [2024-09-29 16:37:37.187042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-09-29 16:37:37.187072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.918 [2024-09-29 16:37:37.187106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-09-29 16:37:37.187142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.918 [2024-09-29 16:37:37.187188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-09-29 16:37:37.187224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.918 [2024-09-29 16:37:37.187260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-09-29 16:37:37.187297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.918 [2024-09-29 16:37:37.187334] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-09-29 16:37:37.187369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.918 [2024-09-29 16:37:37.187415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-09-29 16:37:37.187449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.918 [2024-09-29 16:37:37.187491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-09-29 16:37:37.187519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.918 [2024-09-29 16:37:37.187544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-09-29 16:37:37.187565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.918 [2024-09-29 16:37:37.187589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-09-29 16:37:37.187626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.918 [2024-09-29 16:37:37.187662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-09-29 16:37:37.187692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.918 [2024-09-29 16:37:37.187715] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fae80 is same with the state(6) to be set 00:29:36.918 [2024-09-29 16:37:37.189265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-09-29 16:37:37.189298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.918 [2024-09-29 16:37:37.189329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-09-29 16:37:37.189351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.918 [2024-09-29 16:37:37.189374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-09-29 16:37:37.189395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.919 [2024-09-29 16:37:37.189418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.919 [2024-09-29 16:37:37.189439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.919 [2024-09-29 16:37:37.189479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.919 [2024-09-29 16:37:37.189501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:36.919 [2024-09-29 16:37:37.189524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.919 [2024-09-29 16:37:37.189545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.919 [2024-09-29 16:37:37.189568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.919 [2024-09-29 16:37:37.189589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.919 [2024-09-29 16:37:37.189612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.919 [2024-09-29 16:37:37.189632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.919 [2024-09-29 16:37:37.189656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.919 [2024-09-29 16:37:37.189689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.919 [2024-09-29 16:37:37.189716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.919 [2024-09-29 16:37:37.189738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.919 [2024-09-29 16:37:37.189761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.919 [2024-09-29 16:37:37.189781] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.919 [2024-09-29 16:37:37.189810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.919 [2024-09-29 16:37:37.189831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.919 [2024-09-29 16:37:37.189854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.919 [2024-09-29 16:37:37.189875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.919 [2024-09-29 16:37:37.189897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.919 [2024-09-29 16:37:37.189918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.919 [2024-09-29 16:37:37.189940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.919 [2024-09-29 16:37:37.189961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.919 [2024-09-29 16:37:37.189984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.919 [2024-09-29 16:37:37.190005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.919 [2024-09-29 16:37:37.190027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 
nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.919 [2024-09-29 16:37:37.190048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.919 [2024-09-29 16:37:37.190071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.919 [2024-09-29 16:37:37.190091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.919 [2024-09-29 16:37:37.190114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.919 [2024-09-29 16:37:37.190135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.919 [2024-09-29 16:37:37.190157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.919 [2024-09-29 16:37:37.190178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.919 [2024-09-29 16:37:37.190201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.919 [2024-09-29 16:37:37.190221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.919 [2024-09-29 16:37:37.190244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.919 [2024-09-29 16:37:37.190264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:36.919 [2024-09-29 16:37:37.190287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.919 [2024-09-29 16:37:37.190307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.919 [2024-09-29 16:37:37.190330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.919 [2024-09-29 16:37:37.190355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.919 [2024-09-29 16:37:37.190379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.919 [2024-09-29 16:37:37.190400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.919 [2024-09-29 16:37:37.190423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.919 [2024-09-29 16:37:37.190443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.919 [2024-09-29 16:37:37.190466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.919 [2024-09-29 16:37:37.190487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.919 [2024-09-29 16:37:37.190509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.919 [2024-09-29 16:37:37.190530] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.919 [2024-09-29 16:37:37.190553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.919 [2024-09-29 16:37:37.190573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.919 [2024-09-29 16:37:37.190596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.919 [2024-09-29 16:37:37.190616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.919 [2024-09-29 16:37:37.190639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.919 [2024-09-29 16:37:37.190659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.919 [2024-09-29 16:37:37.190690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.919 [2024-09-29 16:37:37.190713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.919 [2024-09-29 16:37:37.190736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.919 [2024-09-29 16:37:37.190757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.919 [2024-09-29 16:37:37.190780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.919 [2024-09-29 16:37:37.190801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.919 [2024-09-29 16:37:37.190823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.919 [2024-09-29 16:37:37.190843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.919 [2024-09-29 16:37:37.190866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.919 [2024-09-29 16:37:37.190887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.919 [2024-09-29 16:37:37.190914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.919 [2024-09-29 16:37:37.190935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.919 [2024-09-29 16:37:37.190958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.919 [2024-09-29 16:37:37.190979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.919 [2024-09-29 16:37:37.191002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.919 [2024-09-29 16:37:37.191023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:36.919 [2024-09-29 16:37:37.191045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.919 [2024-09-29 16:37:37.191065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.919 [2024-09-29 16:37:37.191088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.919 [2024-09-29 16:37:37.191108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.919 [2024-09-29 16:37:37.191131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.919 [2024-09-29 16:37:37.191152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.919 [2024-09-29 16:37:37.191174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.920 [2024-09-29 16:37:37.191195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.920 [2024-09-29 16:37:37.191218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.920 [2024-09-29 16:37:37.191238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.920 [2024-09-29 16:37:37.191260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.920 [2024-09-29 
16:37:37.191280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.920 [2024-09-29 16:37:37.191303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.920 [2024-09-29 16:37:37.191324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.920 [2024-09-29 16:37:37.191347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.920 [2024-09-29 16:37:37.191367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.920 [2024-09-29 16:37:37.191390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.920 [2024-09-29 16:37:37.191411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.920 [2024-09-29 16:37:37.191434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.920 [2024-09-29 16:37:37.191483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.920 [2024-09-29 16:37:37.191508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.920 [2024-09-29 16:37:37.191529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.920 [2024-09-29 16:37:37.191552] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.920 [2024-09-29 16:37:37.191573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.920 [2024-09-29 16:37:37.191596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.920 [2024-09-29 16:37:37.191617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.920 [2024-09-29 16:37:37.191640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.920 [2024-09-29 16:37:37.191660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.920 [2024-09-29 16:37:37.191691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.920 [2024-09-29 16:37:37.191714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.920 [2024-09-29 16:37:37.191737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.920 [2024-09-29 16:37:37.191758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.920 [2024-09-29 16:37:37.191781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.920 [2024-09-29 16:37:37.191801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.920 [2024-09-29 16:37:37.191824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.920 [2024-09-29 16:37:37.191845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.920 [2024-09-29 16:37:37.191868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.920 [2024-09-29 16:37:37.191888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.920 [2024-09-29 16:37:37.191911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.920 [2024-09-29 16:37:37.191932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.920 [2024-09-29 16:37:37.191955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.920 [2024-09-29 16:37:37.191976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.920 [2024-09-29 16:37:37.191999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.920 [2024-09-29 16:37:37.192020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.920 [2024-09-29 16:37:37.192048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.920 
[2024-09-29 16:37:37.192070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.920 [2024-09-29 16:37:37.192093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.920 [2024-09-29 16:37:37.192114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.920 [2024-09-29 16:37:37.192136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.920 [2024-09-29 16:37:37.192156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.920 [2024-09-29 16:37:37.192177] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fb100 is same with the state(6) to be set
00:29:36.920 [2024-09-29 16:37:37.196747] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:29:36.920 [2024-09-29 16:37:37.196789] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:29:36.920 task offset: 16384 on job bdev=Nvme10n1 fails
00:29:36.920
00:29:36.920 Latency(us)
00:29:36.920 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:36.920 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:36.920 Job: Nvme1n1 ended in about 0.96 seconds with error
00:29:36.920 Verification LBA range: start 0x0 length 0x400
00:29:36.920 Nvme1n1 : 0.96 132.89 8.31 66.44 0.00 317439.24 20680.25 309135.74
00:29:36.920 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:36.920 Job: Nvme2n1 ended in about 0.97 seconds with error
00:29:36.920 Verification LBA range: start 0x0 length 0x400
00:29:36.920 Nvme2n1 : 0.97 132.25 8.27 66.12 0.00 312269.94 41166.32 260978.92
00:29:36.920 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:36.920 Job: Nvme3n1 ended in about 0.97 seconds with error
00:29:36.920 Verification LBA range: start 0x0 length 0x400
00:29:36.920 Nvme3n1 : 0.97 135.72 8.48 65.80 0.00 300982.62 40389.59 265639.25
00:29:36.920 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:36.920 Job: Nvme4n1 ended in about 0.98 seconds with error
00:29:36.920 Verification LBA range: start 0x0 length 0x400
00:29:36.920 Nvme4n1 : 0.98 135.09 8.44 65.50 0.00 295918.82 23107.51 326223.64
00:29:36.920 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:36.920 Job: Nvme5n1 ended in about 0.98 seconds with error
00:29:36.920 Verification LBA range: start 0x0 length 0x400
00:29:36.920 Nvme5n1 : 0.98 130.39 8.15 65.20 0.00 296997.93 24855.13 309135.74
00:29:36.920 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:36.920 Job: Nvme6n1 ended in about 0.99 seconds with error
00:29:36.920 Verification LBA range: start 0x0 length 0x400
00:29:36.920 Nvme6n1 : 0.99 129.14 8.07 64.57 0.00 293593.44 21942.42 313796.08
00:29:36.920 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:36.920 Job: Nvme7n1 ended in about 1.00 seconds with error
00:29:36.920 Verification LBA range: start 0x0 length 0x400
00:29:36.920 Nvme7n1 : 1.00 132.44 8.28 64.21 0.00 283023.38 23981.32 304475.40
00:29:36.920 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:36.920 Job: Nvme8n1 ended in about 1.00 seconds with error
00:29:36.920 Verification LBA range: start 0x0 length 0x400
00:29:36.920 Nvme8n1 : 1.00 127.85 7.99 63.93 0.00 283864.94 23690.05 346418.44
00:29:36.920 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:36.920 Job: Nvme9n1 ended in about 0.96 seconds with error
00:29:36.920 Verification LBA range: start 0x0 length 0x400
00:29:36.920 Nvme9n1 : 0.96 134.02 8.38 67.01 0.00 261820.68 5679.79 316902.97
00:29:36.920 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:36.920 Job: Nvme10n1 ended in about 0.95 seconds with error
00:29:36.920 Verification LBA range: start 0x0 length 0x400
00:29:36.920 Nvme10n1 : 0.95 134.29 8.39 67.14 0.00 254658.43 16893.72 337097.77
00:29:36.920 ===================================================================================================================
00:29:36.920 Total : 1324.08 82.75 655.93 0.00 290077.14 5679.79 346418.44
00:29:36.920 [2024-09-29 16:37:37.279774] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:29:36.920 [2024-09-29 16:37:37.279879] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:29:36.920 [2024-09-29 16:37:37.280296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.920 [2024-09-29 16:37:37.280341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4d00 with addr=10.0.0.2, port=4420
00:29:36.920 [2024-09-29 16:37:37.280371] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4d00 is same with the state(6) to be set
00:29:36.920 [2024-09-29 16:37:37.280430] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:29:36.920 [2024-09-29 16:37:37.280467] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor
00:29:36.920 [2024-09-29 16:37:37.280496] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3900 (9): Bad file descriptor
00:29:36.920 [2024-09-29 16:37:37.280523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor
00:29:36.920 [2024-09-29 16:37:37.280548] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:29:36.921 [2024-09-29 16:37:37.280569] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:29:36.921 [2024-09-29 16:37:37.280593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:29:36.921 [2024-09-29 16:37:37.280703] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:29:36.921 [2024-09-29 16:37:37.280739] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:29:36.921 [2024-09-29 16:37:37.280766] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:29:36.921 [2024-09-29 16:37:37.280796] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:29:36.921 [2024-09-29 16:37:37.280824] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:29:36.921 [2024-09-29 16:37:37.280853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4d00 (9): Bad file descriptor
00:29:36.921 [2024-09-29 16:37:37.281550] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:36.921 [2024-09-29 16:37:37.281773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.921 [2024-09-29 16:37:37.281811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f5700 with addr=10.0.0.2, port=4420
00:29:36.921 [2024-09-29 16:37:37.281834] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5700 is same with the state(6) to be set
00:29:36.921 [2024-09-29 16:37:37.281984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.921 [2024-09-29 16:37:37.282017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6100 with addr=10.0.0.2, port=4420
00:29:36.921 [2024-09-29 16:37:37.282040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6100 is same with the state(6) to be set
00:29:36.921 [2024-09-29 16:37:37.282167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.921 [2024-09-29 16:37:37.282200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6b00 with addr=10.0.0.2, port=4420
00:29:36.921 [2024-09-29 16:37:37.282223] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6b00 is same with the state(6) to be set
00:29:36.921 [2024-09-29 16:37:37.282247] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:36.921 [2024-09-29 16:37:37.282266] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:36.921 [2024-09-29 16:37:37.282285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:36.921 [2024-09-29 16:37:37.282315] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:29:36.921 [2024-09-29 16:37:37.282336] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:29:36.921 [2024-09-29 16:37:37.282355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:29:36.921 [2024-09-29 16:37:37.282383] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:29:36.921 [2024-09-29 16:37:37.282402] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:29:36.921 [2024-09-29 16:37:37.282421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:29:36.921 [2024-09-29 16:37:37.282449] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:29:36.921 [2024-09-29 16:37:37.282469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:29:36.921 [2024-09-29 16:37:37.282487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:29:36.921 [2024-09-29 16:37:37.282536] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:36.921 [2024-09-29 16:37:37.282576] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:36.921 [2024-09-29 16:37:37.282605] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:36.921 [2024-09-29 16:37:37.282631] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:29:36.921 [2024-09-29 16:37:37.282656] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:36.921 [2024-09-29 16:37:37.282697] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:36.921 [2024-09-29 16:37:37.284345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:29:36.921 [2024-09-29 16:37:37.284426] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:36.921 [2024-09-29 16:37:37.284450] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:36.921 [2024-09-29 16:37:37.284468] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:36.921 [2024-09-29 16:37:37.284485] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:36.921 [2024-09-29 16:37:37.284591] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5700 (9): Bad file descriptor 00:29:36.921 [2024-09-29 16:37:37.284626] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6100 (9): Bad file descriptor 00:29:36.921 [2024-09-29 16:37:37.284654] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6b00 (9): Bad file descriptor 00:29:36.921 [2024-09-29 16:37:37.284685] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:29:36.921 [2024-09-29 16:37:37.284711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:29:36.921 [2024-09-29 16:37:37.284731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:29:36.921 [2024-09-29 16:37:37.285123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:29:36.921 [2024-09-29 16:37:37.285157] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:36.921 [2024-09-29 16:37:37.285336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.921 [2024-09-29 16:37:37.285373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7f00 with addr=10.0.0.2, port=4420 00:29:36.921 [2024-09-29 16:37:37.285395] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7f00 is same with the state(6) to be set 00:29:36.921 [2024-09-29 16:37:37.285417] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:29:36.921 [2024-09-29 16:37:37.285435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:29:36.921 [2024-09-29 16:37:37.285453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:29:36.921 [2024-09-29 16:37:37.285482] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:29:36.921 [2024-09-29 16:37:37.285503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:29:36.921 [2024-09-29 16:37:37.285522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
00:29:36.921 [2024-09-29 16:37:37.285548] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:29:36.921 [2024-09-29 16:37:37.285567] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:29:36.921 [2024-09-29 16:37:37.285586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:29:36.921 [2024-09-29 16:37:37.285693] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:36.921 [2024-09-29 16:37:37.285720] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:36.921 [2024-09-29 16:37:37.285738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:36.921 [2024-09-29 16:37:37.285893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.921 [2024-09-29 16:37:37.285928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:29:36.921 [2024-09-29 16:37:37.285950] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(6) to be set 00:29:36.921 [2024-09-29 16:37:37.285976] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7f00 (9): Bad file descriptor 00:29:36.921 [2024-09-29 16:37:37.286063] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:29:36.921 [2024-09-29 16:37:37.286095] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:29:36.921 [2024-09-29 16:37:37.286115] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:29:36.921 [2024-09-29 16:37:37.286134] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:29:36.921 [2024-09-29 16:37:37.286200] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:36.921 [2024-09-29 16:37:37.286228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:29:36.921 [2024-09-29 16:37:37.286246] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:29:36.921 [2024-09-29 16:37:37.286270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:29:36.921 [2024-09-29 16:37:37.286334] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.203 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:29:40.769 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3247395 00:29:40.769 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:29:40.769 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3247395 00:29:40.769 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:29:40.769 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:40.769 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:29:40.769 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:40.769 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@653 -- # wait 3247395 00:29:40.769 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:29:40.769 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:40.769 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:29:40.769 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:29:40.769 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:29:40.769 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:40.769 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:29:40.769 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:40.769 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:40.769 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:40.769 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:40.769 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:40.769 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:29:40.769 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:40.769 16:37:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:29:40.769 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:40.769 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:40.769 rmmod nvme_tcp 00:29:40.769 rmmod nvme_fabrics 00:29:40.769 rmmod nvme_keyring 00:29:40.769 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:40.769 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:29:40.769 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:29:40.769 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@513 -- # '[' -n 3247083 ']' 00:29:40.769 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@514 -- # killprocess 3247083 00:29:40.769 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 3247083 ']' 00:29:40.769 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 3247083 00:29:40.769 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3247083) - No such process 00:29:40.769 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@977 -- # echo 'Process with pid 3247083 is not found' 00:29:40.769 Process with pid 3247083 is not found 00:29:40.769 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:40.769 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:40.769 16:37:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:40.769 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:29:40.769 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@787 -- # iptables-save 00:29:40.769 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:40.769 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@787 -- # iptables-restore 00:29:40.769 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:40.769 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:40.769 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:40.769 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:40.769 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:42.669 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:42.670 00:29:42.670 real 0m11.741s 00:29:42.670 user 0m33.865s 00:29:42.670 sys 0m2.072s 00:29:42.670 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:42.670 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:42.670 ************************************ 00:29:42.670 END TEST nvmf_shutdown_tc3 00:29:42.670 ************************************ 00:29:42.930 16:37:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:42.930 ************************************ 00:29:42.930 START TEST nvmf_shutdown_tc4 00:29:42.930 ************************************ 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:42.930 16:37:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:42.930 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@375 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:42.930 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:42.930 16:37:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:42.930 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 
00:29:42.930 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # is_hw=yes 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:42.930 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:42.931 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:42.931 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:42.931 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:42.931 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:42.931 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:42.931 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:42.931 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:42.931 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:42.931 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:42.931 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:42.931 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:42.931 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:42.931 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:42.931 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:42.931 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:42.931 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:42.931 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:42.931 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:42.931 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:42.931 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:42.931 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:42.931 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:42.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:42.931 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:29:42.931 00:29:42.931 --- 10.0.0.2 ping statistics --- 00:29:42.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:42.931 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:29:42.931 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:42.931 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:42.931 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:29:42.931 00:29:42.931 --- 10.0.0.1 ping statistics --- 00:29:42.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:42.931 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:29:42.931 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:42.931 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # return 0 00:29:42.931 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:42.931 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:42.931 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:42.931 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:42.931 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:42.931 
16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:42.931 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:42.931 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:42.931 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:42.931 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:42.931 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:42.931 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # nvmfpid=3248678 00:29:42.931 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:42.931 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # waitforlisten 3248678 00:29:42.931 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 3248678 ']' 00:29:42.931 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:43.189 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:43.189 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:43.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:43.189 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:43.189 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:43.189 [2024-09-29 16:37:43.588631] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:29:43.189 [2024-09-29 16:37:43.588823] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:43.189 [2024-09-29 16:37:43.725826] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:43.446 [2024-09-29 16:37:43.976777] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:43.446 [2024-09-29 16:37:43.976856] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:43.446 [2024-09-29 16:37:43.976881] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:43.446 [2024-09-29 16:37:43.976905] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:43.446 [2024-09-29 16:37:43.976924] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:43.446 [2024-09-29 16:37:43.977062] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:43.446 [2024-09-29 16:37:43.977178] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:29:43.446 [2024-09-29 16:37:43.977223] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:43.446 [2024-09-29 16:37:43.977230] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:29:44.012 16:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:44.012 16:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:29:44.012 16:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:44.012 16:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:44.012 16:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:44.270 16:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:44.270 16:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:44.270 16:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.270 16:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:44.270 [2024-09-29 16:37:44.588249] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:44.270 16:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.270 16:37:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:44.270 16:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:44.270 16:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:44.270 16:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:44.270 16:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:44.270 16:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:44.270 16:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:44.270 16:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:44.270 16:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:44.270 16:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:44.270 16:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:44.270 16:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:44.270 16:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:44.270 16:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:44.270 16:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:29:44.270 16:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:44.270 16:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:44.270 16:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:44.270 16:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:44.270 16:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:44.270 16:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:44.270 16:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:44.270 16:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:44.270 16:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:44.270 16:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:44.270 16:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:44.270 16:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.270 16:37:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:44.270 Malloc1 00:29:44.270 [2024-09-29 16:37:44.731312] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:44.270 Malloc2 00:29:44.528 Malloc3 00:29:44.528 Malloc4 00:29:44.786 Malloc5 00:29:44.786 Malloc6 00:29:44.786 Malloc7 00:29:45.044 Malloc8 00:29:45.044 Malloc9 
00:29:45.302 Malloc10 00:29:45.302 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.302 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:45.302 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:45.302 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:45.302 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3248949 00:29:45.302 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:29:45.302 16:37:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:29:45.302 [2024-09-29 16:37:45.761866] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:29:50.570 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:50.570 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3248678 00:29:50.570 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 3248678 ']' 00:29:50.570 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 3248678 00:29:50.570 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:29:50.570 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:50.570 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3248678 00:29:50.570 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:50.570 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:50.570 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3248678' 00:29:50.570 killing process with pid 3248678 00:29:50.570 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 3248678 00:29:50.570 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 3248678 00:29:50.570 [2024-09-29 16:37:50.720777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000dc80 is same with the state(6) to be set 00:29:50.570 [2024-09-29 
16:37:50.720869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000dc80 is same with the state(6) to be set 00:29:50.570 [2024-09-29 16:37:50.720893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000dc80 is same with the state(6) to be set 00:29:50.570 [2024-09-29 16:37:50.720915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000dc80 is same with the state(6) to be set 00:29:50.570 [2024-09-29 16:37:50.720935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000dc80 is same with the state(6) to be set 00:29:50.570 [2024-09-29 16:37:50.720955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000dc80 is same with the state(6) to be set 00:29:50.570 [2024-09-29 16:37:50.726037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006480 is same with the state(6) to be set 00:29:50.570 [2024-09-29 16:37:50.726127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006480 is same with the state(6) to be set 00:29:50.570 [2024-09-29 16:37:50.726154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006480 is same with the state(6) to be set 00:29:50.570 [2024-09-29 16:37:50.726175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006480 is same with the state(6) to be set 00:29:50.570 [2024-09-29 16:37:50.726195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006480 is same with the state(6) to be set 00:29:50.570 [2024-09-29 16:37:50.726526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006880 is same with the state(6) to be set 00:29:50.570 [2024-09-29 16:37:50.726569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006880 is same with the state(6) to be 
set 00:29:50.570 [2024-09-29 16:37:50.726592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006880 is same with the state(6) to be set 00:29:50.570 [2024-09-29 16:37:50.726612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006880 is same with the state(6) to be set 00:29:50.570 [2024-09-29 16:37:50.726632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006880 is same with the state(6) to be set 00:29:50.570 [2024-09-29 16:37:50.726651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006880 is same with the state(6) to be set 00:29:50.570 [2024-09-29 16:37:50.726706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006880 is same with the state(6) to be set 00:29:50.570 [2024-09-29 16:37:50.726762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006880 is same with the state(6) to be set 00:29:50.570 [2024-09-29 16:37:50.726783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006880 is same with the state(6) to be set 00:29:50.570 [2024-09-29 16:37:50.726802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006880 is same with the state(6) to be set 00:29:50.570 [2024-09-29 16:37:50.726821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006880 is same with the state(6) to be set 00:29:50.570 [2024-09-29 16:37:50.726839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006880 is same with the state(6) to be set 00:29:50.570 [2024-09-29 16:37:50.726857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006880 is same with the state(6) to be set 00:29:50.570 [2024-09-29 16:37:50.726887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006880 is 
same with the state(6) to be set 00:29:50.570 [2024-09-29 16:37:50.726909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006880 is same with the state(6) to be set 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write 
completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 [2024-09-29 16:37:50.734960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write 
completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error 
(sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 [2024-09-29 16:37:50.737430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with 
error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 starting I/O failed: -6 00:29:50.571 Write completed with error (sct=0, sc=8) 00:29:50.571 
00:29:50.571 starting I/O failed: -6
00:29:50.571 Write completed with error (sct=0, sc=8)
[preceding two lines repeated many times; duplicates elided]
00:29:50.571 [2024-09-29 16:37:50.740166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided]
00:29:50.572 [2024-09-29 16:37:50.749885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:50.572 NVMe io qpair process completion error
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided]
00:29:50.572 [2024-09-29 16:37:50.752117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided]
00:29:50.572 [2024-09-29 16:37:50.754077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided]
00:29:50.573 [2024-09-29 16:37:50.756912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided]
00:29:50.573 [2024-09-29 16:37:50.766731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:50.573 NVMe io qpair process completion error
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided]
00:29:50.574 [2024-09-29 16:37:50.769046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided]
00:29:50.574 [2024-09-29 16:37:50.771326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided]
00:29:50.574 [2024-09-29 16:37:50.774059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided]
00:29:50.575 [2024-09-29 16:37:50.787534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.575 NVMe io qpair process completion error
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided; log continues]
failed: -6 00:29:50.575 Write completed with error (sct=0, sc=8) 00:29:50.575 Write completed with error (sct=0, sc=8) 00:29:50.575 Write completed with error (sct=0, sc=8) 00:29:50.575 [2024-09-29 16:37:50.789861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:50.575 Write completed with error (sct=0, sc=8) 00:29:50.575 starting I/O failed: -6 00:29:50.575 Write completed with error (sct=0, sc=8) 00:29:50.575 Write completed with error (sct=0, sc=8) 00:29:50.575 starting I/O failed: -6 00:29:50.575 Write completed with error (sct=0, sc=8) 00:29:50.575 Write completed with error (sct=0, sc=8) 00:29:50.575 starting I/O failed: -6 00:29:50.575 Write completed with error (sct=0, sc=8) 00:29:50.575 Write completed with error (sct=0, sc=8) 00:29:50.575 starting I/O failed: -6 00:29:50.575 Write completed with error (sct=0, sc=8) 00:29:50.575 Write completed with error (sct=0, sc=8) 00:29:50.575 starting I/O failed: -6 00:29:50.575 Write completed with error (sct=0, sc=8) 00:29:50.575 Write completed with error (sct=0, sc=8) 00:29:50.575 starting I/O failed: -6 00:29:50.575 Write completed with error (sct=0, sc=8) 00:29:50.575 Write completed with error (sct=0, sc=8) 00:29:50.575 starting I/O failed: -6 00:29:50.575 Write completed with error (sct=0, sc=8) 00:29:50.575 Write completed with error (sct=0, sc=8) 00:29:50.575 starting I/O failed: -6 00:29:50.575 Write completed with error (sct=0, sc=8) 00:29:50.575 Write completed with error (sct=0, sc=8) 00:29:50.575 starting I/O failed: -6 00:29:50.575 Write completed with error (sct=0, sc=8) 00:29:50.575 Write completed with error (sct=0, sc=8) 00:29:50.575 starting I/O failed: -6 00:29:50.575 Write completed with error (sct=0, sc=8) 00:29:50.575 Write completed with error (sct=0, sc=8) 00:29:50.575 starting I/O failed: -6 00:29:50.575 Write completed with error (sct=0, sc=8) 00:29:50.575 Write completed with error (sct=0, sc=8) 
00:29:50.575 starting I/O failed: -6 00:29:50.575 Write completed with error (sct=0, sc=8) 00:29:50.575 Write completed with error (sct=0, sc=8) 00:29:50.575 starting I/O failed: -6 00:29:50.575 Write completed with error (sct=0, sc=8) 00:29:50.575 Write completed with error (sct=0, sc=8) 00:29:50.575 starting I/O failed: -6 00:29:50.575 Write completed with error (sct=0, sc=8) 00:29:50.575 Write completed with error (sct=0, sc=8) 00:29:50.575 starting I/O failed: -6 00:29:50.575 Write completed with error (sct=0, sc=8) 00:29:50.575 Write completed with error (sct=0, sc=8) 00:29:50.575 starting I/O failed: -6 00:29:50.575 Write completed with error (sct=0, sc=8) 00:29:50.575 Write completed with error (sct=0, sc=8) 00:29:50.575 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 [2024-09-29 16:37:50.791890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 
starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 
Write completed with error (sct=0, sc=8) 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 [2024-09-29 16:37:50.794590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write 
completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 
Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 
00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 starting I/O failed: -6 00:29:50.576 [2024-09-29 16:37:50.807214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:50.576 NVMe io qpair process completion error 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.576 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with 
error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 
00:29:50.577 [2024-09-29 16:37:50.809385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: 
-6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 [2024-09-29 16:37:50.811546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, 
sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed 
with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 [2024-09-29 16:37:50.814209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 
00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.577 starting I/O failed: -6 00:29:50.577 Write completed with error (sct=0, sc=8) 00:29:50.578 starting I/O failed: -6 00:29:50.578 Write completed with error (sct=0, sc=8) 00:29:50.578 starting I/O failed: -6 00:29:50.578 Write completed with error (sct=0, sc=8) 00:29:50.578 starting I/O failed: -6 00:29:50.578 Write completed with error (sct=0, sc=8) 00:29:50.578 starting I/O failed: -6 00:29:50.578 Write completed with error (sct=0, sc=8) 00:29:50.578 starting I/O failed: -6 00:29:50.578 Write completed with error (sct=0, sc=8) 00:29:50.578 starting I/O failed: -6 00:29:50.578 Write completed with error (sct=0, sc=8) 00:29:50.578 starting I/O failed: -6 00:29:50.578 Write completed with error (sct=0, sc=8) 00:29:50.578 starting I/O failed: -6 00:29:50.578 Write completed with error (sct=0, sc=8) 00:29:50.578 starting I/O failed: -6 00:29:50.578 Write completed with error (sct=0, sc=8) 00:29:50.578 starting I/O failed: -6 00:29:50.578 Write completed with error (sct=0, sc=8) 00:29:50.578 starting I/O failed: -6 00:29:50.578 Write completed with error (sct=0, sc=8) 00:29:50.578 starting I/O failed: -6 00:29:50.578 Write completed with error (sct=0, sc=8) 00:29:50.578 starting I/O failed: -6 00:29:50.578 Write completed with error (sct=0, sc=8) 00:29:50.578 starting I/O failed: -6 00:29:50.578 Write completed with error (sct=0, sc=8) 00:29:50.578 starting I/O failed: -6 00:29:50.578 Write completed with error (sct=0, sc=8) 00:29:50.578 starting I/O failed: -6 00:29:50.578 Write completed with error (sct=0, sc=8) 00:29:50.578 starting I/O failed: 
-6
[... "00:29:50.578 Write completed with error (sct=0, sc=8)" / "00:29:50.578 starting I/O failed: -6" repeated for each queued I/O; duplicate lines omitted ...]
00:29:50.578 [2024-09-29 16:37:50.829914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:50.578 NVMe io qpair process completion error
[... duplicate write-failure lines omitted ...]
00:29:50.578 [2024-09-29 16:37:50.832088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... duplicate write-failure lines omitted ...]
00:29:50.578 [2024-09-29 16:37:50.834310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... duplicate write-failure lines omitted ...]
00:29:50.579 [2024-09-29 16:37:50.836999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... duplicate write-failure lines omitted ...]
00:29:50.579 [2024-09-29 16:37:50.847778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:50.579 NVMe io qpair process completion error
[... duplicate write-failure lines omitted ...]
00:29:50.580 [2024-09-29 16:37:50.854170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... duplicate write-failure lines omitted ...]
00:29:50.581 [2024-09-29 16:37:50.863803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:50.581 NVMe io qpair process completion error
[... duplicate write-failure lines omitted ...]
00:29:50.581 [2024-09-29 16:37:50.865830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... duplicate write-failure lines omitted ...]
00:29:50.581 [2024-09-29 16:37:50.867771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... duplicate write-failure lines omitted ...]
00:29:50.581 Write completed with
error (sct=0, sc=8) 00:29:50.581 starting I/O failed: -6 00:29:50.581 Write completed with error (sct=0, sc=8) 00:29:50.581 starting I/O failed: -6 00:29:50.581 Write completed with error (sct=0, sc=8) 00:29:50.581 Write completed with error (sct=0, sc=8) 00:29:50.581 starting I/O failed: -6 00:29:50.581 Write completed with error (sct=0, sc=8) 00:29:50.581 starting I/O failed: -6 00:29:50.581 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 
starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 [2024-09-29 16:37:50.870339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:50.582 starting I/O failed: -6 00:29:50.582 starting I/O failed: -6 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting 
I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 
starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 
00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 [2024-09-29 16:37:50.883066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:50.582 NVMe io qpair process completion error 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 Write completed with error (sct=0, sc=8) 
00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 starting I/O failed: -6 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.582 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 [2024-09-29 16:37:50.885238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting 
I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 Write completed with error 
(sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 [2024-09-29 16:37:50.887114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with 
error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 
starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 [2024-09-29 16:37:50.889766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write 
completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 
Write completed with error (sct=0, sc=8) 00:29:50.583 starting I/O failed: -6 00:29:50.583 Write completed with error (sct=0, sc=8) 00:29:50.584 starting I/O failed: -6 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 starting I/O failed: -6 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 starting I/O failed: -6 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 starting I/O failed: -6 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 starting I/O failed: -6 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 starting I/O failed: -6 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 starting I/O failed: -6 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 starting I/O failed: -6 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 starting I/O failed: -6 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 starting I/O failed: -6 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 starting I/O failed: -6 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 starting I/O failed: -6 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 starting I/O failed: -6 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 starting I/O failed: -6 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 starting I/O failed: -6 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 starting I/O failed: -6 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 starting I/O failed: -6 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 starting I/O failed: -6 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 starting I/O failed: -6 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 starting I/O failed: -6 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 starting I/O failed: -6 
00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 starting I/O failed: -6 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 starting I/O failed: -6 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 starting I/O failed: -6 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 starting I/O failed: -6 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 starting I/O failed: -6 00:29:50.584 [2024-09-29 16:37:50.902495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:50.584 NVMe io qpair process completion error 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 starting I/O failed: -6 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 starting I/O failed: -6 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 starting I/O failed: -6 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 starting I/O failed: -6 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 starting I/O failed: -6 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 Write completed with error (sct=0, sc=8) 
00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 starting I/O failed: -6 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 starting I/O failed: -6 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 starting I/O failed: -6 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 starting I/O failed: -6 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 [2024-09-29 16:37:50.904550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:50.584 starting I/O failed: -6 00:29:50.584 starting I/O failed: -6 00:29:50.584 starting I/O failed: -6 00:29:50.584 starting I/O failed: -6 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 starting I/O failed: -6 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 starting I/O failed: -6 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 starting I/O failed: -6 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 Write completed with error (sct=0, sc=8) 00:29:50.584 starting I/O failed: -6 00:29:50.584 Write completed with 
error (sct=0, sc=8) 
00:29:50.584 Write completed with error (sct=0, sc=8) 
00:29:50.584 starting I/O failed: -6 
00:29:50.584 (repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted) 
00:29:50.584 [2024-09-29 16:37:50.906903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 
00:29:50.585 (repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted) 
00:29:50.585 [2024-09-29 16:37:50.909609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 
00:29:50.585 (repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted) 
00:29:50.585 [2024-09-29 16:37:50.925189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ
transport error -6 (No such device or address) on qpair id 4 
00:29:50.585 NVMe io qpair process completion error 
00:29:50.585 Initializing NVMe Controllers 
00:29:50.585 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 
00:29:50.585 Controller IO queue size 128, less than required. 
00:29:50.585 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:50.585 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 
00:29:50.585 Controller IO queue size 128, less than required. 
00:29:50.585 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:50.585 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 
00:29:50.585 Controller IO queue size 128, less than required. 
00:29:50.585 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:50.585 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 
00:29:50.585 Controller IO queue size 128, less than required. 
00:29:50.585 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:50.585 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 
00:29:50.585 Controller IO queue size 128, less than required. 
00:29:50.585 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:50.585 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
00:29:50.585 Controller IO queue size 128, less than required. 
00:29:50.585 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:50.585 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 
00:29:50.585 Controller IO queue size 128, less than required. 
00:29:50.585 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:50.585 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 
00:29:50.585 Controller IO queue size 128, less than required. 
00:29:50.585 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:50.585 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 
00:29:50.585 Controller IO queue size 128, less than required. 
00:29:50.585 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:50.585 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 
00:29:50.585 Controller IO queue size 128, less than required. 
00:29:50.586 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
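The "Controller IO queue size 128, less than required" warnings above mean the benchmark's requested queue depth exceeds the IO queue size the controller advertises, so the excess requests wait inside the host NVMe driver rather than on the controller's queue. A minimal sketch of that relationship (the function and the example numbers are illustrative assumptions, not values taken from this run):

```python
# Illustrative only: how many I/Os end up queued in the host driver
# when the requested queue depth exceeds the controller's IO queue size.
def driver_queued(requested_qd: int, controller_queue_size: int) -> int:
    """I/Os that cannot be outstanding on the controller's queue."""
    return max(0, requested_qd - controller_queue_size)

# With queue depth 256 against a 128-deep controller queue,
# 128 requests wait in the driver; at depth 64, none do.
print(driver_queued(256, 128))
print(driver_queued(64, 128))
```

Lowering the workload's queue depth (or IO size) below the controller limit, as the log suggests, makes the second case apply and removes the driver-side queueing.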
00:29:50.586 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 
00:29:50.586 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 
00:29:50.586 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 
00:29:50.586 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 
00:29:50.586 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 
00:29:50.586 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 
00:29:50.586 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 
00:29:50.586 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 
00:29:50.586 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 
00:29:50.586 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 
00:29:50.586 Initialization complete. Launching workers. 
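Per-device rows in the latency table that follows carry five trailing numeric fields (IOPS, MiB/s, average/min/max latency in us) and can be aggregated mechanically. A minimal, hypothetical parser for lines of that shape (the helper name and regex are assumptions for illustration, not part of SPDK; the sample rows are copied from this log):

```python
import re

# Match the five trailing numeric fields of an spdk_nvme_perf device row:
# IOPS, MiB/s, average latency, min latency, max latency (microseconds).
ROW = re.compile(r"from core \d+:\s+([\d.]+)\s+([\d.]+)\s+([\d.]+)\s+([\d.]+)\s+([\d.]+)")

def total_iops(lines):
    """Sum the IOPS column across all device rows that match."""
    total = 0.0
    for line in lines:
        m = ROW.search(line)
        if m:
            total += float(m.group(1))
    return total

sample = [
    "TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1272.13 54.66 100663.27 1742.18 231275.78",
    "TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1325.30 56.95 96785.46 1791.97 248063.87",
]
print(round(total_iops(sample), 2))  # 2597.43
```

Summing all ten rows this way reproduces the 13229.28 IOPS reported on the table's Total line.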
00:29:50.586 ======================================================== 
00:29:50.586 Latency(us) 
00:29:50.586 Device Information : IOPS MiB/s Average min max 
00:29:50.586 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1272.13 54.66 100663.27 1742.18 231275.78 
00:29:50.586 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1325.30 56.95 96785.46 1791.97 248063.87 
00:29:50.586 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1337.25 57.46 96084.60 1995.58 226113.75 
00:29:50.586 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1354.55 58.20 95045.32 1654.13 273802.56 
00:29:50.586 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1340.24 57.59 96266.24 1546.54 252554.22 
00:29:50.586 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1272.99 54.70 97525.02 1704.75 160874.98 
00:29:50.586 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1299.89 55.85 95664.95 1640.54 158758.19 
00:29:50.586 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1327.22 57.03 93860.13 2244.78 159515.36 
00:29:50.586 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1371.20 58.92 91075.90 2052.00 161116.24 
00:29:50.586 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1328.50 57.08 94224.88 2247.61 177307.30 
00:29:50.586 ======================================================== 
00:29:50.586 Total : 13229.28 568.45 95675.39 1546.54 273802.56 
00:29:50.586 
00:29:50.586 [2024-09-29 16:37:50.954432] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000018180 is same with the state(6) to be set 
00:29:50.586 [2024-09-29 16:37:50.954576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016380 is same with the state(6) to be set 
00:29:50.586 [2024-09-29 16:37:50.954661] nvme_tcp.c:
337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000017280 is same with the state(6) to be set 
00:29:50.586 [2024-09-29 16:37:50.954760] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000017780 is same with the state(6) to be set 
00:29:50.586 [2024-09-29 16:37:50.954841] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000017c80 is same with the state(6) to be set 
00:29:50.586 [2024-09-29 16:37:50.954928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015980 is same with the state(6) to be set 
00:29:50.586 [2024-09-29 16:37:50.955010] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016d80 is same with the state(6) to be set 
00:29:50.586 [2024-09-29 16:37:50.955091] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000018680 is same with the state(6) to be set 
00:29:50.586 [2024-09-29 16:37:50.955173] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015e80 is same with the state(6) to be set 
00:29:50.586 [2024-09-29 16:37:50.955264] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016880 is same with the state(6) to be set 
00:29:50.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 
00:29:53.869 16:37:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 
00:29:54.436 16:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3248949 
00:29:54.436 16:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0 
00:29:54.436 16:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3248949 
00:29:54.436 16:37:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait 00:29:54.436 16:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:54.436 16:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:29:54.436 16:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:54.436 16:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 3248949 00:29:54.436 16:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 00:29:54.436 16:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:54.436 16:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:54.436 16:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:54.436 16:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:29:54.436 16:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:54.436 16:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:54.436 16:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:54.436 16:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:54.436 16:37:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:54.436 16:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:29:54.437 16:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:54.437 16:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:29:54.437 16:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:54.437 16:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:54.437 rmmod nvme_tcp 00:29:54.437 rmmod nvme_fabrics 00:29:54.437 rmmod nvme_keyring 00:29:54.437 16:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:54.437 16:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:29:54.437 16:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:29:54.437 16:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@513 -- # '[' -n 3248678 ']' 00:29:54.437 16:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@514 -- # killprocess 3248678 00:29:54.437 16:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 3248678 ']' 00:29:54.437 16:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 3248678 00:29:54.437 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3248678) - No such process 00:29:54.437 16:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # echo 'Process with pid 3248678 is not 
found' 00:29:54.437 Process with pid 3248678 is not found 00:29:54.437 16:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:54.437 16:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:54.437 16:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:54.437 16:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:29:54.437 16:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@787 -- # iptables-save 00:29:54.437 16:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:54.437 16:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@787 -- # iptables-restore 00:29:54.437 16:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:54.437 16:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:54.437 16:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:54.437 16:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:54.437 16:37:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:56.372 16:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:56.372 00:29:56.372 real 0m13.592s 00:29:56.372 user 0m35.060s 00:29:56.372 sys 0m5.749s 00:29:56.372 16:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:29:56.372 16:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:56.372 ************************************ 00:29:56.372 END TEST nvmf_shutdown_tc4 00:29:56.372 ************************************ 00:29:56.372 16:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:29:56.372 00:29:56.372 real 0m56.561s 00:29:56.372 user 2m50.477s 00:29:56.372 sys 0m14.133s 00:29:56.372 16:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:56.372 16:37:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:56.372 ************************************ 00:29:56.372 END TEST nvmf_shutdown 00:29:56.372 ************************************ 00:29:56.372 16:37:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:29:56.372 00:29:56.372 real 18m33.070s 00:29:56.372 user 51m5.087s 00:29:56.372 sys 3m30.352s 00:29:56.372 16:37:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:56.372 16:37:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:56.372 ************************************ 00:29:56.372 END TEST nvmf_target_extra 00:29:56.372 ************************************ 00:29:56.655 16:37:56 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:56.655 16:37:56 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:56.655 16:37:56 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:56.655 16:37:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:56.655 ************************************ 00:29:56.655 START TEST nvmf_host 00:29:56.655 ************************************ 00:29:56.655 16:37:56 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:56.655 * Looking for test storage... 00:29:56.655 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:56.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:56.655 --rc genhtml_branch_coverage=1 00:29:56.655 --rc genhtml_function_coverage=1 00:29:56.655 --rc genhtml_legend=1 00:29:56.655 --rc geninfo_all_blocks=1 00:29:56.655 --rc geninfo_unexecuted_blocks=1 00:29:56.655 00:29:56.655 ' 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:56.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:56.655 --rc genhtml_branch_coverage=1 00:29:56.655 --rc genhtml_function_coverage=1 00:29:56.655 --rc genhtml_legend=1 00:29:56.655 --rc 
geninfo_all_blocks=1 00:29:56.655 --rc geninfo_unexecuted_blocks=1 00:29:56.655 00:29:56.655 ' 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:56.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:56.655 --rc genhtml_branch_coverage=1 00:29:56.655 --rc genhtml_function_coverage=1 00:29:56.655 --rc genhtml_legend=1 00:29:56.655 --rc geninfo_all_blocks=1 00:29:56.655 --rc geninfo_unexecuted_blocks=1 00:29:56.655 00:29:56.655 ' 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:56.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:56.655 --rc genhtml_branch_coverage=1 00:29:56.655 --rc genhtml_function_coverage=1 00:29:56.655 --rc genhtml_legend=1 00:29:56.655 --rc geninfo_all_blocks=1 00:29:56.655 --rc geninfo_unexecuted_blocks=1 00:29:56.655 00:29:56.655 ' 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:56.655 16:37:57 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.656 16:37:57 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.656 16:37:57 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.656 16:37:57 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:29:56.656 16:37:57 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.656 16:37:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:29:56.656 16:37:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:56.656 16:37:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:56.656 16:37:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:56.656 16:37:57 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:56.656 16:37:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:56.656 16:37:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:56.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:56.656 16:37:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:56.656 16:37:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:56.656 16:37:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:56.656 16:37:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:56.656 16:37:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:29:56.656 16:37:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:29:56.656 16:37:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:56.656 16:37:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:56.656 16:37:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:56.656 16:37:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.656 ************************************ 00:29:56.656 START TEST nvmf_multicontroller 00:29:56.656 ************************************ 00:29:56.656 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:56.656 * Looking for test storage... 
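The `[: : integer expression expected` message recorded above comes from `nvmf/common.sh` line 33 evaluating `'[' '' -eq 1 ']'`: `-eq` requires integer operands, so an empty value makes `[` report an error and return non-zero, and the script falls through as if the flag were off. A minimal sketch of the failure mode and a guarded alternative (variable names here are illustrative, not the script's own):

```shell
# '[' '' -eq 1 ']' fails: -eq needs integer operands, so an empty value
# makes [ print "integer expression expected" and return non-zero.
flag=''
if [ "$flag" -eq 1 ] 2>/dev/null; then
  state=on
else
  state=off        # taken here: the comparison errored, it was not evaluated
fi

# Guarded form: default an empty or unset flag to 0 before comparing,
# so the test is a real integer comparison and prints no error.
if [ "${flag:-0}" -eq 1 ]; then
  guarded=on
else
  guarded=off
fi
echo "$state $guarded"
```

Both branches land on the "off" side, but only the guarded form does so silently and by evaluation rather than by error.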
00:29:56.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:56.656 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:56.656 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lcov --version 00:29:56.656 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:56.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:56.915 --rc genhtml_branch_coverage=1 00:29:56.915 --rc genhtml_function_coverage=1 
00:29:56.915 --rc genhtml_legend=1 00:29:56.915 --rc geninfo_all_blocks=1 00:29:56.915 --rc geninfo_unexecuted_blocks=1 00:29:56.915 00:29:56.915 ' 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:56.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:56.915 --rc genhtml_branch_coverage=1 00:29:56.915 --rc genhtml_function_coverage=1 00:29:56.915 --rc genhtml_legend=1 00:29:56.915 --rc geninfo_all_blocks=1 00:29:56.915 --rc geninfo_unexecuted_blocks=1 00:29:56.915 00:29:56.915 ' 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:56.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:56.915 --rc genhtml_branch_coverage=1 00:29:56.915 --rc genhtml_function_coverage=1 00:29:56.915 --rc genhtml_legend=1 00:29:56.915 --rc geninfo_all_blocks=1 00:29:56.915 --rc geninfo_unexecuted_blocks=1 00:29:56.915 00:29:56.915 ' 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:56.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:56.915 --rc genhtml_branch_coverage=1 00:29:56.915 --rc genhtml_function_coverage=1 00:29:56.915 --rc genhtml_legend=1 00:29:56.915 --rc geninfo_all_blocks=1 00:29:56.915 --rc geninfo_unexecuted_blocks=1 00:29:56.915 00:29:56.915 ' 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:56.915 16:37:57 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:56.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:56.915 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:29:56.916 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:29:56.916 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:56.916 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:56.916 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:56.916 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:56.916 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@436 -- # remove_spdk_ns 00:29:56.916 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:56.916 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:56.916 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:56.916 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:56.916 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:56.916 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:29:56.916 16:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:58.818 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:58.818 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:29:58.818 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:58.818 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:58.818 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:58.818 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:58.818 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:58.818 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:29:58.818 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:58.818 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:29:58.818 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:29:58.818 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:29:58.818 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
pci_devs+=("${e810[@]}") 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:58.819 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:58.819 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:58.819 16:37:59 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:58.819 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@407 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:58.819 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # is_hw=yes 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:58.819 16:37:59 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:58.819 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:59.077 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:59.077 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:59.077 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:59.077 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:59.077 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:59.077 16:37:59 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:59.077 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:59.077 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:59.077 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:59.077 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:29:59.077 00:29:59.077 --- 10.0.0.2 ping statistics --- 00:29:59.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.077 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:29:59.077 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:59.077 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:59.077 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:29:59.077 00:29:59.077 --- 10.0.0.1 ping statistics --- 00:29:59.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.077 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:29:59.077 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:59.077 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # return 0 00:29:59.077 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:59.077 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:59.077 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:59.077 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:59.077 16:37:59 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:59.077 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:59.077 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:59.077 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:29:59.077 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:59.077 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:59.077 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:59.077 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # nvmfpid=3252052 00:29:59.077 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:59.077 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # waitforlisten 3252052 00:29:59.077 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 3252052 ']' 00:29:59.077 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:59.077 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:59.077 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:59.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
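The `ipts` call a little earlier in the log expands to a plain `iptables` invocation with an `SPDK_NVMF:` comment appended, so teardown can later find and delete exactly the rules this test inserted. A sketch of that wrapper under that assumption (it prints the command instead of executing it, so the sketch runs unprivileged):

```shell
# Assumed shape of the ipts helper seen in the log: forward all arguments
# to iptables and tag the rule with a comment recording those arguments.
ipts() {
  # echo instead of invoking iptables so the sketch needs no root access
  echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

rule=$(ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT)
echo "$rule"
```

Cleanup can then match on the `SPDK_NVMF:` comment (e.g. via `iptables-save | grep SPDK_NVMF`) rather than guessing which INPUT rules belong to the test.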
00:29:59.077 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:59.077 16:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:59.077 [2024-09-29 16:37:59.584869] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:29:59.077 [2024-09-29 16:37:59.584999] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:59.336 [2024-09-29 16:37:59.726371] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:59.595 [2024-09-29 16:38:00.046909] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:59.595 [2024-09-29 16:38:00.047038] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:59.595 [2024-09-29 16:38:00.047067] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:59.595 [2024-09-29 16:38:00.047108] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:59.595 [2024-09-29 16:38:00.047131] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
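`waitforlisten` above blocks until the freshly started target creates `/var/tmp/spdk.sock`, with `max_retries=100` per the local it sets. A simplified poll loop in the same spirit (a sketch, not SPDK's actual implementation):

```shell
# Poll for a path with a bounded number of retries, roughly what the
# harness does while printing "Waiting for process to start up and
# listen on UNIX domain socket ...".
wait_for_path() {
  local path=$1 retries=${2:-100}
  while [ "$retries" -gt 0 ]; do
    [ -e "$path" ] && return 0
    retries=$((retries - 1))
    sleep 0.1
  done
  return 1
}
```

Usage would look like `wait_for_path /var/tmp/spdk.sock && rpc.py ...`; the real helper additionally checks that the owning PID is still alive so a crashed target fails fast instead of burning the full retry budget.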
00:29:59.595 [2024-09-29 16:38:00.047266] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:59.595 [2024-09-29 16:38:00.047295] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:59.595 [2024-09-29 16:38:00.047300] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:30:00.160 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:00.160 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:30:00.160 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:30:00.160 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:00.160 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:00.160 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:00.160 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:00.160 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.160 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:00.160 [2024-09-29 16:38:00.619380] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:00.160 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.160 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:00.160 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.160 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 
00:30:00.160 Malloc0 00:30:00.160 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.419 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:00.419 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.419 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:00.419 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.419 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:00.419 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.419 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:00.419 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.419 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:00.419 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.419 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:00.419 [2024-09-29 16:38:00.742891] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:00.419 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.419 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:00.419 
16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.419 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:00.419 [2024-09-29 16:38:00.750711] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:00.419 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.419 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:00.419 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.419 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:00.419 Malloc1 00:30:00.419 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.419 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:30:00.419 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.419 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:00.419 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.419 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:30:00.419 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.419 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:00.419 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.419 16:38:00 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:00.419 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.419 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:00.419 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.419 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:30:00.419 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.419 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:00.419 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.419 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3252218 00:30:00.419 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:30:00.419 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:00.419 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3252218 /var/tmp/bdevperf.sock 00:30:00.419 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 3252218 ']' 00:30:00.419 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:30:00.419 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:00.419 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:00.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:00.419 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:00.419 16:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:01.793 16:38:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:01.793 16:38:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:30:01.793 16:38:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:30:01.793 16:38:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.793 16:38:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:01.793 NVMe0n1 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:01.793 16:38:02 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.793 1 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:01.793 request: 00:30:01.793 { 00:30:01.793 "name": "NVMe0", 00:30:01.793 "trtype": "tcp", 00:30:01.793 "traddr": "10.0.0.2", 00:30:01.793 "adrfam": "ipv4", 00:30:01.793 "trsvcid": "4420", 00:30:01.793 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:30:01.793 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:30:01.793 "hostaddr": "10.0.0.1", 00:30:01.793 "prchk_reftag": false, 00:30:01.793 "prchk_guard": false, 00:30:01.793 "hdgst": false, 00:30:01.793 "ddgst": false, 00:30:01.793 "allow_unrecognized_csi": false, 00:30:01.793 "method": "bdev_nvme_attach_controller", 00:30:01.793 "req_id": 1 00:30:01.793 } 00:30:01.793 Got JSON-RPC error response 00:30:01.793 response: 00:30:01.793 { 00:30:01.793 "code": -114, 00:30:01.793 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:01.793 } 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:01.793 request: 00:30:01.793 { 00:30:01.793 "name": "NVMe0", 00:30:01.793 "trtype": "tcp", 00:30:01.793 "traddr": "10.0.0.2", 00:30:01.793 "adrfam": "ipv4", 00:30:01.793 "trsvcid": "4420", 00:30:01.793 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:01.793 "hostaddr": "10.0.0.1", 00:30:01.793 "prchk_reftag": false, 00:30:01.793 "prchk_guard": false, 00:30:01.793 "hdgst": false, 00:30:01.793 "ddgst": false, 00:30:01.793 "allow_unrecognized_csi": false, 00:30:01.793 "method": "bdev_nvme_attach_controller", 00:30:01.793 "req_id": 1 00:30:01.793 } 00:30:01.793 Got JSON-RPC error response 00:30:01.793 response: 00:30:01.793 { 00:30:01.793 "code": -114, 00:30:01.793 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:01.793 } 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 
00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:01.793 request: 00:30:01.793 { 00:30:01.793 "name": "NVMe0", 00:30:01.793 "trtype": "tcp", 00:30:01.793 "traddr": "10.0.0.2", 00:30:01.793 "adrfam": "ipv4", 00:30:01.793 "trsvcid": "4420", 00:30:01.793 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:01.793 
"hostaddr": "10.0.0.1", 00:30:01.793 "prchk_reftag": false, 00:30:01.793 "prchk_guard": false, 00:30:01.793 "hdgst": false, 00:30:01.793 "ddgst": false, 00:30:01.793 "multipath": "disable", 00:30:01.793 "allow_unrecognized_csi": false, 00:30:01.793 "method": "bdev_nvme_attach_controller", 00:30:01.793 "req_id": 1 00:30:01.793 } 00:30:01.793 Got JSON-RPC error response 00:30:01.793 response: 00:30:01.793 { 00:30:01.793 "code": -114, 00:30:01.793 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:30:01.793 } 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:30:01.793 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:01.794 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:01.794 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:30:01.794 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:01.794 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:01.794 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:01.794 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.794 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:01.794 request: 00:30:01.794 { 00:30:01.794 "name": "NVMe0", 00:30:01.794 "trtype": "tcp", 00:30:01.794 "traddr": "10.0.0.2", 00:30:01.794 "adrfam": "ipv4", 00:30:01.794 "trsvcid": "4420", 00:30:01.794 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:01.794 "hostaddr": "10.0.0.1", 00:30:01.794 "prchk_reftag": false, 00:30:01.794 "prchk_guard": false, 00:30:01.794 "hdgst": false, 00:30:01.794 "ddgst": false, 00:30:01.794 "multipath": "failover", 00:30:01.794 "allow_unrecognized_csi": false, 00:30:01.794 "method": "bdev_nvme_attach_controller", 00:30:01.794 "req_id": 1 00:30:01.794 } 00:30:01.794 Got JSON-RPC error response 00:30:01.794 response: 00:30:01.794 { 00:30:01.794 "code": -114, 00:30:01.794 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:01.794 } 00:30:01.794 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:01.794 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:30:01.794 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:01.794 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:01.794 
16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:01.794 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:01.794 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.794 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:01.794 00:30:01.794 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.794 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:01.794 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.794 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:01.794 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.794 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:30:01.794 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.794 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:02.052 00:30:02.052 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.052 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
00:30:02.052 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:30:02.052 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.052 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:02.052 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.052 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:30:02.052 16:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:02.985 { 00:30:02.985 "results": [ 00:30:02.985 { 00:30:02.985 "job": "NVMe0n1", 00:30:02.985 "core_mask": "0x1", 00:30:02.985 "workload": "write", 00:30:02.985 "status": "finished", 00:30:02.985 "queue_depth": 128, 00:30:02.985 "io_size": 4096, 00:30:02.985 "runtime": 1.007009, 00:30:02.985 "iops": 12487.475285722372, 00:30:02.985 "mibps": 48.779200334853016, 00:30:02.985 "io_failed": 0, 00:30:02.985 "io_timeout": 0, 00:30:02.985 "avg_latency_us": 10231.217230189235, 00:30:02.985 "min_latency_us": 7864.32, 00:30:02.985 "max_latency_us": 19709.345185185186 00:30:02.985 } 00:30:02.985 ], 00:30:02.985 "core_count": 1 00:30:02.985 } 00:30:02.985 16:38:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:30:02.985 16:38:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.985 16:38:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:02.985 16:38:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.985 16:38:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@100 -- # [[ -n '' ]] 00:30:02.985 16:38:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3252218 00:30:02.985 16:38:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 3252218 ']' 00:30:02.985 16:38:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 3252218 00:30:02.985 16:38:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:30:02.985 16:38:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:02.985 16:38:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3252218 00:30:03.242 16:38:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:03.242 16:38:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:03.242 16:38:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3252218' 00:30:03.242 killing process with pid 3252218 00:30:03.242 16:38:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 3252218 00:30:03.242 16:38:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 3252218 00:30:04.174 16:38:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:04.174 16:38:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.174 16:38:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:04.174 16:38:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.174 16:38:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:04.174 16:38:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.174 16:38:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:04.174 16:38:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.174 16:38:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:30:04.174 16:38:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:04.174 16:38:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:30:04.174 16:38:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:30:04.174 16:38:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:30:04.174 16:38:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:30:04.174 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:04.174 [2024-09-29 16:38:00.943954] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:30:04.174 [2024-09-29 16:38:00.944120] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3252218 ] 00:30:04.174 [2024-09-29 16:38:01.069919] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.174 [2024-09-29 16:38:01.302872] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:04.174 [2024-09-29 16:38:02.366705] bdev.c:4696:bdev_name_add: *ERROR*: Bdev name 25b9e436-2006-46b7-9074-2780fdbe00d5 already exists 00:30:04.174 [2024-09-29 16:38:02.366768] bdev.c:7837:bdev_register: *ERROR*: Unable to add uuid:25b9e436-2006-46b7-9074-2780fdbe00d5 alias for bdev NVMe1n1 00:30:04.174 [2024-09-29 16:38:02.366809] bdev_nvme.c:4481:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:30:04.174 Running I/O for 1 seconds... 00:30:04.174 12447.00 IOPS, 48.62 MiB/s 00:30:04.174 Latency(us) 00:30:04.174 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:04.174 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:30:04.174 NVMe0n1 : 1.01 12487.48 48.78 0.00 0.00 10231.22 7864.32 19709.35 00:30:04.174 =================================================================================================================== 00:30:04.174 Total : 12487.48 48.78 0.00 0.00 10231.22 7864.32 19709.35 00:30:04.174 Received shutdown signal, test time was about 1.000000 seconds 00:30:04.174 00:30:04.174 Latency(us) 00:30:04.174 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:04.174 =================================================================================================================== 00:30:04.174 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:04.174 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:04.174 16:38:04 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:04.174 16:38:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:30:04.174 16:38:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:30:04.174 16:38:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # nvmfcleanup 00:30:04.174 16:38:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:30:04.174 16:38:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:04.174 16:38:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:30:04.174 16:38:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:04.174 16:38:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:04.174 rmmod nvme_tcp 00:30:04.174 rmmod nvme_fabrics 00:30:04.174 rmmod nvme_keyring 00:30:04.174 16:38:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:04.174 16:38:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:30:04.174 16:38:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:30:04.174 16:38:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@513 -- # '[' -n 3252052 ']' 00:30:04.174 16:38:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # killprocess 3252052 00:30:04.174 16:38:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 3252052 ']' 00:30:04.174 16:38:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 3252052 00:30:04.174 16:38:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:30:04.174 16:38:04 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:04.175 16:38:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3252052 00:30:04.175 16:38:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:04.175 16:38:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:04.175 16:38:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3252052' 00:30:04.175 killing process with pid 3252052 00:30:04.175 16:38:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 3252052 00:30:04.175 16:38:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 3252052 00:30:06.072 16:38:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:30:06.072 16:38:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:30:06.073 16:38:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:30:06.073 16:38:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:30:06.073 16:38:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # iptables-save 00:30:06.073 16:38:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:30:06.073 16:38:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # iptables-restore 00:30:06.073 16:38:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:06.073 16:38:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:06.073 16:38:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:30:06.073 16:38:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:06.073 16:38:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:07.976 00:30:07.976 real 0m11.143s 00:30:07.976 user 0m22.497s 00:30:07.976 sys 0m2.703s 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:07.976 ************************************ 00:30:07.976 END TEST nvmf_multicontroller 00:30:07.976 ************************************ 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.976 ************************************ 00:30:07.976 START TEST nvmf_aer 00:30:07.976 ************************************ 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:07.976 * Looking for test storage... 
00:30:07.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lcov --version 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:07.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:07.976 --rc genhtml_branch_coverage=1 00:30:07.976 --rc genhtml_function_coverage=1 00:30:07.976 --rc genhtml_legend=1 00:30:07.976 --rc geninfo_all_blocks=1 00:30:07.976 --rc geninfo_unexecuted_blocks=1 00:30:07.976 00:30:07.976 ' 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:07.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:07.976 --rc 
genhtml_branch_coverage=1 00:30:07.976 --rc genhtml_function_coverage=1 00:30:07.976 --rc genhtml_legend=1 00:30:07.976 --rc geninfo_all_blocks=1 00:30:07.976 --rc geninfo_unexecuted_blocks=1 00:30:07.976 00:30:07.976 ' 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:07.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:07.976 --rc genhtml_branch_coverage=1 00:30:07.976 --rc genhtml_function_coverage=1 00:30:07.976 --rc genhtml_legend=1 00:30:07.976 --rc geninfo_all_blocks=1 00:30:07.976 --rc geninfo_unexecuted_blocks=1 00:30:07.976 00:30:07.976 ' 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:07.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:07.976 --rc genhtml_branch_coverage=1 00:30:07.976 --rc genhtml_function_coverage=1 00:30:07.976 --rc genhtml_legend=1 00:30:07.976 --rc geninfo_all_blocks=1 00:30:07.976 --rc geninfo_unexecuted_blocks=1 00:30:07.976 00:30:07.976 ' 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:07.976 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:07.977 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:07.977 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:07.977 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:07.977 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:07.977 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:07.977 16:38:08 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:07.977 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:07.977 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:07.977 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:07.977 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:07.977 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:07.977 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:07.977 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:07.977 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:07.977 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:07.977 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:30:07.977 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:07.977 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:07.977 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:07.977 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.977 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.977 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.977 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:30:07.977 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.977 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:30:07.977 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:07.977 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:07.977 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:07.977 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:07.977 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:07.977 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:07.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:07.977 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:07.977 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:07.977 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:07.977 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:30:07.977 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:30:07.977 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:07.977 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # prepare_net_devs 00:30:07.977 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@434 -- # local -g is_hw=no 00:30:07.977 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # remove_spdk_ns 00:30:07.977 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:07.977 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:07.977 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:07.977 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:30:07.977 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:30:07.977 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:30:07.977 16:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:10.506 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:10.506 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:30:10.506 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:10.506 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:10.506 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:10.506 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:10.506 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:10.506 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:30:10.506 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:10.506 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:30:10.506 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:30:10.506 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:30:10.506 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:30:10.506 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:30:10.506 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:30:10.506 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:10.506 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:10.506 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:10.506 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:10.506 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:10.506 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:10.506 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:10.506 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:10.506 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:10.506 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:10.506 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:10.506 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:30:10.506 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@345 -- # [[ tcp == rdma 
]] 00:30:10.506 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:30:10.506 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:30:10.506 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:30:10.506 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:30:10.506 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:10.506 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:10.506 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:10.506 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:10.507 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@390 -- # (( 0 > 0 )) 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:10.507 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@423 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:10.507 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # is_hw=yes 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:10.507 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:10.507 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:30:10.507 00:30:10.507 --- 10.0.0.2 ping statistics --- 00:30:10.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:10.507 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:10.507 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:10.507 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:30:10.507 00:30:10.507 --- 10.0.0.1 ping statistics --- 00:30:10.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:10.507 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # return 0 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # nvmfpid=3254728 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # waitforlisten 3254728 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 3254728 ']' 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:10.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:10.507 16:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:10.507 [2024-09-29 16:38:10.800399] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:30:10.507 [2024-09-29 16:38:10.800539] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:10.507 [2024-09-29 16:38:10.941696] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:10.765 [2024-09-29 16:38:11.203159] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:10.765 [2024-09-29 16:38:11.203249] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:10.765 [2024-09-29 16:38:11.203275] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:10.765 [2024-09-29 16:38:11.203299] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:10.765 [2024-09-29 16:38:11.203318] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:10.765 [2024-09-29 16:38:11.203450] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:10.765 [2024-09-29 16:38:11.203528] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:30:10.765 [2024-09-29 16:38:11.203608] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:10.765 [2024-09-29 16:38:11.203615] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:30:11.332 16:38:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:11.332 16:38:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:30:11.332 16:38:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:30:11.332 16:38:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:11.332 16:38:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:11.332 16:38:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:11.332 16:38:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:11.332 16:38:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.332 16:38:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:11.332 [2024-09-29 16:38:11.769825] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:11.332 16:38:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.332 16:38:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:30:11.332 16:38:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.332 16:38:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:11.332 Malloc0 00:30:11.332 16:38:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.332 16:38:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:30:11.332 16:38:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.332 16:38:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:11.332 16:38:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.332 16:38:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:11.332 16:38:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.332 16:38:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:11.332 16:38:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.332 16:38:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:11.332 16:38:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.332 16:38:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:11.332 [2024-09-29 16:38:11.876216] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:30:11.332 16:38:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.332 16:38:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:30:11.332 16:38:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.332 16:38:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:11.332 [ 00:30:11.332 { 00:30:11.332 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:11.332 "subtype": "Discovery", 00:30:11.332 "listen_addresses": [], 00:30:11.332 "allow_any_host": true, 00:30:11.332 "hosts": [] 00:30:11.332 }, 00:30:11.332 { 00:30:11.332 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:11.332 "subtype": "NVMe", 00:30:11.332 "listen_addresses": [ 00:30:11.332 { 00:30:11.332 "trtype": "TCP", 00:30:11.332 "adrfam": "IPv4", 00:30:11.332 "traddr": "10.0.0.2", 00:30:11.332 "trsvcid": "4420" 00:30:11.332 } 00:30:11.332 ], 00:30:11.332 "allow_any_host": true, 00:30:11.332 "hosts": [], 00:30:11.332 "serial_number": "SPDK00000000000001", 00:30:11.332 "model_number": "SPDK bdev Controller", 00:30:11.332 "max_namespaces": 2, 00:30:11.332 "min_cntlid": 1, 00:30:11.332 "max_cntlid": 65519, 00:30:11.332 "namespaces": [ 00:30:11.332 { 00:30:11.332 "nsid": 1, 00:30:11.332 "bdev_name": "Malloc0", 00:30:11.332 "name": "Malloc0", 00:30:11.332 "nguid": "75B53CCFBE744016ACE1FFA8D3C0FCC5", 00:30:11.332 "uuid": "75b53ccf-be74-4016-ace1-ffa8d3c0fcc5" 00:30:11.332 } 00:30:11.332 ] 00:30:11.332 } 00:30:11.332 ] 00:30:11.332 16:38:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.332 16:38:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:30:11.332 16:38:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:30:11.332 16:38:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3254929 00:30:11.332 16:38:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:30:11.332 16:38:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:30:11.591 16:38:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:30:11.591 16:38:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:11.591 16:38:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:30:11.591 16:38:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:30:11.591 16:38:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:30:11.591 16:38:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:11.591 16:38:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:30:11.591 16:38:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:30:11.591 16:38:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:30:11.591 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:11.591 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:30:11.591 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:30:11.591 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:30:11.849 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:30:11.849 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 3 -lt 200 ']' 00:30:11.849 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=4 00:30:11.849 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:30:11.849 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:11.849 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 4 -lt 200 ']' 00:30:11.849 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=5 00:30:11.849 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:30:11.849 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:11.849 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:11.849 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:30:11.849 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:30:11.849 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.849 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:12.107 Malloc1 00:30:12.107 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.107 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:30:12.107 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.107 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:12.107 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.107 16:38:12 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:30:12.107 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.107 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:12.107 [ 00:30:12.107 { 00:30:12.107 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:12.107 "subtype": "Discovery", 00:30:12.107 "listen_addresses": [], 00:30:12.107 "allow_any_host": true, 00:30:12.107 "hosts": [] 00:30:12.107 }, 00:30:12.107 { 00:30:12.107 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:12.107 "subtype": "NVMe", 00:30:12.107 "listen_addresses": [ 00:30:12.107 { 00:30:12.107 "trtype": "TCP", 00:30:12.107 "adrfam": "IPv4", 00:30:12.107 "traddr": "10.0.0.2", 00:30:12.107 "trsvcid": "4420" 00:30:12.107 } 00:30:12.107 ], 00:30:12.107 "allow_any_host": true, 00:30:12.107 "hosts": [], 00:30:12.107 "serial_number": "SPDK00000000000001", 00:30:12.107 "model_number": "SPDK bdev Controller", 00:30:12.107 "max_namespaces": 2, 00:30:12.107 "min_cntlid": 1, 00:30:12.107 "max_cntlid": 65519, 00:30:12.107 "namespaces": [ 00:30:12.107 { 00:30:12.107 "nsid": 1, 00:30:12.107 "bdev_name": "Malloc0", 00:30:12.107 "name": "Malloc0", 00:30:12.107 "nguid": "75B53CCFBE744016ACE1FFA8D3C0FCC5", 00:30:12.107 "uuid": "75b53ccf-be74-4016-ace1-ffa8d3c0fcc5" 00:30:12.107 }, 00:30:12.107 { 00:30:12.107 "nsid": 2, 00:30:12.107 "bdev_name": "Malloc1", 00:30:12.107 "name": "Malloc1", 00:30:12.107 "nguid": "1DBFF749139C485D8D63DCF20A88E8D4", 00:30:12.107 "uuid": "1dbff749-139c-485d-8d63-dcf20a88e8d4" 00:30:12.107 } 00:30:12.107 ] 00:30:12.107 } 00:30:12.107 ] 00:30:12.107 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.107 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3254929 00:30:12.107 Asynchronous Event Request test 00:30:12.107 Attaching to 10.0.0.2 00:30:12.107 Attached to 10.0.0.2 00:30:12.107 Registering asynchronous event 
callbacks... 00:30:12.107 Starting namespace attribute notice tests for all controllers... 00:30:12.107 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:30:12.107 aer_cb - Changed Namespace 00:30:12.107 Cleaning up... 00:30:12.107 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:30:12.107 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.107 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:12.366 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.366 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:30:12.366 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.366 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:12.624 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.624 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:12.624 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.624 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:12.624 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.624 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:30:12.624 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:30:12.624 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # nvmfcleanup 00:30:12.624 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:30:12.624 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:12.624 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@124 -- # set +e 00:30:12.624 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:12.624 16:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:12.624 rmmod nvme_tcp 00:30:12.624 rmmod nvme_fabrics 00:30:12.624 rmmod nvme_keyring 00:30:12.624 16:38:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:12.624 16:38:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:30:12.624 16:38:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:30:12.624 16:38:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@513 -- # '[' -n 3254728 ']' 00:30:12.624 16:38:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # killprocess 3254728 00:30:12.624 16:38:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 3254728 ']' 00:30:12.624 16:38:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 3254728 00:30:12.624 16:38:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:30:12.624 16:38:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:12.624 16:38:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3254728 00:30:12.624 16:38:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:12.624 16:38:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:12.624 16:38:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3254728' 00:30:12.624 killing process with pid 3254728 00:30:12.624 16:38:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 3254728 00:30:12.624 16:38:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 3254728 00:30:13.997 16:38:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # 
'[' '' == iso ']' 00:30:13.997 16:38:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:30:13.997 16:38:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:30:13.997 16:38:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:30:13.997 16:38:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # iptables-save 00:30:13.997 16:38:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:30:13.997 16:38:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # iptables-restore 00:30:13.997 16:38:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:13.997 16:38:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:13.997 16:38:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:13.997 16:38:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:13.997 16:38:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:15.899 16:38:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:15.899 00:30:15.899 real 0m8.083s 00:30:15.899 user 0m11.947s 00:30:15.899 sys 0m2.325s 00:30:15.899 16:38:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:15.899 16:38:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:15.899 ************************************ 00:30:15.899 END TEST nvmf_aer 00:30:15.899 ************************************ 00:30:15.899 16:38:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:15.899 16:38:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:15.899 16:38:16 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:30:15.899 16:38:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.158 ************************************ 00:30:16.158 START TEST nvmf_async_init 00:30:16.158 ************************************ 00:30:16.158 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:16.158 * Looking for test storage... 00:30:16.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:16.158 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:16.158 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lcov --version 00:30:16.158 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:16.158 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:16.158 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:16.158 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:16.158 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:16.158 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:30:16.158 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:30:16.158 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:30:16.158 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:30:16.158 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:30:16.158 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:30:16.158 16:38:16 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:30:16.158 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:16.158 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:30:16.158 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:30:16.158 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:16.158 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:16.158 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:30:16.158 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:30:16.158 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:16.158 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:30:16.158 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:30:16.158 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:30:16.158 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:30:16.158 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:16.158 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:30:16.158 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:30:16.158 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:16.158 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:16.158 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:30:16.158 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:16.158 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:16.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.159 --rc genhtml_branch_coverage=1 00:30:16.159 --rc genhtml_function_coverage=1 00:30:16.159 --rc genhtml_legend=1 00:30:16.159 --rc geninfo_all_blocks=1 00:30:16.159 --rc geninfo_unexecuted_blocks=1 00:30:16.159 00:30:16.159 ' 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:16.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.159 --rc genhtml_branch_coverage=1 00:30:16.159 --rc genhtml_function_coverage=1 00:30:16.159 --rc genhtml_legend=1 00:30:16.159 --rc geninfo_all_blocks=1 00:30:16.159 --rc geninfo_unexecuted_blocks=1 00:30:16.159 00:30:16.159 ' 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:16.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.159 --rc genhtml_branch_coverage=1 00:30:16.159 --rc genhtml_function_coverage=1 00:30:16.159 --rc genhtml_legend=1 00:30:16.159 --rc geninfo_all_blocks=1 00:30:16.159 --rc geninfo_unexecuted_blocks=1 00:30:16.159 00:30:16.159 ' 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:16.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.159 --rc genhtml_branch_coverage=1 00:30:16.159 --rc genhtml_function_coverage=1 00:30:16.159 --rc genhtml_legend=1 00:30:16.159 --rc geninfo_all_blocks=1 00:30:16.159 --rc geninfo_unexecuted_blocks=1 00:30:16.159 00:30:16.159 ' 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:16.159 16:38:16 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:16.159 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=56be7e15ea0144ffa06ad83dd75cbe91 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # prepare_net_devs 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@434 -- # local -g is_hw=no 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # remove_spdk_ns 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@652 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:16.159 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:30:16.160 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:30:16.160 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:30:16.160 16:38:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:18.062 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:18.062 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:30:18.062 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:18.062 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:18.062 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:18.062 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:18.062 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:18.062 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:30:18.062 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:18.062 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:30:18.062 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:30:18.062 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:30:18.062 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:30:18.062 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:30:18.062 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:30:18.062 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:18.062 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:18.062 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:18.062 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:18.062 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:18.062 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:18.062 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:18.062 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:18.062 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:18.062 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:18.062 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:18.062 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:30:18.062 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:30:18.062 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:30:18.062 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:30:18.062 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:30:18.062 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:30:18.062 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:18.062 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:18.062 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:18.063 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # (( 0 > 0 )) 
00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:18.063 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:18.063 16:38:18 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:18.063 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # is_hw=yes 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:18.063 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:18.321 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:18.321 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:18.321 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:18.322 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:18.322 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:18.322 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:30:18.322 00:30:18.322 --- 10.0.0.2 ping statistics --- 00:30:18.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:18.322 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:30:18.322 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:18.322 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:18.322 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:30:18.322 00:30:18.322 --- 10.0.0.1 ping statistics --- 00:30:18.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:18.322 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:30:18.322 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:18.322 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # return 0 00:30:18.322 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:30:18.322 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:18.322 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:30:18.322 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:30:18.322 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:18.322 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:30:18.322 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:30:18.322 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:30:18.322 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:30:18.322 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:30:18.322 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:18.322 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # nvmfpid=3257071 00:30:18.322 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:30:18.322 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # waitforlisten 3257071 00:30:18.322 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 3257071 ']' 00:30:18.322 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:18.322 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:18.322 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:18.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:18.322 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:18.322 16:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:18.322 [2024-09-29 16:38:18.791740] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:30:18.322 [2024-09-29 16:38:18.791879] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:18.580 [2024-09-29 16:38:18.936445] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:18.837 [2024-09-29 16:38:19.191889] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:18.837 [2024-09-29 16:38:19.191988] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:18.837 [2024-09-29 16:38:19.192012] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:18.837 [2024-09-29 16:38:19.192034] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:18.837 [2024-09-29 16:38:19.192078] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:18.837 [2024-09-29 16:38:19.192138] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:19.403 16:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:19.403 16:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:30:19.403 16:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:30:19.403 16:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:19.403 16:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:19.403 16:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:19.403 16:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:19.403 16:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.403 16:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:19.403 [2024-09-29 16:38:19.823530] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:19.403 16:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.403 16:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:30:19.403 16:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.403 16:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:19.403 null0 00:30:19.403 16:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.403 16:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:30:19.403 16:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.403 16:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:19.403 16:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.403 16:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:30:19.403 16:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.403 16:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:19.403 16:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.403 16:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 56be7e15ea0144ffa06ad83dd75cbe91 00:30:19.403 16:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.403 16:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:19.403 16:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.403 16:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:19.403 16:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.403 16:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:19.403 [2024-09-29 16:38:19.863874] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:19.403 16:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.403 16:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:30:19.403 16:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.403 16:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:19.661 nvme0n1 00:30:19.661 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.661 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:19.661 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.661 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:19.661 [ 00:30:19.661 { 00:30:19.661 "name": "nvme0n1", 00:30:19.661 "aliases": [ 00:30:19.661 "56be7e15-ea01-44ff-a06a-d83dd75cbe91" 00:30:19.661 ], 00:30:19.661 "product_name": "NVMe disk", 00:30:19.661 "block_size": 512, 00:30:19.661 "num_blocks": 2097152, 00:30:19.661 "uuid": "56be7e15-ea01-44ff-a06a-d83dd75cbe91", 00:30:19.661 "numa_id": 0, 00:30:19.661 "assigned_rate_limits": { 00:30:19.661 "rw_ios_per_sec": 0, 00:30:19.661 "rw_mbytes_per_sec": 0, 00:30:19.661 "r_mbytes_per_sec": 0, 00:30:19.661 "w_mbytes_per_sec": 0 00:30:19.661 }, 00:30:19.661 "claimed": false, 00:30:19.661 "zoned": false, 00:30:19.661 "supported_io_types": { 00:30:19.661 "read": true, 00:30:19.661 "write": true, 00:30:19.661 "unmap": false, 00:30:19.661 "flush": true, 00:30:19.661 "reset": true, 00:30:19.661 "nvme_admin": true, 00:30:19.661 "nvme_io": true, 00:30:19.661 "nvme_io_md": false, 00:30:19.661 "write_zeroes": true, 00:30:19.661 "zcopy": false, 00:30:19.661 "get_zone_info": false, 00:30:19.661 "zone_management": false, 00:30:19.661 "zone_append": false, 00:30:19.661 "compare": true, 00:30:19.661 "compare_and_write": true, 00:30:19.661 "abort": true, 00:30:19.661 "seek_hole": false, 00:30:19.661 "seek_data": false, 00:30:19.661 "copy": true, 00:30:19.661 
"nvme_iov_md": false 00:30:19.661 }, 00:30:19.661 "memory_domains": [ 00:30:19.661 { 00:30:19.661 "dma_device_id": "system", 00:30:19.661 "dma_device_type": 1 00:30:19.661 } 00:30:19.661 ], 00:30:19.661 "driver_specific": { 00:30:19.661 "nvme": [ 00:30:19.661 { 00:30:19.661 "trid": { 00:30:19.661 "trtype": "TCP", 00:30:19.661 "adrfam": "IPv4", 00:30:19.661 "traddr": "10.0.0.2", 00:30:19.661 "trsvcid": "4420", 00:30:19.661 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:19.661 }, 00:30:19.661 "ctrlr_data": { 00:30:19.661 "cntlid": 1, 00:30:19.661 "vendor_id": "0x8086", 00:30:19.661 "model_number": "SPDK bdev Controller", 00:30:19.661 "serial_number": "00000000000000000000", 00:30:19.661 "firmware_revision": "25.01", 00:30:19.661 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:19.661 "oacs": { 00:30:19.661 "security": 0, 00:30:19.661 "format": 0, 00:30:19.661 "firmware": 0, 00:30:19.661 "ns_manage": 0 00:30:19.661 }, 00:30:19.661 "multi_ctrlr": true, 00:30:19.661 "ana_reporting": false 00:30:19.661 }, 00:30:19.661 "vs": { 00:30:19.661 "nvme_version": "1.3" 00:30:19.661 }, 00:30:19.661 "ns_data": { 00:30:19.661 "id": 1, 00:30:19.661 "can_share": true 00:30:19.661 } 00:30:19.661 } 00:30:19.661 ], 00:30:19.661 "mp_policy": "active_passive" 00:30:19.661 } 00:30:19.661 } 00:30:19.661 ] 00:30:19.661 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.661 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:30:19.661 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.661 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:19.661 [2024-09-29 16:38:20.133990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:19.661 [2024-09-29 16:38:20.134158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to 
flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:30:19.919 [2024-09-29 16:38:20.276924] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:19.919 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.919 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:19.919 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.919 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:19.919 [ 00:30:19.919 { 00:30:19.919 "name": "nvme0n1", 00:30:19.919 "aliases": [ 00:30:19.919 "56be7e15-ea01-44ff-a06a-d83dd75cbe91" 00:30:19.919 ], 00:30:19.919 "product_name": "NVMe disk", 00:30:19.919 "block_size": 512, 00:30:19.919 "num_blocks": 2097152, 00:30:19.919 "uuid": "56be7e15-ea01-44ff-a06a-d83dd75cbe91", 00:30:19.919 "numa_id": 0, 00:30:19.919 "assigned_rate_limits": { 00:30:19.919 "rw_ios_per_sec": 0, 00:30:19.919 "rw_mbytes_per_sec": 0, 00:30:19.919 "r_mbytes_per_sec": 0, 00:30:19.919 "w_mbytes_per_sec": 0 00:30:19.919 }, 00:30:19.919 "claimed": false, 00:30:19.919 "zoned": false, 00:30:19.919 "supported_io_types": { 00:30:19.919 "read": true, 00:30:19.919 "write": true, 00:30:19.919 "unmap": false, 00:30:19.919 "flush": true, 00:30:19.919 "reset": true, 00:30:19.919 "nvme_admin": true, 00:30:19.919 "nvme_io": true, 00:30:19.919 "nvme_io_md": false, 00:30:19.919 "write_zeroes": true, 00:30:19.919 "zcopy": false, 00:30:19.919 "get_zone_info": false, 00:30:19.919 "zone_management": false, 00:30:19.919 "zone_append": false, 00:30:19.919 "compare": true, 00:30:19.919 "compare_and_write": true, 00:30:19.919 "abort": true, 00:30:19.919 "seek_hole": false, 00:30:19.919 "seek_data": false, 00:30:19.919 "copy": true, 00:30:19.919 "nvme_iov_md": false 00:30:19.919 }, 00:30:19.919 "memory_domains": [ 00:30:19.919 { 00:30:19.919 
"dma_device_id": "system", 00:30:19.919 "dma_device_type": 1 00:30:19.919 } 00:30:19.919 ], 00:30:19.919 "driver_specific": { 00:30:19.919 "nvme": [ 00:30:19.919 { 00:30:19.919 "trid": { 00:30:19.919 "trtype": "TCP", 00:30:19.919 "adrfam": "IPv4", 00:30:19.919 "traddr": "10.0.0.2", 00:30:19.919 "trsvcid": "4420", 00:30:19.919 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:19.919 }, 00:30:19.919 "ctrlr_data": { 00:30:19.919 "cntlid": 2, 00:30:19.919 "vendor_id": "0x8086", 00:30:19.919 "model_number": "SPDK bdev Controller", 00:30:19.919 "serial_number": "00000000000000000000", 00:30:19.919 "firmware_revision": "25.01", 00:30:19.919 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:19.919 "oacs": { 00:30:19.919 "security": 0, 00:30:19.919 "format": 0, 00:30:19.919 "firmware": 0, 00:30:19.919 "ns_manage": 0 00:30:19.919 }, 00:30:19.919 "multi_ctrlr": true, 00:30:19.919 "ana_reporting": false 00:30:19.919 }, 00:30:19.919 "vs": { 00:30:19.919 "nvme_version": "1.3" 00:30:19.920 }, 00:30:19.920 "ns_data": { 00:30:19.920 "id": 1, 00:30:19.920 "can_share": true 00:30:19.920 } 00:30:19.920 } 00:30:19.920 ], 00:30:19.920 "mp_policy": "active_passive" 00:30:19.920 } 00:30:19.920 } 00:30:19.920 ] 00:30:19.920 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.920 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:19.920 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.920 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:19.920 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.920 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:30:19.920 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.0qgQVfNSFS 00:30:19.920 16:38:20 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:30:19.920 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.0qgQVfNSFS 00:30:19.920 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.0qgQVfNSFS 00:30:19.920 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.920 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:19.920 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.920 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:30:19.920 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.920 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:19.920 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.920 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:30:19.920 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.920 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:19.920 [2024-09-29 16:38:20.338904] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:19.920 [2024-09-29 16:38:20.339207] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:19.920 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.920 16:38:20 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:30:19.920 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.920 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:19.920 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.920 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:30:19.920 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.920 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:19.920 [2024-09-29 16:38:20.354989] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:19.920 nvme0n1 00:30:19.920 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.920 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:19.920 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.920 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:19.920 [ 00:30:19.920 { 00:30:19.920 "name": "nvme0n1", 00:30:19.920 "aliases": [ 00:30:19.920 "56be7e15-ea01-44ff-a06a-d83dd75cbe91" 00:30:19.920 ], 00:30:19.920 "product_name": "NVMe disk", 00:30:19.920 "block_size": 512, 00:30:19.920 "num_blocks": 2097152, 00:30:19.920 "uuid": "56be7e15-ea01-44ff-a06a-d83dd75cbe91", 00:30:19.920 "numa_id": 0, 00:30:19.920 "assigned_rate_limits": { 00:30:19.920 "rw_ios_per_sec": 0, 00:30:19.920 "rw_mbytes_per_sec": 0, 
00:30:19.920 "r_mbytes_per_sec": 0, 00:30:19.920 "w_mbytes_per_sec": 0 00:30:19.920 }, 00:30:19.920 "claimed": false, 00:30:19.920 "zoned": false, 00:30:19.920 "supported_io_types": { 00:30:19.920 "read": true, 00:30:19.920 "write": true, 00:30:19.920 "unmap": false, 00:30:19.920 "flush": true, 00:30:19.920 "reset": true, 00:30:19.920 "nvme_admin": true, 00:30:19.920 "nvme_io": true, 00:30:19.920 "nvme_io_md": false, 00:30:19.920 "write_zeroes": true, 00:30:19.920 "zcopy": false, 00:30:19.920 "get_zone_info": false, 00:30:19.920 "zone_management": false, 00:30:19.920 "zone_append": false, 00:30:19.920 "compare": true, 00:30:19.920 "compare_and_write": true, 00:30:19.920 "abort": true, 00:30:19.920 "seek_hole": false, 00:30:19.920 "seek_data": false, 00:30:19.920 "copy": true, 00:30:19.920 "nvme_iov_md": false 00:30:19.920 }, 00:30:19.920 "memory_domains": [ 00:30:19.920 { 00:30:19.920 "dma_device_id": "system", 00:30:19.920 "dma_device_type": 1 00:30:19.920 } 00:30:19.920 ], 00:30:19.920 "driver_specific": { 00:30:19.920 "nvme": [ 00:30:19.920 { 00:30:19.920 "trid": { 00:30:19.920 "trtype": "TCP", 00:30:19.920 "adrfam": "IPv4", 00:30:19.920 "traddr": "10.0.0.2", 00:30:19.920 "trsvcid": "4421", 00:30:19.920 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:19.920 }, 00:30:19.920 "ctrlr_data": { 00:30:19.920 "cntlid": 3, 00:30:19.920 "vendor_id": "0x8086", 00:30:19.920 "model_number": "SPDK bdev Controller", 00:30:19.920 "serial_number": "00000000000000000000", 00:30:19.920 "firmware_revision": "25.01", 00:30:19.920 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:19.920 "oacs": { 00:30:19.920 "security": 0, 00:30:19.920 "format": 0, 00:30:19.920 "firmware": 0, 00:30:19.920 "ns_manage": 0 00:30:19.920 }, 00:30:19.920 "multi_ctrlr": true, 00:30:19.920 "ana_reporting": false 00:30:19.920 }, 00:30:19.920 "vs": { 00:30:19.920 "nvme_version": "1.3" 00:30:19.920 }, 00:30:19.920 "ns_data": { 00:30:19.920 "id": 1, 00:30:19.920 "can_share": true 00:30:19.920 } 00:30:19.920 } 
00:30:19.920 ], 00:30:19.920 "mp_policy": "active_passive" 00:30:19.920 } 00:30:19.920 } 00:30:19.920 ] 00:30:19.920 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.920 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:19.920 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.920 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:19.920 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.920 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.0qgQVfNSFS 00:30:19.920 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:30:19.920 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:30:19.920 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # nvmfcleanup 00:30:19.920 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:30:19.920 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:19.920 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:30:19.920 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:19.920 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:19.920 rmmod nvme_tcp 00:30:20.178 rmmod nvme_fabrics 00:30:20.178 rmmod nvme_keyring 00:30:20.178 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:20.178 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:30:20.178 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:30:20.178 16:38:20 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@513 -- # '[' -n 3257071 ']' 00:30:20.178 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # killprocess 3257071 00:30:20.178 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 3257071 ']' 00:30:20.178 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 3257071 00:30:20.178 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:30:20.178 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:20.178 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3257071 00:30:20.178 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:20.178 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:20.178 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3257071' 00:30:20.178 killing process with pid 3257071 00:30:20.178 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 3257071 00:30:20.178 16:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 3257071 00:30:21.550 16:38:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:30:21.551 16:38:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:30:21.551 16:38:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:30:21.551 16:38:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:30:21.551 16:38:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # iptables-save 00:30:21.551 16:38:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:30:21.551 
16:38:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # iptables-restore 00:30:21.551 16:38:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:21.551 16:38:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:21.551 16:38:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:21.551 16:38:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:21.551 16:38:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:23.453 16:38:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:23.453 00:30:23.453 real 0m7.508s 00:30:23.453 user 0m4.259s 00:30:23.453 sys 0m1.956s 00:30:23.453 16:38:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:23.453 16:38:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:23.453 ************************************ 00:30:23.453 END TEST nvmf_async_init 00:30:23.453 ************************************ 00:30:23.453 16:38:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:23.453 16:38:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:23.453 16:38:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:23.453 16:38:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.712 ************************************ 00:30:23.713 START TEST dma 00:30:23.713 ************************************ 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:30:23.713 * Looking for test storage... 00:30:23.713 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lcov --version 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:23.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.713 --rc genhtml_branch_coverage=1 00:30:23.713 --rc genhtml_function_coverage=1 00:30:23.713 --rc genhtml_legend=1 00:30:23.713 --rc geninfo_all_blocks=1 00:30:23.713 --rc geninfo_unexecuted_blocks=1 00:30:23.713 00:30:23.713 ' 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:23.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.713 --rc genhtml_branch_coverage=1 00:30:23.713 --rc genhtml_function_coverage=1 
00:30:23.713 --rc genhtml_legend=1 00:30:23.713 --rc geninfo_all_blocks=1 00:30:23.713 --rc geninfo_unexecuted_blocks=1 00:30:23.713 00:30:23.713 ' 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:23.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.713 --rc genhtml_branch_coverage=1 00:30:23.713 --rc genhtml_function_coverage=1 00:30:23.713 --rc genhtml_legend=1 00:30:23.713 --rc geninfo_all_blocks=1 00:30:23.713 --rc geninfo_unexecuted_blocks=1 00:30:23.713 00:30:23.713 ' 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:23.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.713 --rc genhtml_branch_coverage=1 00:30:23.713 --rc genhtml_function_coverage=1 00:30:23.713 --rc genhtml_legend=1 00:30:23.713 --rc geninfo_all_blocks=1 00:30:23.713 --rc geninfo_unexecuted_blocks=1 00:30:23.713 00:30:23.713 ' 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:30:23.713 
16:38:24 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:23.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:30:23.713 00:30:23.713 real 0m0.148s 00:30:23.713 user 0m0.105s 00:30:23.713 sys 0m0.053s 00:30:23.713 16:38:24 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:23.713 ************************************ 00:30:23.713 END TEST dma 00:30:23.713 ************************************ 00:30:23.713 16:38:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:23.714 16:38:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:23.714 16:38:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:23.714 16:38:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.714 ************************************ 00:30:23.714 START TEST nvmf_identify 00:30:23.714 ************************************ 00:30:23.714 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:23.714 * Looking for test storage... 
00:30:23.714 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:23.714 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:23.714 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:30:23.714 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:23.972 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:23.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.973 --rc genhtml_branch_coverage=1 00:30:23.973 --rc genhtml_function_coverage=1 00:30:23.973 --rc genhtml_legend=1 00:30:23.973 --rc geninfo_all_blocks=1 00:30:23.973 --rc geninfo_unexecuted_blocks=1 00:30:23.973 00:30:23.973 ' 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- 
# LCOV_OPTS=' 00:30:23.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.973 --rc genhtml_branch_coverage=1 00:30:23.973 --rc genhtml_function_coverage=1 00:30:23.973 --rc genhtml_legend=1 00:30:23.973 --rc geninfo_all_blocks=1 00:30:23.973 --rc geninfo_unexecuted_blocks=1 00:30:23.973 00:30:23.973 ' 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:23.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.973 --rc genhtml_branch_coverage=1 00:30:23.973 --rc genhtml_function_coverage=1 00:30:23.973 --rc genhtml_legend=1 00:30:23.973 --rc geninfo_all_blocks=1 00:30:23.973 --rc geninfo_unexecuted_blocks=1 00:30:23.973 00:30:23.973 ' 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:23.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.973 --rc genhtml_branch_coverage=1 00:30:23.973 --rc genhtml_function_coverage=1 00:30:23.973 --rc genhtml_legend=1 00:30:23.973 --rc geninfo_all_blocks=1 00:30:23.973 --rc geninfo_unexecuted_blocks=1 00:30:23.973 00:30:23.973 ' 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:23.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # prepare_net_devs 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@434 -- # local -g is_hw=no 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # remove_spdk_ns 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:23.973 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:30:23.974 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:30:23.974 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:30:23.974 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:25.872 16:38:26 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:25.872 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:25.872 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:25.872 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:25.872 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:25.873 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:25.873 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:25.873 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:25.873 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:25.873 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:25.873 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:25.873 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:25.873 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:25.873 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:30:25.873 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # is_hw=yes 00:30:25.873 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:30:25.873 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:30:25.873 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:30:25.873 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:25.873 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:25.873 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:25.873 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:25.873 16:38:26 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:25.873 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:25.873 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:25.873 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:25.873 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:25.873 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:25.873 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:25.873 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:25.873 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:25.873 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:25.873 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:25.873 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:25.873 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:26.137 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:26.137 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:26.137 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:26.137 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
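The trace above is the harness's `nvmf_tcp_init`: it moves the first E810 port (`cvl_0_0`) into a private network namespace so that initiator-to-target traffic crosses the physical link instead of staying on loopback. A minimal sketch of that pattern, reconstructed from the commands visible in this log (interface names `cvl_0_0`/`cvl_0_1` and the 10.0.0.0/24 addresses are taken from this run; every command requires root, and the `ipts` iptables wrapper from `nvmf/common.sh` is replaced here with plain `iptables`):

```shell
# Clear any stale addresses, then isolate the target-side port in its own namespace.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Initiator side stays in the default namespace.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip link set cvl_0_1 up

# Target side is configured inside the namespace (loopback up as well).
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port on the initiator-facing interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Verify reachability in both directions, as the harness does.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

The namespace boundary is what forces packets out through the NIC: the two ports are cabled back-to-back on the test machine, so a ping from the default namespace to 10.0.0.2 exercises the real `ice`-driven data path.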
00:30:26.137 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:26.137 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:26.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:26.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:30:26.137 00:30:26.137 --- 10.0.0.2 ping statistics --- 00:30:26.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:26.137 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:30:26.137 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:26.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:26.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:30:26.137 00:30:26.137 --- 10.0.0.1 ping statistics --- 00:30:26.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:26.137 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:30:26.137 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:26.137 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # return 0 00:30:26.137 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:30:26.137 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:26.137 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:30:26.137 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:30:26.137 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:26.137 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:30:26.137 16:38:26 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:30:26.137 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:30:26.137 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:26.137 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:26.137 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3259461 00:30:26.137 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:26.137 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:26.137 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3259461 00:30:26.137 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 3259461 ']' 00:30:26.137 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:26.137 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:26.137 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:26.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:26.137 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:26.137 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:26.137 [2024-09-29 16:38:26.615394] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:30:26.137 [2024-09-29 16:38:26.615535] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:26.451 [2024-09-29 16:38:26.756999] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:26.733 [2024-09-29 16:38:27.029875] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:26.733 [2024-09-29 16:38:27.029954] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:26.733 [2024-09-29 16:38:27.029980] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:26.733 [2024-09-29 16:38:27.030004] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:26.733 [2024-09-29 16:38:27.030023] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:26.733 [2024-09-29 16:38:27.030154] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:26.733 [2024-09-29 16:38:27.030223] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:30:26.733 [2024-09-29 16:38:27.030325] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:26.733 [2024-09-29 16:38:27.030331] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:30:27.299 16:38:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:27.299 16:38:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:30:27.299 16:38:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:27.299 16:38:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.299 16:38:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:27.299 [2024-09-29 16:38:27.621736] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:27.299 16:38:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.299 16:38:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:30:27.299 16:38:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:27.299 16:38:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:27.299 16:38:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:27.299 16:38:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.299 16:38:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:27.299 Malloc0 00:30:27.299 16:38:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.299 16:38:27 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:27.299 16:38:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.299 16:38:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:27.299 16:38:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.299 16:38:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:30:27.299 16:38:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.299 16:38:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:27.299 16:38:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.299 16:38:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:27.299 16:38:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.299 16:38:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:27.299 [2024-09-29 16:38:27.756284] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:27.299 16:38:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.299 16:38:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:27.299 16:38:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.299 16:38:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:27.299 16:38:27 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.299 16:38:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:30:27.299 16:38:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.299 16:38:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:27.299 [ 00:30:27.299 { 00:30:27.299 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:27.299 "subtype": "Discovery", 00:30:27.299 "listen_addresses": [ 00:30:27.299 { 00:30:27.299 "trtype": "TCP", 00:30:27.299 "adrfam": "IPv4", 00:30:27.299 "traddr": "10.0.0.2", 00:30:27.299 "trsvcid": "4420" 00:30:27.299 } 00:30:27.299 ], 00:30:27.299 "allow_any_host": true, 00:30:27.299 "hosts": [] 00:30:27.299 }, 00:30:27.299 { 00:30:27.299 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:27.299 "subtype": "NVMe", 00:30:27.299 "listen_addresses": [ 00:30:27.299 { 00:30:27.299 "trtype": "TCP", 00:30:27.299 "adrfam": "IPv4", 00:30:27.299 "traddr": "10.0.0.2", 00:30:27.299 "trsvcid": "4420" 00:30:27.299 } 00:30:27.299 ], 00:30:27.299 "allow_any_host": true, 00:30:27.299 "hosts": [], 00:30:27.299 "serial_number": "SPDK00000000000001", 00:30:27.299 "model_number": "SPDK bdev Controller", 00:30:27.299 "max_namespaces": 32, 00:30:27.299 "min_cntlid": 1, 00:30:27.299 "max_cntlid": 65519, 00:30:27.299 "namespaces": [ 00:30:27.299 { 00:30:27.299 "nsid": 1, 00:30:27.299 "bdev_name": "Malloc0", 00:30:27.299 "name": "Malloc0", 00:30:27.299 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:30:27.299 "eui64": "ABCDEF0123456789", 00:30:27.299 "uuid": "0355c035-fb19-4801-8f4c-5aa1671be84f" 00:30:27.299 } 00:30:27.299 ] 00:30:27.299 } 00:30:27.299 ] 00:30:27.299 16:38:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.299 16:38:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:30:27.299 [2024-09-29 16:38:27.822485] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:30:27.299 [2024-09-29 16:38:27.822574] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3259621 ] 00:30:27.561 [2024-09-29 16:38:27.885926] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:30:27.561 [2024-09-29 16:38:27.886057] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:27.561 [2024-09-29 16:38:27.886079] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:27.561 [2024-09-29 16:38:27.886112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:27.561 [2024-09-29 16:38:27.886137] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:27.561 [2024-09-29 16:38:27.890252] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:30:27.561 [2024-09-29 16:38:27.890344] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000015700 0 00:30:27.561 [2024-09-29 16:38:27.890596] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:27.561 [2024-09-29 16:38:27.890636] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:27.561 [2024-09-29 16:38:27.890655] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:27.561 [2024-09-29 16:38:27.890668] 
nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:27.561 [2024-09-29 16:38:27.890771] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.561 [2024-09-29 16:38:27.890795] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.561 [2024-09-29 16:38:27.890813] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:27.561 [2024-09-29 16:38:27.890853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:27.561 [2024-09-29 16:38:27.890900] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:27.561 [2024-09-29 16:38:27.897708] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.561 [2024-09-29 16:38:27.897740] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.561 [2024-09-29 16:38:27.897756] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.561 [2024-09-29 16:38:27.897775] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:27.561 [2024-09-29 16:38:27.897805] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:27.561 [2024-09-29 16:38:27.897849] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:30:27.561 [2024-09-29 16:38:27.897867] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:30:27.561 [2024-09-29 16:38:27.897910] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.561 [2024-09-29 16:38:27.897926] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.561 [2024-09-29 16:38:27.897986] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x615000015700) 00:30:27.561 [2024-09-29 16:38:27.898010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.561 [2024-09-29 16:38:27.898064] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:27.561 [2024-09-29 16:38:27.898254] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.561 [2024-09-29 16:38:27.898282] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.561 [2024-09-29 16:38:27.898300] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.561 [2024-09-29 16:38:27.898315] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:27.561 [2024-09-29 16:38:27.898344] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:30:27.561 [2024-09-29 16:38:27.898368] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:30:27.561 [2024-09-29 16:38:27.898395] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.561 [2024-09-29 16:38:27.898411] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.561 [2024-09-29 16:38:27.898428] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:27.561 [2024-09-29 16:38:27.898454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.561 [2024-09-29 16:38:27.898507] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:27.561 [2024-09-29 16:38:27.898697] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.561 [2024-09-29 16:38:27.898720] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.561 [2024-09-29 16:38:27.898737] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.561 [2024-09-29 16:38:27.898750] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:27.561 [2024-09-29 16:38:27.898772] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:30:27.561 [2024-09-29 16:38:27.898797] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:30:27.561 [2024-09-29 16:38:27.898819] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.561 [2024-09-29 16:38:27.898839] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.561 [2024-09-29 16:38:27.898851] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:27.561 [2024-09-29 16:38:27.898872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.561 [2024-09-29 16:38:27.898905] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:27.561 [2024-09-29 16:38:27.899022] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.561 [2024-09-29 16:38:27.899043] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.561 [2024-09-29 16:38:27.899056] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.561 [2024-09-29 16:38:27.899067] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:27.562 [2024-09-29 16:38:27.899083] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 
(timeout 15000 ms) 00:30:27.562 [2024-09-29 16:38:27.899116] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.562 [2024-09-29 16:38:27.899139] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.562 [2024-09-29 16:38:27.899154] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:27.562 [2024-09-29 16:38:27.899192] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.562 [2024-09-29 16:38:27.899241] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:27.562 [2024-09-29 16:38:27.899354] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.562 [2024-09-29 16:38:27.899376] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.562 [2024-09-29 16:38:27.899388] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.562 [2024-09-29 16:38:27.899399] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:27.562 [2024-09-29 16:38:27.899415] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:30:27.562 [2024-09-29 16:38:27.899430] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:30:27.562 [2024-09-29 16:38:27.899452] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:27.562 [2024-09-29 16:38:27.899570] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:30:27.562 [2024-09-29 16:38:27.899585] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:30:27.562 [2024-09-29 16:38:27.899609] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:27.562 [2024-09-29 16:38:27.899623] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:27.562 [2024-09-29 16:38:27.899635] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700)
00:30:27.562 [2024-09-29 16:38:27.899655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:27.562 [2024-09-29 16:38:27.899721] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0
00:30:27.562 [2024-09-29 16:38:27.899875] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:27.562 [2024-09-29 16:38:27.899897] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:27.562 [2024-09-29 16:38:27.899910] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:27.562 [2024-09-29 16:38:27.899921] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700
00:30:27.562 [2024-09-29 16:38:27.899937] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:30:27.562 [2024-09-29 16:38:27.899970] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:27.562 [2024-09-29 16:38:27.899996] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:27.562 [2024-09-29 16:38:27.900008] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700)
00:30:27.562 [2024-09-29 16:38:27.900028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:27.562 [2024-09-29 16:38:27.900060] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0
00:30:27.562 [2024-09-29 16:38:27.900192] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:27.562 [2024-09-29 16:38:27.900213] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:27.562 [2024-09-29 16:38:27.900226] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:27.562 [2024-09-29 16:38:27.900237] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700
00:30:27.562 [2024-09-29 16:38:27.900252] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:30:27.562 [2024-09-29 16:38:27.900267] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms)
00:30:27.562 [2024-09-29 16:38:27.900288] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout)
00:30:27.562 [2024-09-29 16:38:27.900317] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms)
00:30:27.562 [2024-09-29 16:38:27.900361] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:27.562 [2024-09-29 16:38:27.900377] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700)
00:30:27.562 [2024-09-29 16:38:27.900406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:27.562 [2024-09-29 16:38:27.900452] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0
00:30:27.562 [2024-09-29 16:38:27.900735] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:30:27.562 [2024-09-29 16:38:27.900759] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:30:27.562 [2024-09-29 16:38:27.900772] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:30:27.562 [2024-09-29 16:38:27.900784] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=0
00:30:27.562 [2024-09-29 16:38:27.900799] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x615000015700): expected_datao=0, payload_size=4096
00:30:27.562 [2024-09-29 16:38:27.900813] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:27.562 [2024-09-29 16:38:27.900842] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:30:27.562 [2024-09-29 16:38:27.900860] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:30:27.562 [2024-09-29 16:38:27.945718] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:27.562 [2024-09-29 16:38:27.945747] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:27.562 [2024-09-29 16:38:27.945760] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:27.562 [2024-09-29 16:38:27.945776] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700
00:30:27.562 [2024-09-29 16:38:27.945809] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295
00:30:27.562 [2024-09-29 16:38:27.945830] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072
00:30:27.562 [2024-09-29 16:38:27.945844] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001
00:30:27.562 [2024-09-29 16:38:27.945864] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16
00:30:27.562 [2024-09-29 16:38:27.945878] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1
00:30:27.562 [2024-09-29 16:38:27.945891] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms)
00:30:27.562 [2024-09-29 16:38:27.945936] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms)
00:30:27.562 [2024-09-29 16:38:27.945961] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:27.562 [2024-09-29 16:38:27.945982] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:27.562 [2024-09-29 16:38:27.946010] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700)
00:30:27.562 [2024-09-29 16:38:27.946051] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:30:27.562 [2024-09-29 16:38:27.946088] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0
00:30:27.562 [2024-09-29 16:38:27.946251] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:27.562 [2024-09-29 16:38:27.946278] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:27.562 [2024-09-29 16:38:27.946292] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:27.562 [2024-09-29 16:38:27.946304] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700
00:30:27.562 [2024-09-29 16:38:27.946331] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:27.562 [2024-09-29 16:38:27.946347] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:27.562 [2024-09-29 16:38:27.946359] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700)
00:30:27.562 [2024-09-29 16:38:27.946379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:27.562 [2024-09-29 16:38:27.946397] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:27.562 [2024-09-29 16:38:27.946409] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:27.562 [2024-09-29 16:38:27.946420] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000015700)
00:30:27.562 [2024-09-29 16:38:27.946436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:30:27.562 [2024-09-29 16:38:27.946452] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:27.562 [2024-09-29 16:38:27.946471] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:27.562 [2024-09-29 16:38:27.946499] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000015700)
00:30:27.562 [2024-09-29 16:38:27.946517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:30:27.562 [2024-09-29 16:38:27.946534] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:27.562 [2024-09-29 16:38:27.946545] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:27.563 [2024-09-29 16:38:27.946571] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700)
00:30:27.563 [2024-09-29 16:38:27.946586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:30:27.563 [2024-09-29 16:38:27.946604] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms)
00:30:27.563 [2024-09-29 16:38:27.946646] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:30:27.563 [2024-09-29 16:38:27.946670] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:27.563 [2024-09-29 16:38:27.946707] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700)
00:30:27.563 [2024-09-29 16:38:27.946735] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:27.563 [2024-09-29 16:38:27.946775] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0
00:30:27.563 [2024-09-29 16:38:27.946795] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0
00:30:27.563 [2024-09-29 16:38:27.946808] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0
00:30:27.563 [2024-09-29 16:38:27.946821] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0
00:30:27.563 [2024-09-29 16:38:27.946834] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0
00:30:27.563 [2024-09-29 16:38:27.947023] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:27.563 [2024-09-29 16:38:27.947047] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:27.563 [2024-09-29 16:38:27.947059] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:27.563 [2024-09-29 16:38:27.947071] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700
00:30:27.563 [2024-09-29 16:38:27.947088] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us
00:30:27.563 [2024-09-29 16:38:27.947104] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout)
00:30:27.563 [2024-09-29 16:38:27.947142] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:27.563 [2024-09-29 16:38:27.947160] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700)
00:30:27.563 [2024-09-29 16:38:27.947181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:27.563 [2024-09-29 16:38:27.947213] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0
00:30:27.563 [2024-09-29 16:38:27.947364] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:30:27.563 [2024-09-29 16:38:27.947387] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:30:27.563 [2024-09-29 16:38:27.947407] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:30:27.563 [2024-09-29 16:38:27.947420] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4
00:30:27.563 [2024-09-29 16:38:27.947433] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096
00:30:27.563 [2024-09-29 16:38:27.947446] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:27.563 [2024-09-29 16:38:27.947465] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:30:27.563 [2024-09-29 16:38:27.947479] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:30:27.563 [2024-09-29 16:38:27.947506] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:27.563 [2024-09-29 16:38:27.947524] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:27.563 [2024-09-29 16:38:27.947536] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:27.563 [2024-09-29 16:38:27.947549] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700
00:30:27.563 [2024-09-29 16:38:27.947586] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state
00:30:27.563 [2024-09-29 16:38:27.947659] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:27.563 [2024-09-29 16:38:27.947695] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700)
00:30:27.563 [2024-09-29 16:38:27.947721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:27.563 [2024-09-29 16:38:27.947742] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:27.563 [2024-09-29 16:38:27.947755] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:27.563 [2024-09-29 16:38:27.947771] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700)
00:30:27.563 [2024-09-29 16:38:27.947791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:30:27.563 [2024-09-29 16:38:27.947825] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0
00:30:27.563 [2024-09-29 16:38:27.947844] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0
00:30:27.563 [2024-09-29 16:38:27.948142] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:30:27.563 [2024-09-29 16:38:27.948165] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:30:27.563 [2024-09-29 16:38:27.948178] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:30:27.563 [2024-09-29 16:38:27.948196] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=1024, cccid=4
00:30:27.563 [2024-09-29 16:38:27.948210] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=1024
00:30:27.563 [2024-09-29 16:38:27.948223] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:27.563 [2024-09-29 16:38:27.948244] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:30:27.563 [2024-09-29 16:38:27.948259] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:30:27.563 [2024-09-29 16:38:27.948275] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:27.563 [2024-09-29 16:38:27.948291] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:27.563 [2024-09-29 16:38:27.948302] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:27.563 [2024-09-29 16:38:27.948314] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700
00:30:27.563 [2024-09-29 16:38:27.988814] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:27.563 [2024-09-29 16:38:27.988844] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:27.563 [2024-09-29 16:38:27.988857] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:27.563 [2024-09-29 16:38:27.988870] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700
00:30:27.563 [2024-09-29 16:38:27.988918] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:27.563 [2024-09-29 16:38:27.988938] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700)
00:30:27.563 [2024-09-29 16:38:27.988962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:27.563 [2024-09-29 16:38:27.989014] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0
00:30:27.563 [2024-09-29 16:38:27.989179] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:30:27.563 [2024-09-29 16:38:27.989200] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:30:27.563 [2024-09-29 16:38:27.989212] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:30:27.563 [2024-09-29 16:38:27.989224] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=3072, cccid=4
00:30:27.563 [2024-09-29 16:38:27.989237] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=3072
00:30:27.563 [2024-09-29 16:38:27.989249] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:27.563 [2024-09-29 16:38:27.989271] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:30:27.563 [2024-09-29 16:38:27.989285] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:30:27.563 [2024-09-29 16:38:27.989304] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:27.563 [2024-09-29 16:38:27.989321] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:27.563 [2024-09-29 16:38:27.989345] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:27.563 [2024-09-29 16:38:27.989358] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700
00:30:27.563 [2024-09-29 16:38:27.989387] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:27.563 [2024-09-29 16:38:27.989404] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700)
00:30:27.563 [2024-09-29 16:38:27.989425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:27.563 [2024-09-29 16:38:27.989467] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0
00:30:27.564 [2024-09-29 16:38:27.989625] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:30:27.564 [2024-09-29 16:38:27.989645] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:30:27.564 [2024-09-29 16:38:27.989657] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:30:27.564 [2024-09-29 16:38:27.989668] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=8, cccid=4
00:30:27.564 [2024-09-29 16:38:27.993699] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=8
00:30:27.564 [2024-09-29 16:38:27.993713] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:27.564 [2024-09-29 16:38:27.993737] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:30:27.564 [2024-09-29 16:38:27.993752] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:30:27.564 [2024-09-29 16:38:28.033717] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:27.564 [2024-09-29 16:38:28.033745] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:27.564 [2024-09-29 16:38:28.033758] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:27.564 [2024-09-29 16:38:28.033770] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700
00:30:27.564 =====================================================
00:30:27.564 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:30:27.564 =====================================================
00:30:27.564 Controller Capabilities/Features
00:30:27.564 ================================
00:30:27.564 Vendor ID: 0000
00:30:27.564 Subsystem Vendor ID: 0000
00:30:27.564 Serial Number: ....................
00:30:27.564 Model Number: ........................................
00:30:27.564 Firmware Version: 25.01
00:30:27.564 Recommended Arb Burst: 0
00:30:27.564 IEEE OUI Identifier: 00 00 00
00:30:27.564 Multi-path I/O
00:30:27.564 May have multiple subsystem ports: No
00:30:27.564 May have multiple controllers: No
00:30:27.564 Associated with SR-IOV VF: No
00:30:27.564 Max Data Transfer Size: 131072
00:30:27.564 Max Number of Namespaces: 0
00:30:27.564 Max Number of I/O Queues: 1024
00:30:27.564 NVMe Specification Version (VS): 1.3
00:30:27.564 NVMe Specification Version (Identify): 1.3
00:30:27.564 Maximum Queue Entries: 128
00:30:27.564 Contiguous Queues Required: Yes
00:30:27.564 Arbitration Mechanisms Supported
00:30:27.564 Weighted Round Robin: Not Supported
00:30:27.564 Vendor Specific: Not Supported
00:30:27.564 Reset Timeout: 15000 ms
00:30:27.564 Doorbell Stride: 4 bytes
00:30:27.564 NVM Subsystem Reset: Not Supported
00:30:27.564 Command Sets Supported
00:30:27.564 NVM Command Set: Supported
00:30:27.564 Boot Partition: Not Supported
00:30:27.564 Memory Page Size Minimum: 4096 bytes
00:30:27.564 Memory Page Size Maximum: 4096 bytes
00:30:27.564 Persistent Memory Region: Not Supported
00:30:27.564 Optional Asynchronous Events Supported
00:30:27.564 Namespace Attribute Notices: Not Supported
00:30:27.564 Firmware Activation Notices: Not Supported
00:30:27.564 ANA Change Notices: Not Supported
00:30:27.564 PLE Aggregate Log Change Notices: Not Supported
00:30:27.564 LBA Status Info Alert Notices: Not Supported
00:30:27.564 EGE Aggregate Log Change Notices: Not Supported
00:30:27.564 Normal NVM Subsystem Shutdown event: Not Supported
00:30:27.564 Zone Descriptor Change Notices: Not Supported
00:30:27.564 Discovery Log Change Notices: Supported
00:30:27.564 Controller Attributes
00:30:27.564 128-bit Host Identifier: Not Supported
00:30:27.564 Non-Operational Permissive Mode: Not Supported
00:30:27.564 NVM Sets: Not Supported
00:30:27.564 Read Recovery Levels: Not Supported
00:30:27.564 Endurance Groups: Not Supported
00:30:27.564 Predictable Latency Mode: Not Supported
00:30:27.564 Traffic Based Keep ALive: Not Supported
00:30:27.564 Namespace Granularity: Not Supported
00:30:27.564 SQ Associations: Not Supported
00:30:27.564 UUID List: Not Supported
00:30:27.564 Multi-Domain Subsystem: Not Supported
00:30:27.564 Fixed Capacity Management: Not Supported
00:30:27.564 Variable Capacity Management: Not Supported
00:30:27.564 Delete Endurance Group: Not Supported
00:30:27.564 Delete NVM Set: Not Supported
00:30:27.564 Extended LBA Formats Supported: Not Supported
00:30:27.564 Flexible Data Placement Supported: Not Supported
00:30:27.564
00:30:27.564 Controller Memory Buffer Support
00:30:27.564 ================================
00:30:27.564 Supported: No
00:30:27.564
00:30:27.564 Persistent Memory Region Support
00:30:27.564 ================================
00:30:27.564 Supported: No
00:30:27.564
00:30:27.564 Admin Command Set Attributes
00:30:27.564 ============================
00:30:27.564 Security Send/Receive: Not Supported
00:30:27.564 Format NVM: Not Supported
00:30:27.564 Firmware Activate/Download: Not Supported
00:30:27.564 Namespace Management: Not Supported
00:30:27.564 Device Self-Test: Not Supported
00:30:27.564 Directives: Not Supported
00:30:27.564 NVMe-MI: Not Supported
00:30:27.564 Virtualization Management: Not Supported
00:30:27.564 Doorbell Buffer Config: Not Supported
00:30:27.564 Get LBA Status Capability: Not Supported
00:30:27.564 Command & Feature Lockdown Capability: Not Supported
00:30:27.564 Abort Command Limit: 1
00:30:27.564 Async Event Request Limit: 4
00:30:27.564 Number of Firmware Slots: N/A
00:30:27.564 Firmware Slot 1 Read-Only: N/A
00:30:27.564 Firmware Activation Without Reset: N/A
00:30:27.564 Multiple Update Detection Support: N/A
00:30:27.564 Firmware Update Granularity: No Information Provided
00:30:27.564 Per-Namespace SMART Log: No
00:30:27.564 Asymmetric Namespace Access Log Page: Not Supported
00:30:27.564 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:30:27.564 Command Effects Log Page: Not Supported
00:30:27.564 Get Log Page Extended Data: Supported
00:30:27.564 Telemetry Log Pages: Not Supported
00:30:27.564 Persistent Event Log Pages: Not Supported
00:30:27.564 Supported Log Pages Log Page: May Support
00:30:27.564 Commands Supported & Effects Log Page: Not Supported
00:30:27.564 Feature Identifiers & Effects Log Page:May Support
00:30:27.564 NVMe-MI Commands & Effects Log Page: May Support
00:30:27.564 Data Area 4 for Telemetry Log: Not Supported
00:30:27.564 Error Log Page Entries Supported: 128
00:30:27.564 Keep Alive: Not Supported
00:30:27.564
00:30:27.564 NVM Command Set Attributes
00:30:27.564 ==========================
00:30:27.564 Submission Queue Entry Size
00:30:27.564 Max: 1
00:30:27.564 Min: 1
00:30:27.564 Completion Queue Entry Size
00:30:27.564 Max: 1
00:30:27.564 Min: 1
00:30:27.564 Number of Namespaces: 0
00:30:27.564 Compare Command: Not Supported
00:30:27.564 Write Uncorrectable Command: Not Supported
00:30:27.564 Dataset Management Command: Not Supported
00:30:27.564 Write Zeroes Command: Not Supported
00:30:27.564 Set Features Save Field: Not Supported
00:30:27.564 Reservations: Not Supported
00:30:27.564 Timestamp: Not Supported
00:30:27.564 Copy: Not Supported
00:30:27.564 Volatile Write Cache: Not Present
00:30:27.564 Atomic Write Unit (Normal): 1
00:30:27.564 Atomic Write Unit (PFail): 1
00:30:27.564 Atomic Compare & Write Unit: 1
00:30:27.564 Fused Compare & Write: Supported
00:30:27.564 Scatter-Gather List
00:30:27.564 SGL Command Set: Supported
00:30:27.564 SGL Keyed: Supported
00:30:27.564 SGL Bit Bucket Descriptor: Not Supported
00:30:27.564 SGL Metadata Pointer: Not Supported
00:30:27.564 Oversized SGL: Not Supported
00:30:27.565 SGL Metadata Address: Not Supported
00:30:27.565 SGL Offset: Supported
00:30:27.565 Transport SGL Data Block: Not Supported
00:30:27.565 Replay Protected Memory Block: Not Supported
00:30:27.565
00:30:27.565 Firmware Slot Information
00:30:27.565 =========================
00:30:27.565 Active slot: 0
00:30:27.565
00:30:27.565
00:30:27.565 Error Log
00:30:27.565 =========
00:30:27.565
00:30:27.565 Active Namespaces
00:30:27.565 =================
00:30:27.565 Discovery Log Page
00:30:27.565 ==================
00:30:27.565 Generation Counter: 2
00:30:27.565 Number of Records: 2
00:30:27.565 Record Format: 0
00:30:27.565
00:30:27.565 Discovery Log Entry 0
00:30:27.565 ----------------------
00:30:27.565 Transport Type: 3 (TCP)
00:30:27.565 Address Family: 1 (IPv4)
00:30:27.565 Subsystem Type: 3 (Current Discovery Subsystem)
00:30:27.565 Entry Flags:
00:30:27.565 Duplicate Returned Information: 1
00:30:27.565 Explicit Persistent Connection Support for Discovery: 1
00:30:27.565 Transport Requirements:
00:30:27.565 Secure Channel: Not Required
00:30:27.565 Port ID: 0 (0x0000)
00:30:27.565 Controller ID: 65535 (0xffff)
00:30:27.565 Admin Max SQ Size: 128
00:30:27.565 Transport Service Identifier: 4420
00:30:27.565 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:30:27.565 Transport Address: 10.0.0.2
00:30:27.565 Discovery Log Entry 1
00:30:27.565 ----------------------
00:30:27.565 Transport Type: 3 (TCP)
00:30:27.565 Address Family: 1 (IPv4)
00:30:27.565 Subsystem Type: 2 (NVM Subsystem)
00:30:27.565 Entry Flags:
00:30:27.565 Duplicate Returned Information: 0
00:30:27.565 Explicit Persistent Connection Support for Discovery: 0
00:30:27.565 Transport Requirements:
00:30:27.565 Secure Channel: Not Required
00:30:27.565 Port ID: 0 (0x0000)
00:30:27.565 Controller ID: 65535 (0xffff)
00:30:27.565 Admin Max SQ Size: 128
00:30:27.565 Transport Service Identifier: 4420
00:30:27.565 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:30:27.565 Transport Address: 10.0.0.2
[2024-09-29 16:38:28.033957] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:30:27.565 [2024-09-29 16:38:28.033991] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700
00:30:27.565 [2024-09-29 16:38:28.034014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:27.565 [2024-09-29 16:38:28.034044] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000015700
00:30:27.565 [2024-09-29 16:38:28.034060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:27.565 [2024-09-29 16:38:28.034073] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000015700
00:30:27.565 [2024-09-29 16:38:28.034087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:27.565 [2024-09-29 16:38:28.034099] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700
00:30:27.565 [2024-09-29 16:38:28.034113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:27.565 [2024-09-29 16:38:28.034140] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:27.565 [2024-09-29 16:38:28.034156] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:27.565 [2024-09-29 16:38:28.034168] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700)
00:30:27.565 [2024-09-29 16:38:28.034199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:27.565 [2024-09-29 16:38:28.034252] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0
00:30:27.565 [2024-09-29 16:38:28.034401] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:27.565 [2024-09-29 16:38:28.034424] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:27.565 [2024-09-29 16:38:28.034437] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:27.565 [2024-09-29 16:38:28.034450] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700
00:30:27.565 [2024-09-29 16:38:28.034471] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:27.565 [2024-09-29 16:38:28.034486] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:27.565 [2024-09-29 16:38:28.034498] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700)
00:30:27.565 [2024-09-29 16:38:28.034530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:27.565 [2024-09-29 16:38:28.034573] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0
00:30:27.565 [2024-09-29 16:38:28.034723] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:27.565 [2024-09-29 16:38:28.034745] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:27.565 [2024-09-29 16:38:28.034757] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:27.565 [2024-09-29 16:38:28.034769] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700
00:30:27.565 [2024-09-29 16:38:28.034790] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us
00:30:27.565 [2024-09-29 16:38:28.034806] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms
00:30:27.565 [2024-09-29 16:38:28.034834] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:27.565 [2024-09-29 16:38:28.034850] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:27.565 [2024-09-29 16:38:28.034862] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700)
00:30:27.565 [2024-09-29 16:38:28.034882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:27.565 [2024-09-29 16:38:28.034914] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0
00:30:27.565 [2024-09-29 16:38:28.035021] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:27.565 [2024-09-29 16:38:28.035043] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:27.565 [2024-09-29 16:38:28.035055] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:27.565 [2024-09-29 16:38:28.035067] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700
00:30:27.565 [2024-09-29 16:38:28.035095] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:27.565 [2024-09-29 16:38:28.035111] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:27.565 [2024-09-29 16:38:28.035123] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700)
00:30:27.565 [2024-09-29 16:38:28.035141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:27.565 [2024-09-29 16:38:28.035172] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0
00:30:27.565 [2024-09-29 16:38:28.035288] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:27.565 [2024-09-29 16:38:28.035308] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:27.565 [2024-09-29 16:38:28.035321] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:27.565 [2024-09-29 16:38:28.035332] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700
00:30:27.565 [2024-09-29 16:38:28.035359] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:27.565 [2024-09-29 16:38:28.035379] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:27.565 [2024-09-29 16:38:28.035391] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700)
00:30:27.565 [2024-09-29 16:38:28.035410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:27.565 [2024-09-29 16:38:28.035441] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0
00:30:27.565 [2024-09-29 16:38:28.035551] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:27.565 [2024-09-29 16:38:28.035572] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:27.565 [2024-09-29 16:38:28.035584] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:27.565 [2024-09-29 16:38:28.035596] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700
00:30:27.565 [2024-09-29 16:38:28.035622] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:27.565 [2024-09-29 16:38:28.035638] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:27.565 [2024-09-29 16:38:28.035649] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700)
00:30:27.565 [2024-09-29 16:38:28.035667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:27.565 [2024-09-29 16:38:28.035714] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0
00:30:27.565 [2024-09-29 16:38:28.035834] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:27.565 [2024-09-29 16:38:28.035864] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:27.566 [2024-09-29 16:38:28.035878] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:27.566 [2024-09-29 16:38:28.035890] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700
00:30:27.566 [2024-09-29 16:38:28.035917] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:27.566 [2024-09-29 16:38:28.035933] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:27.566 [2024-09-29 16:38:28.035944] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700)
00:30:27.566 [2024-09-29 16:38:28.035963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:27.566 [2024-09-29 16:38:28.035993] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0
00:30:27.566 [2024-09-29 16:38:28.036103] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:27.566 [2024-09-29 16:38:28.036125] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:27.566 [2024-09-29 16:38:28.036137] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:27.566 [2024-09-29 16:38:28.036149] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700
00:30:27.566 [2024-09-29 16:38:28.036176] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:27.566 [2024-09-29 16:38:28.036191] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:27.566 [2024-09-29 16:38:28.036202] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700)
00:30:27.566 [2024-09-29 16:38:28.036221] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:27.566 [2024-09-29 16:38:28.036251] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0
00:30:27.566 [2024-09-29 16:38:28.036362] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:27.566 [2024-09-29 16:38:28.036382] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:27.566 [2024-09-29 16:38:28.036395] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:27.566 [2024-09-29 16:38:28.036406] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700
00:30:27.566 [2024-09-29 16:38:28.036433] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:27.566 [2024-09-29 16:38:28.036453] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:27.566 [2024-09-29 16:38:28.036465] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700)
00:30:27.566 [2024-09-29 16:38:28.036484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:27.566 [2024-09-29 16:38:28.036514] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0
00:30:27.566 [2024-09-29 16:38:28.036632] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:27.566 [2024-09-29 16:38:28.036654] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:27.566 [2024-09-29 16:38:28.036667] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:27.566 [2024-09-29 16:38:28.036687] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:27.566 [2024-09-29 16:38:28.036716] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.566 [2024-09-29 16:38:28.036731] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.566 [2024-09-29 16:38:28.036742] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:27.566 [2024-09-29 16:38:28.036766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.566 [2024-09-29 16:38:28.036799] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:27.566 [2024-09-29 16:38:28.036911] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.566 [2024-09-29 16:38:28.036931] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.566 [2024-09-29 16:38:28.036943] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.566 [2024-09-29 16:38:28.036955] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:27.566 [2024-09-29 16:38:28.036981] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.566 [2024-09-29 16:38:28.036997] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.566 [2024-09-29 16:38:28.037008] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:27.566 [2024-09-29 16:38:28.037026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.566 [2024-09-29 16:38:28.037056] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:27.566 [2024-09-29 16:38:28.037169] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.566 [2024-09-29 16:38:28.037191] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.566 [2024-09-29 16:38:28.037203] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.566 [2024-09-29 16:38:28.037228] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:27.566 [2024-09-29 16:38:28.037257] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.566 [2024-09-29 16:38:28.037273] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.566 [2024-09-29 16:38:28.037284] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:27.566 [2024-09-29 16:38:28.037302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.566 [2024-09-29 16:38:28.037332] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:27.566 [2024-09-29 16:38:28.037445] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.566 [2024-09-29 16:38:28.037466] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.566 [2024-09-29 16:38:28.037478] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.566 [2024-09-29 16:38:28.037489] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:27.566 [2024-09-29 16:38:28.037516] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.566 [2024-09-29 16:38:28.037536] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.566 [2024-09-29 16:38:28.037548] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:27.566 [2024-09-29 16:38:28.037567] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.566 [2024-09-29 16:38:28.037597] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:27.566 [2024-09-29 16:38:28.041690] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.566 [2024-09-29 16:38:28.041715] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.566 [2024-09-29 16:38:28.041727] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.566 [2024-09-29 16:38:28.041739] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:27.566 [2024-09-29 16:38:28.041782] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.566 [2024-09-29 16:38:28.041798] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.566 [2024-09-29 16:38:28.041809] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:27.566 [2024-09-29 16:38:28.041827] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.566 [2024-09-29 16:38:28.041859] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:27.566 [2024-09-29 16:38:28.042002] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.566 [2024-09-29 16:38:28.042028] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.566 [2024-09-29 16:38:28.042042] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.566 [2024-09-29 16:38:28.042054] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:27.566 [2024-09-29 16:38:28.042076] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] 
shutdown complete in 7 milliseconds 00:30:27.566 00:30:27.566 16:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:30:27.829 [2024-09-29 16:38:28.144215] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:30:27.829 [2024-09-29 16:38:28.144319] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3259739 ] 00:30:27.829 [2024-09-29 16:38:28.199520] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:30:27.829 [2024-09-29 16:38:28.199644] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:27.829 [2024-09-29 16:38:28.203711] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:27.829 [2024-09-29 16:38:28.203752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:27.829 [2024-09-29 16:38:28.203777] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:27.829 [2024-09-29 16:38:28.204576] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:30:27.829 [2024-09-29 16:38:28.204659] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000015700 0 00:30:27.829 [2024-09-29 16:38:28.214697] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:27.829 [2024-09-29 16:38:28.214728] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:27.829 [2024-09-29 16:38:28.214750] 
nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:27.829 [2024-09-29 16:38:28.214763] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:27.829 [2024-09-29 16:38:28.214853] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.829 [2024-09-29 16:38:28.214877] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.829 [2024-09-29 16:38:28.214891] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:27.829 [2024-09-29 16:38:28.214931] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:27.829 [2024-09-29 16:38:28.214977] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:27.829 [2024-09-29 16:38:28.222699] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.829 [2024-09-29 16:38:28.222727] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.829 [2024-09-29 16:38:28.222741] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.829 [2024-09-29 16:38:28.222755] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:27.829 [2024-09-29 16:38:28.222789] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:27.829 [2024-09-29 16:38:28.222815] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:30:27.829 [2024-09-29 16:38:28.222832] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:30:27.829 [2024-09-29 16:38:28.222863] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.829 [2024-09-29 16:38:28.222878] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.829 [2024-09-29 
16:38:28.222895] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:27.829 [2024-09-29 16:38:28.222916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.829 [2024-09-29 16:38:28.222952] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:27.829 [2024-09-29 16:38:28.223092] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.829 [2024-09-29 16:38:28.223114] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.829 [2024-09-29 16:38:28.223127] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.829 [2024-09-29 16:38:28.223139] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:27.829 [2024-09-29 16:38:28.223161] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:30:27.829 [2024-09-29 16:38:28.223186] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:30:27.829 [2024-09-29 16:38:28.223208] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.829 [2024-09-29 16:38:28.223227] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.829 [2024-09-29 16:38:28.223239] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:27.829 [2024-09-29 16:38:28.223286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.829 [2024-09-29 16:38:28.223320] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:27.829 [2024-09-29 16:38:28.223479] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 
5 00:30:27.829 [2024-09-29 16:38:28.223502] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.829 [2024-09-29 16:38:28.223519] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.829 [2024-09-29 16:38:28.223532] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:27.829 [2024-09-29 16:38:28.223549] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:30:27.829 [2024-09-29 16:38:28.223578] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:30:27.829 [2024-09-29 16:38:28.223599] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.829 [2024-09-29 16:38:28.223613] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.829 [2024-09-29 16:38:28.223624] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:27.829 [2024-09-29 16:38:28.223658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.829 [2024-09-29 16:38:28.223724] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:27.829 [2024-09-29 16:38:28.223853] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.829 [2024-09-29 16:38:28.223874] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.829 [2024-09-29 16:38:28.223886] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.829 [2024-09-29 16:38:28.223898] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:27.829 [2024-09-29 16:38:28.223913] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for 
CSTS.RDY = 0 (timeout 15000 ms) 00:30:27.829 [2024-09-29 16:38:28.223941] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.829 [2024-09-29 16:38:28.223961] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.829 [2024-09-29 16:38:28.223975] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:27.829 [2024-09-29 16:38:28.223994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.829 [2024-09-29 16:38:28.224026] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:27.829 [2024-09-29 16:38:28.224177] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.829 [2024-09-29 16:38:28.224199] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.829 [2024-09-29 16:38:28.224211] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.829 [2024-09-29 16:38:28.224223] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:27.829 [2024-09-29 16:38:28.224237] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:30:27.829 [2024-09-29 16:38:28.224252] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:30:27.829 [2024-09-29 16:38:28.224274] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:27.829 [2024-09-29 16:38:28.224393] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:30:27.829 [2024-09-29 16:38:28.224413] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable 
controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:27.829 [2024-09-29 16:38:28.224440] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.829 [2024-09-29 16:38:28.224454] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.829 [2024-09-29 16:38:28.224466] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:27.829 [2024-09-29 16:38:28.224484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.829 [2024-09-29 16:38:28.224539] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:27.829 [2024-09-29 16:38:28.224694] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.829 [2024-09-29 16:38:28.224721] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.829 [2024-09-29 16:38:28.224735] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.829 [2024-09-29 16:38:28.224751] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:27.829 [2024-09-29 16:38:28.224767] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:27.829 [2024-09-29 16:38:28.224795] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.829 [2024-09-29 16:38:28.224810] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.829 [2024-09-29 16:38:28.224828] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:27.829 [2024-09-29 16:38:28.224848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.829 [2024-09-29 16:38:28.224880] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: 
tcp req 0x62600001b100, cid 0, qid 0 00:30:27.829 [2024-09-29 16:38:28.225011] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.829 [2024-09-29 16:38:28.225034] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.829 [2024-09-29 16:38:28.225046] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.829 [2024-09-29 16:38:28.225058] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:27.829 [2024-09-29 16:38:28.225073] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:27.829 [2024-09-29 16:38:28.225087] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:30:27.829 [2024-09-29 16:38:28.225109] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:30:27.829 [2024-09-29 16:38:28.225131] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:30:27.829 [2024-09-29 16:38:28.225184] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.829 [2024-09-29 16:38:28.225200] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:27.830 [2024-09-29 16:38:28.225220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.830 [2024-09-29 16:38:28.225251] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:27.830 [2024-09-29 16:38:28.225482] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:27.830 [2024-09-29 16:38:28.225505] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:27.830 [2024-09-29 16:38:28.225518] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:27.830 [2024-09-29 16:38:28.225537] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=0 00:30:27.830 [2024-09-29 16:38:28.225550] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:27.830 [2024-09-29 16:38:28.225564] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.830 [2024-09-29 16:38:28.225595] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:27.830 [2024-09-29 16:38:28.225617] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:27.830 [2024-09-29 16:38:28.265781] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.830 [2024-09-29 16:38:28.265811] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.830 [2024-09-29 16:38:28.265825] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.830 [2024-09-29 16:38:28.265837] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:27.830 [2024-09-29 16:38:28.265863] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:30:27.830 [2024-09-29 16:38:28.265884] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:30:27.830 [2024-09-29 16:38:28.265903] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:30:27.830 [2024-09-29 16:38:28.265918] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:30:27.830 [2024-09-29 16:38:28.265932] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:30:27.830 [2024-09-29 16:38:28.265954] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:30:27.830 [2024-09-29 16:38:28.265987] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:30:27.830 [2024-09-29 16:38:28.266011] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.830 [2024-09-29 16:38:28.266025] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.830 [2024-09-29 16:38:28.266037] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:27.830 [2024-09-29 16:38:28.266079] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:27.830 [2024-09-29 16:38:28.266130] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:27.830 [2024-09-29 16:38:28.266263] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.830 [2024-09-29 16:38:28.266285] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.830 [2024-09-29 16:38:28.266297] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.830 [2024-09-29 16:38:28.266309] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:27.830 [2024-09-29 16:38:28.266330] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.830 [2024-09-29 16:38:28.266349] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.830 [2024-09-29 16:38:28.266362] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:27.830 [2024-09-29 16:38:28.266386] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:27.830 [2024-09-29 16:38:28.266404] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.830 [2024-09-29 16:38:28.266416] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.830 [2024-09-29 16:38:28.266442] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000015700) 00:30:27.830 [2024-09-29 16:38:28.266458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:27.830 [2024-09-29 16:38:28.266473] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.830 [2024-09-29 16:38:28.266484] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.830 [2024-09-29 16:38:28.266494] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000015700) 00:30:27.830 [2024-09-29 16:38:28.266509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:27.830 [2024-09-29 16:38:28.266544] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.830 [2024-09-29 16:38:28.266557] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.830 [2024-09-29 16:38:28.266567] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:27.830 [2024-09-29 16:38:28.266582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:27.830 [2024-09-29 16:38:28.266596] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:27.830 [2024-09-29 16:38:28.266637] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:27.830 [2024-09-29 16:38:28.266658] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.830 [2024-09-29 16:38:28.266711] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:27.830 [2024-09-29 16:38:28.266734] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.830 [2024-09-29 16:38:28.266770] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:27.830 [2024-09-29 16:38:28.266789] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:30:27.830 [2024-09-29 16:38:28.266801] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:30:27.830 [2024-09-29 16:38:28.266814] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:27.830 [2024-09-29 16:38:28.266826] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:27.830 [2024-09-29 16:38:28.266990] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.830 [2024-09-29 16:38:28.267018] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.830 [2024-09-29 16:38:28.267032] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.830 [2024-09-29 16:38:28.267043] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:27.830 [2024-09-29 16:38:28.267060] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:30:27.830 [2024-09-29 16:38:28.267092] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to identify controller iocs specific (timeout 30000 ms) 00:30:27.830 [2024-09-29 16:38:28.267116] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:30:27.830 [2024-09-29 16:38:28.267134] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:30:27.830 [2024-09-29 16:38:28.267157] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.830 [2024-09-29 16:38:28.267171] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.830 [2024-09-29 16:38:28.267182] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:27.830 [2024-09-29 16:38:28.267201] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:27.830 [2024-09-29 16:38:28.267233] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:27.830 [2024-09-29 16:38:28.267397] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.830 [2024-09-29 16:38:28.267419] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.830 [2024-09-29 16:38:28.267432] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.830 [2024-09-29 16:38:28.267443] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:27.830 [2024-09-29 16:38:28.267561] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:30:27.830 [2024-09-29 16:38:28.267606] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:30:27.830 [2024-09-29 16:38:28.267634] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.830 [2024-09-29 16:38:28.267648] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:27.830 [2024-09-29 16:38:28.267670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.830 [2024-09-29 16:38:28.267728] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:27.830 [2024-09-29 16:38:28.267902] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:27.830 [2024-09-29 16:38:28.267923] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:27.830 [2024-09-29 16:38:28.267940] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:27.830 [2024-09-29 16:38:28.267952] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:27.830 [2024-09-29 16:38:28.267965] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:27.830 [2024-09-29 16:38:28.267984] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.830 [2024-09-29 16:38:28.268007] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:27.830 [2024-09-29 16:38:28.268023] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:27.830 [2024-09-29 16:38:28.268042] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.830 [2024-09-29 16:38:28.268060] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.830 [2024-09-29 16:38:28.268072] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.830 [2024-09-29 16:38:28.268083] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 
00:30:27.830 [2024-09-29 16:38:28.268128] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:30:27.830 [2024-09-29 16:38:28.268162] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:30:27.830 [2024-09-29 16:38:28.268218] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:30:27.830 [2024-09-29 16:38:28.268246] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.830 [2024-09-29 16:38:28.268261] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:27.830 [2024-09-29 16:38:28.268281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.830 [2024-09-29 16:38:28.268312] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:27.830 [2024-09-29 16:38:28.268527] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:27.831 [2024-09-29 16:38:28.268550] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:27.831 [2024-09-29 16:38:28.268562] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:27.831 [2024-09-29 16:38:28.268573] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:27.831 [2024-09-29 16:38:28.268586] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:27.831 [2024-09-29 16:38:28.268597] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.831 [2024-09-29 16:38:28.268615] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:27.831 [2024-09-29 16:38:28.268628] 
nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:27.831 [2024-09-29 16:38:28.268647] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.831 [2024-09-29 16:38:28.268670] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.831 [2024-09-29 16:38:28.272706] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.831 [2024-09-29 16:38:28.272719] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:27.831 [2024-09-29 16:38:28.272762] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:30:27.831 [2024-09-29 16:38:28.272809] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:30:27.831 [2024-09-29 16:38:28.272838] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.831 [2024-09-29 16:38:28.272853] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:27.831 [2024-09-29 16:38:28.272880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.831 [2024-09-29 16:38:28.272920] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:27.831 [2024-09-29 16:38:28.273086] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:27.831 [2024-09-29 16:38:28.273109] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:27.831 [2024-09-29 16:38:28.273134] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:27.831 [2024-09-29 16:38:28.273146] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, 
cccid=4 00:30:27.831 [2024-09-29 16:38:28.273159] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:27.831 [2024-09-29 16:38:28.273171] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.831 [2024-09-29 16:38:28.273190] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:27.831 [2024-09-29 16:38:28.273203] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:27.831 [2024-09-29 16:38:28.273223] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.831 [2024-09-29 16:38:28.273240] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.831 [2024-09-29 16:38:28.273252] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.831 [2024-09-29 16:38:28.273263] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:27.831 [2024-09-29 16:38:28.273292] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:30:27.831 [2024-09-29 16:38:28.273317] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:30:27.831 [2024-09-29 16:38:28.273342] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:30:27.831 [2024-09-29 16:38:28.273375] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:30:27.831 [2024-09-29 16:38:28.273391] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:30:27.831 [2024-09-29 16:38:28.273406] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:30:27.831 [2024-09-29 16:38:28.273420] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:30:27.831 [2024-09-29 16:38:28.273432] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:30:27.831 [2024-09-29 16:38:28.273446] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:30:27.831 [2024-09-29 16:38:28.273494] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.831 [2024-09-29 16:38:28.273513] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:27.831 [2024-09-29 16:38:28.273533] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.831 [2024-09-29 16:38:28.273558] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.831 [2024-09-29 16:38:28.273572] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.831 [2024-09-29 16:38:28.273583] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:27.831 [2024-09-29 16:38:28.273600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:27.831 [2024-09-29 16:38:28.273647] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:27.831 [2024-09-29 16:38:28.273682] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:27.831 [2024-09-29 16:38:28.273845] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.831 [2024-09-29 16:38:28.273868] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.831 [2024-09-29 16:38:28.273881] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.831 [2024-09-29 16:38:28.273894] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:27.831 [2024-09-29 16:38:28.273921] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.831 [2024-09-29 16:38:28.273939] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.831 [2024-09-29 16:38:28.273951] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.831 [2024-09-29 16:38:28.273962] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:27.831 [2024-09-29 16:38:28.274013] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.831 [2024-09-29 16:38:28.274029] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:27.831 [2024-09-29 16:38:28.274047] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.831 [2024-09-29 16:38:28.274078] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:27.831 [2024-09-29 16:38:28.274227] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.831 [2024-09-29 16:38:28.274248] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.831 [2024-09-29 16:38:28.274260] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.831 [2024-09-29 16:38:28.274272] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:27.831 [2024-09-29 16:38:28.274297] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.831 [2024-09-29 16:38:28.274313] 
nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:27.831 [2024-09-29 16:38:28.274336] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.831 [2024-09-29 16:38:28.274369] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:27.831 [2024-09-29 16:38:28.274488] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.831 [2024-09-29 16:38:28.274511] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.831 [2024-09-29 16:38:28.274523] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.831 [2024-09-29 16:38:28.274535] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:27.831 [2024-09-29 16:38:28.274561] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.831 [2024-09-29 16:38:28.274577] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:27.831 [2024-09-29 16:38:28.274596] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.831 [2024-09-29 16:38:28.274627] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:27.831 [2024-09-29 16:38:28.274765] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.831 [2024-09-29 16:38:28.274788] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.831 [2024-09-29 16:38:28.274801] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.831 [2024-09-29 16:38:28.274813] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:27.831 [2024-09-29 16:38:28.274857] 
nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.831 [2024-09-29 16:38:28.274875] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:27.831 [2024-09-29 16:38:28.274896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.831 [2024-09-29 16:38:28.274923] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.831 [2024-09-29 16:38:28.274938] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:27.831 [2024-09-29 16:38:28.274967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.831 [2024-09-29 16:38:28.274989] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.831 [2024-09-29 16:38:28.275003] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x615000015700) 00:30:27.831 [2024-09-29 16:38:28.275030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.831 [2024-09-29 16:38:28.275060] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.831 [2024-09-29 16:38:28.275075] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000015700) 00:30:27.831 [2024-09-29 16:38:28.275099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.831 [2024-09-29 16:38:28.275149] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:27.831 [2024-09-29 16:38:28.275168] 
nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:27.831 [2024-09-29 16:38:28.275197] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:30:27.831 [2024-09-29 16:38:28.275210] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:30:27.831 [2024-09-29 16:38:28.275526] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:27.831 [2024-09-29 16:38:28.275563] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:27.831 [2024-09-29 16:38:28.275577] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:27.832 [2024-09-29 16:38:28.275589] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=8192, cccid=5 00:30:27.832 [2024-09-29 16:38:28.275602] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x615000015700): expected_datao=0, payload_size=8192 00:30:27.832 [2024-09-29 16:38:28.275614] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.832 [2024-09-29 16:38:28.275647] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:27.832 [2024-09-29 16:38:28.275684] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:27.832 [2024-09-29 16:38:28.275707] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:27.832 [2024-09-29 16:38:28.275726] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:27.832 [2024-09-29 16:38:28.275737] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:27.832 [2024-09-29 16:38:28.275748] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=512, cccid=4 00:30:27.832 [2024-09-29 16:38:28.275761] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): 
expected_datao=0, payload_size=512 00:30:27.832 [2024-09-29 16:38:28.275772] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.832 [2024-09-29 16:38:28.275799] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:27.832 [2024-09-29 16:38:28.275814] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:27.832 [2024-09-29 16:38:28.275828] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:27.832 [2024-09-29 16:38:28.275844] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:27.832 [2024-09-29 16:38:28.275856] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:27.832 [2024-09-29 16:38:28.275867] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=512, cccid=6 00:30:27.832 [2024-09-29 16:38:28.275879] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x615000015700): expected_datao=0, payload_size=512 00:30:27.832 [2024-09-29 16:38:28.275894] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.832 [2024-09-29 16:38:28.275912] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:27.832 [2024-09-29 16:38:28.275925] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:27.832 [2024-09-29 16:38:28.275966] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:27.832 [2024-09-29 16:38:28.275982] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:27.832 [2024-09-29 16:38:28.275993] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:27.832 [2024-09-29 16:38:28.276028] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=7 00:30:27.832 [2024-09-29 16:38:28.276039] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 
00:30:27.832 [2024-09-29 16:38:28.276050] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.832 [2024-09-29 16:38:28.276066] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:27.832 [2024-09-29 16:38:28.276078] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:27.832 [2024-09-29 16:38:28.276091] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.832 [2024-09-29 16:38:28.276106] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.832 [2024-09-29 16:38:28.276116] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.832 [2024-09-29 16:38:28.276128] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:27.832 [2024-09-29 16:38:28.276168] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.832 [2024-09-29 16:38:28.276186] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.832 [2024-09-29 16:38:28.276197] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.832 [2024-09-29 16:38:28.276208] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:27.832 [2024-09-29 16:38:28.276234] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.832 [2024-09-29 16:38:28.276252] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.832 [2024-09-29 16:38:28.276263] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.832 [2024-09-29 16:38:28.276273] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x615000015700 00:30:27.832 [2024-09-29 16:38:28.276292] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.832 [2024-09-29 16:38:28.276309] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.832 [2024-09-29 16:38:28.276320] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.832 [2024-09-29 16:38:28.276330] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000015700 00:30:27.832 ===================================================== 00:30:27.832 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:27.832 ===================================================== 00:30:27.832 Controller Capabilities/Features 00:30:27.832 ================================ 00:30:27.832 Vendor ID: 8086 00:30:27.832 Subsystem Vendor ID: 8086 00:30:27.832 Serial Number: SPDK00000000000001 00:30:27.832 Model Number: SPDK bdev Controller 00:30:27.832 Firmware Version: 25.01 00:30:27.832 Recommended Arb Burst: 6 00:30:27.832 IEEE OUI Identifier: e4 d2 5c 00:30:27.832 Multi-path I/O 00:30:27.832 May have multiple subsystem ports: Yes 00:30:27.832 May have multiple controllers: Yes 00:30:27.832 Associated with SR-IOV VF: No 00:30:27.832 Max Data Transfer Size: 131072 00:30:27.832 Max Number of Namespaces: 32 00:30:27.832 Max Number of I/O Queues: 127 00:30:27.832 NVMe Specification Version (VS): 1.3 00:30:27.832 NVMe Specification Version (Identify): 1.3 00:30:27.832 Maximum Queue Entries: 128 00:30:27.832 Contiguous Queues Required: Yes 00:30:27.832 Arbitration Mechanisms Supported 00:30:27.832 Weighted Round Robin: Not Supported 00:30:27.832 Vendor Specific: Not Supported 00:30:27.832 Reset Timeout: 15000 ms 00:30:27.832 Doorbell Stride: 4 bytes 00:30:27.832 NVM Subsystem Reset: Not Supported 00:30:27.832 Command Sets Supported 00:30:27.832 NVM Command Set: Supported 00:30:27.832 Boot Partition: Not Supported 00:30:27.832 Memory Page Size Minimum: 4096 bytes 00:30:27.832 Memory Page Size Maximum: 4096 bytes 00:30:27.832 Persistent Memory Region: Not Supported 00:30:27.832 Optional Asynchronous Events Supported 00:30:27.832 Namespace Attribute Notices: Supported 00:30:27.832 Firmware Activation Notices: Not Supported 
00:30:27.832 ANA Change Notices: Not Supported 00:30:27.832 PLE Aggregate Log Change Notices: Not Supported 00:30:27.832 LBA Status Info Alert Notices: Not Supported 00:30:27.832 EGE Aggregate Log Change Notices: Not Supported 00:30:27.832 Normal NVM Subsystem Shutdown event: Not Supported 00:30:27.832 Zone Descriptor Change Notices: Not Supported 00:30:27.832 Discovery Log Change Notices: Not Supported 00:30:27.832 Controller Attributes 00:30:27.832 128-bit Host Identifier: Supported 00:30:27.832 Non-Operational Permissive Mode: Not Supported 00:30:27.832 NVM Sets: Not Supported 00:30:27.832 Read Recovery Levels: Not Supported 00:30:27.832 Endurance Groups: Not Supported 00:30:27.832 Predictable Latency Mode: Not Supported 00:30:27.832 Traffic Based Keep ALive: Not Supported 00:30:27.832 Namespace Granularity: Not Supported 00:30:27.832 SQ Associations: Not Supported 00:30:27.832 UUID List: Not Supported 00:30:27.832 Multi-Domain Subsystem: Not Supported 00:30:27.832 Fixed Capacity Management: Not Supported 00:30:27.832 Variable Capacity Management: Not Supported 00:30:27.832 Delete Endurance Group: Not Supported 00:30:27.832 Delete NVM Set: Not Supported 00:30:27.832 Extended LBA Formats Supported: Not Supported 00:30:27.832 Flexible Data Placement Supported: Not Supported 00:30:27.832 00:30:27.832 Controller Memory Buffer Support 00:30:27.832 ================================ 00:30:27.832 Supported: No 00:30:27.832 00:30:27.832 Persistent Memory Region Support 00:30:27.832 ================================ 00:30:27.832 Supported: No 00:30:27.832 00:30:27.832 Admin Command Set Attributes 00:30:27.832 ============================ 00:30:27.832 Security Send/Receive: Not Supported 00:30:27.832 Format NVM: Not Supported 00:30:27.832 Firmware Activate/Download: Not Supported 00:30:27.832 Namespace Management: Not Supported 00:30:27.832 Device Self-Test: Not Supported 00:30:27.832 Directives: Not Supported 00:30:27.832 NVMe-MI: Not Supported 00:30:27.832 Virtualization 
Management: Not Supported 00:30:27.832 Doorbell Buffer Config: Not Supported 00:30:27.832 Get LBA Status Capability: Not Supported 00:30:27.832 Command & Feature Lockdown Capability: Not Supported 00:30:27.832 Abort Command Limit: 4 00:30:27.832 Async Event Request Limit: 4 00:30:27.832 Number of Firmware Slots: N/A 00:30:27.832 Firmware Slot 1 Read-Only: N/A 00:30:27.832 Firmware Activation Without Reset: N/A 00:30:27.832 Multiple Update Detection Support: N/A 00:30:27.832 Firmware Update Granularity: No Information Provided 00:30:27.832 Per-Namespace SMART Log: No 00:30:27.832 Asymmetric Namespace Access Log Page: Not Supported 00:30:27.832 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:30:27.832 Command Effects Log Page: Supported 00:30:27.832 Get Log Page Extended Data: Supported 00:30:27.832 Telemetry Log Pages: Not Supported 00:30:27.832 Persistent Event Log Pages: Not Supported 00:30:27.832 Supported Log Pages Log Page: May Support 00:30:27.832 Commands Supported & Effects Log Page: Not Supported 00:30:27.832 Feature Identifiers & Effects Log Page:May Support 00:30:27.832 NVMe-MI Commands & Effects Log Page: May Support 00:30:27.832 Data Area 4 for Telemetry Log: Not Supported 00:30:27.832 Error Log Page Entries Supported: 128 00:30:27.832 Keep Alive: Supported 00:30:27.832 Keep Alive Granularity: 10000 ms 00:30:27.832 00:30:27.832 NVM Command Set Attributes 00:30:27.832 ========================== 00:30:27.832 Submission Queue Entry Size 00:30:27.832 Max: 64 00:30:27.833 Min: 64 00:30:27.833 Completion Queue Entry Size 00:30:27.833 Max: 16 00:30:27.833 Min: 16 00:30:27.833 Number of Namespaces: 32 00:30:27.833 Compare Command: Supported 00:30:27.833 Write Uncorrectable Command: Not Supported 00:30:27.833 Dataset Management Command: Supported 00:30:27.833 Write Zeroes Command: Supported 00:30:27.833 Set Features Save Field: Not Supported 00:30:27.833 Reservations: Supported 00:30:27.833 Timestamp: Not Supported 00:30:27.833 Copy: Supported 00:30:27.833 Volatile 
Write Cache: Present 00:30:27.833 Atomic Write Unit (Normal): 1 00:30:27.833 Atomic Write Unit (PFail): 1 00:30:27.833 Atomic Compare & Write Unit: 1 00:30:27.833 Fused Compare & Write: Supported 00:30:27.833 Scatter-Gather List 00:30:27.833 SGL Command Set: Supported 00:30:27.833 SGL Keyed: Supported 00:30:27.833 SGL Bit Bucket Descriptor: Not Supported 00:30:27.833 SGL Metadata Pointer: Not Supported 00:30:27.833 Oversized SGL: Not Supported 00:30:27.833 SGL Metadata Address: Not Supported 00:30:27.833 SGL Offset: Supported 00:30:27.833 Transport SGL Data Block: Not Supported 00:30:27.833 Replay Protected Memory Block: Not Supported 00:30:27.833 00:30:27.833 Firmware Slot Information 00:30:27.833 ========================= 00:30:27.833 Active slot: 1 00:30:27.833 Slot 1 Firmware Revision: 25.01 00:30:27.833 00:30:27.833 00:30:27.833 Commands Supported and Effects 00:30:27.833 ============================== 00:30:27.833 Admin Commands 00:30:27.833 -------------- 00:30:27.833 Get Log Page (02h): Supported 00:30:27.833 Identify (06h): Supported 00:30:27.833 Abort (08h): Supported 00:30:27.833 Set Features (09h): Supported 00:30:27.833 Get Features (0Ah): Supported 00:30:27.833 Asynchronous Event Request (0Ch): Supported 00:30:27.833 Keep Alive (18h): Supported 00:30:27.833 I/O Commands 00:30:27.833 ------------ 00:30:27.833 Flush (00h): Supported LBA-Change 00:30:27.833 Write (01h): Supported LBA-Change 00:30:27.833 Read (02h): Supported 00:30:27.833 Compare (05h): Supported 00:30:27.833 Write Zeroes (08h): Supported LBA-Change 00:30:27.833 Dataset Management (09h): Supported LBA-Change 00:30:27.833 Copy (19h): Supported LBA-Change 00:30:27.833 00:30:27.833 Error Log 00:30:27.833 ========= 00:30:27.833 00:30:27.833 Arbitration 00:30:27.833 =========== 00:30:27.833 Arbitration Burst: 1 00:30:27.833 00:30:27.833 Power Management 00:30:27.833 ================ 00:30:27.833 Number of Power States: 1 00:30:27.833 Current Power State: Power State #0 00:30:27.833 Power State 
#0: 00:30:27.833 Max Power: 0.00 W 00:30:27.833 Non-Operational State: Operational 00:30:27.833 Entry Latency: Not Reported 00:30:27.833 Exit Latency: Not Reported 00:30:27.833 Relative Read Throughput: 0 00:30:27.833 Relative Read Latency: 0 00:30:27.833 Relative Write Throughput: 0 00:30:27.833 Relative Write Latency: 0 00:30:27.833 Idle Power: Not Reported 00:30:27.833 Active Power: Not Reported 00:30:27.833 Non-Operational Permissive Mode: Not Supported 00:30:27.833 00:30:27.833 Health Information 00:30:27.833 ================== 00:30:27.833 Critical Warnings: 00:30:27.833 Available Spare Space: OK 00:30:27.833 Temperature: OK 00:30:27.833 Device Reliability: OK 00:30:27.833 Read Only: No 00:30:27.833 Volatile Memory Backup: OK 00:30:27.833 Current Temperature: 0 Kelvin (-273 Celsius) 00:30:27.833 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:30:27.833 Available Spare: 0% 00:30:27.833 Available Spare Threshold: 0% 00:30:27.833 Life Percentage Used:[2024-09-29 16:38:28.276539] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.833 [2024-09-29 16:38:28.276557] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000015700) 00:30:27.833 [2024-09-29 16:38:28.276577] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.833 [2024-09-29 16:38:28.276610] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:30:27.833 [2024-09-29 16:38:28.280705] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.833 [2024-09-29 16:38:28.280731] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.833 [2024-09-29 16:38:28.280745] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.833 [2024-09-29 16:38:28.280757] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on 
tqpair=0x615000015700 00:30:27.833 [2024-09-29 16:38:28.280849] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:30:27.833 [2024-09-29 16:38:28.280883] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:27.833 [2024-09-29 16:38:28.280910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:27.833 [2024-09-29 16:38:28.280932] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000015700 00:30:27.833 [2024-09-29 16:38:28.280948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:27.833 [2024-09-29 16:38:28.280971] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000015700 00:30:27.833 [2024-09-29 16:38:28.281001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:27.833 [2024-09-29 16:38:28.281015] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:27.833 [2024-09-29 16:38:28.281037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:27.833 [2024-09-29 16:38:28.281058] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.833 [2024-09-29 16:38:28.281072] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.833 [2024-09-29 16:38:28.281083] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:27.833 [2024-09-29 16:38:28.281103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.833 [2024-09-29 
16:38:28.281138] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:27.833 [2024-09-29 16:38:28.281309] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.833 [2024-09-29 16:38:28.281332] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.833 [2024-09-29 16:38:28.281345] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.833 [2024-09-29 16:38:28.281357] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:27.833 [2024-09-29 16:38:28.281384] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.833 [2024-09-29 16:38:28.281400] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.833 [2024-09-29 16:38:28.281412] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:27.833 [2024-09-29 16:38:28.281431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.833 [2024-09-29 16:38:28.281486] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:27.833 [2024-09-29 16:38:28.281724] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.833 [2024-09-29 16:38:28.281748] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.833 [2024-09-29 16:38:28.281761] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.833 [2024-09-29 16:38:28.281773] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:27.833 [2024-09-29 16:38:28.281788] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:30:27.833 [2024-09-29 16:38:28.281802] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
shutdown timeout = 10000 ms 00:30:27.833 [2024-09-29 16:38:28.281829] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.833 [2024-09-29 16:38:28.281845] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.833 [2024-09-29 16:38:28.281867] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:27.833 [2024-09-29 16:38:28.281887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.833 [2024-09-29 16:38:28.281919] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:27.833 [2024-09-29 16:38:28.282025] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.834 [2024-09-29 16:38:28.282050] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.834 [2024-09-29 16:38:28.282064] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.834 [2024-09-29 16:38:28.282075] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:27.834 [2024-09-29 16:38:28.282104] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.834 [2024-09-29 16:38:28.282120] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.834 [2024-09-29 16:38:28.282131] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:27.834 [2024-09-29 16:38:28.282150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.834 [2024-09-29 16:38:28.282181] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:27.834 [2024-09-29 16:38:28.282334] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.834 [2024-09-29 16:38:28.282355] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.834 [2024-09-29 16:38:28.282368] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.834 [2024-09-29 16:38:28.282380] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:27.834 [2024-09-29 16:38:28.282406] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.834 [2024-09-29 16:38:28.282422] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.834 [2024-09-29 16:38:28.282433] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:27.834 [2024-09-29 16:38:28.282457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.834 [2024-09-29 16:38:28.282489] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:27.834 [2024-09-29 16:38:28.282611] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.834 [2024-09-29 16:38:28.282633] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.834 [2024-09-29 16:38:28.282645] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.834 [2024-09-29 16:38:28.282666] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:27.834 [2024-09-29 16:38:28.282704] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.834 [2024-09-29 16:38:28.282720] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.834 [2024-09-29 16:38:28.282732] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:27.834 [2024-09-29 16:38:28.282751] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.834 
[2024-09-29 16:38:28.282782] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:27.834 [2024-09-29 16:38:28.282899] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.834 [2024-09-29 16:38:28.282922] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.834 [2024-09-29 16:38:28.282934] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.834 [2024-09-29 16:38:28.282946] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:27.834 [2024-09-29 16:38:28.282973] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.834 [2024-09-29 16:38:28.282988] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.834 [2024-09-29 16:38:28.283000] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:27.834 [2024-09-29 16:38:28.283018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.834 [2024-09-29 16:38:28.283049] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:27.834 [2024-09-29 16:38:28.283179] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.834 [2024-09-29 16:38:28.283200] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.834 [2024-09-29 16:38:28.283216] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.834 [2024-09-29 16:38:28.283229] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:27.834 [2024-09-29 16:38:28.283256] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.834 [2024-09-29 16:38:28.283272] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.834 [2024-09-29 16:38:28.283283] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:27.834 [2024-09-29 16:38:28.283302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.834 [2024-09-29 16:38:28.283332] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:27.834 [2024-09-29 16:38:28.283442] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.834 [2024-09-29 16:38:28.283463] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.834 [2024-09-29 16:38:28.283476] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.834 [2024-09-29 16:38:28.283487] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:27.834 [2024-09-29 16:38:28.283514] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.834 [2024-09-29 16:38:28.283530] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.834 [2024-09-29 16:38:28.283541] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:27.834 [2024-09-29 16:38:28.283559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.834 [2024-09-29 16:38:28.283589] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:27.834 [2024-09-29 16:38:28.283725] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.834 [2024-09-29 16:38:28.283748] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.834 [2024-09-29 16:38:28.283761] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.834 [2024-09-29 16:38:28.283772] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on 
tqpair=0x615000015700 00:30:27.834 [2024-09-29 16:38:28.283799] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.834 [2024-09-29 16:38:28.283814] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.834 [2024-09-29 16:38:28.283825] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:27.834 [2024-09-29 16:38:28.283844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.834 [2024-09-29 16:38:28.283875] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:27.834 [2024-09-29 16:38:28.283976] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.834 [2024-09-29 16:38:28.284003] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.834 [2024-09-29 16:38:28.284016] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.834 [2024-09-29 16:38:28.284027] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:27.834 [2024-09-29 16:38:28.284054] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.834 [2024-09-29 16:38:28.284069] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.834 [2024-09-29 16:38:28.284081] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:27.834 [2024-09-29 16:38:28.284099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.834 [2024-09-29 16:38:28.284130] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:27.834 [2024-09-29 16:38:28.284255] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.834 [2024-09-29 16:38:28.284278] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.834 [2024-09-29 16:38:28.284295] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.834 [2024-09-29 16:38:28.284307] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:27.834 [2024-09-29 16:38:28.284335] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.834 [2024-09-29 16:38:28.284350] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.834 [2024-09-29 16:38:28.284362] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:27.834 [2024-09-29 16:38:28.284380] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.834 [2024-09-29 16:38:28.284410] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:27.834 [2024-09-29 16:38:28.284554] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.834 [2024-09-29 16:38:28.284575] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.834 [2024-09-29 16:38:28.284587] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.834 [2024-09-29 16:38:28.284599] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:27.834 [2024-09-29 16:38:28.284626] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.834 [2024-09-29 16:38:28.284641] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.834 [2024-09-29 16:38:28.284652] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:27.834 [2024-09-29 16:38:28.288697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.834 
[2024-09-29 16:38:28.288758] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:27.834 [2024-09-29 16:38:28.288978] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.834 [2024-09-29 16:38:28.289001] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.834 [2024-09-29 16:38:28.289014] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.834 [2024-09-29 16:38:28.289026] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:27.834 [2024-09-29 16:38:28.289049] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:30:27.834 0% 00:30:27.834 Data Units Read: 0 00:30:27.834 Data Units Written: 0 00:30:27.834 Host Read Commands: 0 00:30:27.834 Host Write Commands: 0 00:30:27.834 Controller Busy Time: 0 minutes 00:30:27.834 Power Cycles: 0 00:30:27.834 Power On Hours: 0 hours 00:30:27.834 Unsafe Shutdowns: 0 00:30:27.834 Unrecoverable Media Errors: 0 00:30:27.834 Lifetime Error Log Entries: 0 00:30:27.834 Warning Temperature Time: 0 minutes 00:30:27.834 Critical Temperature Time: 0 minutes 00:30:27.834 00:30:27.834 Number of Queues 00:30:27.834 ================ 00:30:27.834 Number of I/O Submission Queues: 127 00:30:27.834 Number of I/O Completion Queues: 127 00:30:27.834 00:30:27.834 Active Namespaces 00:30:27.834 ================= 00:30:27.834 Namespace ID:1 00:30:27.834 Error Recovery Timeout: Unlimited 00:30:27.834 Command Set Identifier: NVM (00h) 00:30:27.834 Deallocate: Supported 00:30:27.834 Deallocated/Unwritten Error: Not Supported 00:30:27.834 Deallocated Read Value: Unknown 00:30:27.834 Deallocate in Write Zeroes: Not Supported 00:30:27.834 Deallocated Guard Field: 0xFFFF 00:30:27.834 Flush: Supported 00:30:27.835 Reservation: Supported 00:30:27.835 Namespace Sharing Capabilities: Multiple Controllers 00:30:27.835 Size 
(in LBAs): 131072 (0GiB) 00:30:27.835 Capacity (in LBAs): 131072 (0GiB) 00:30:27.835 Utilization (in LBAs): 131072 (0GiB) 00:30:27.835 NGUID: ABCDEF0123456789ABCDEF0123456789 00:30:27.835 EUI64: ABCDEF0123456789 00:30:27.835 UUID: 0355c035-fb19-4801-8f4c-5aa1671be84f 00:30:27.835 Thin Provisioning: Not Supported 00:30:27.835 Per-NS Atomic Units: Yes 00:30:27.835 Atomic Boundary Size (Normal): 0 00:30:27.835 Atomic Boundary Size (PFail): 0 00:30:27.835 Atomic Boundary Offset: 0 00:30:27.835 Maximum Single Source Range Length: 65535 00:30:27.835 Maximum Copy Length: 65535 00:30:27.835 Maximum Source Range Count: 1 00:30:27.835 NGUID/EUI64 Never Reused: No 00:30:27.835 Namespace Write Protected: No 00:30:27.835 Number of LBA Formats: 1 00:30:27.835 Current LBA Format: LBA Format #00 00:30:27.835 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:27.835 00:30:27.835 16:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:30:27.835 16:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:27.835 16:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.835 16:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:27.835 16:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.835 16:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:30:27.835 16:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:30:27.835 16:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@512 -- # nvmfcleanup 00:30:27.835 16:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:30:27.835 16:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:27.835 16:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:30:27.835 
16:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:27.835 16:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:27.835 rmmod nvme_tcp 00:30:27.835 rmmod nvme_fabrics 00:30:28.093 rmmod nvme_keyring 00:30:28.093 16:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:28.093 16:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:30:28.093 16:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:30:28.093 16:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@513 -- # '[' -n 3259461 ']' 00:30:28.093 16:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # killprocess 3259461 00:30:28.093 16:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 3259461 ']' 00:30:28.093 16:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 3259461 00:30:28.093 16:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:30:28.093 16:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:28.093 16:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3259461 00:30:28.093 16:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:28.093 16:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:28.093 16:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3259461' 00:30:28.093 killing process with pid 3259461 00:30:28.093 16:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 3259461 00:30:28.093 16:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 3259461 00:30:29.470 16:38:29 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:30:29.470 16:38:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:30:29.470 16:38:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:30:29.470 16:38:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:30:29.470 16:38:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-save 00:30:29.470 16:38:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:30:29.470 16:38:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-restore 00:30:29.470 16:38:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:29.470 16:38:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:29.470 16:38:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:29.470 16:38:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:29.470 16:38:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:31.372 16:38:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:31.372 00:30:31.372 real 0m7.714s 00:30:31.372 user 0m10.884s 00:30:31.372 sys 0m2.227s 00:30:31.372 16:38:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:31.372 16:38:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:31.372 ************************************ 00:30:31.372 END TEST nvmf_identify 00:30:31.372 ************************************ 00:30:31.631 16:38:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:31.631 16:38:31 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:31.631 16:38:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:31.631 16:38:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:31.631 ************************************ 00:30:31.631 START TEST nvmf_perf 00:30:31.631 ************************************ 00:30:31.631 16:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:31.631 * Looking for test storage... 00:30:31.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:31.631 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:31.631 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:30:31.631 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:31.631 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:31.631 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:31.631 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:31.631 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:31.631 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:30:31.631 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:30:31.631 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:30:31.631 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:30:31.631 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:30:31.631 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:30:31.631 
16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:30:31.631 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:31.631 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:30:31.631 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:30:31.631 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:31.631 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:31.631 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:30:31.631 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:30:31.631 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:31.631 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:30:31.631 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:30:31.631 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:30:31.631 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:30:31.631 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:31.631 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:30:31.631 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:30:31.631 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:31.631 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:31.631 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:30:31.631 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:31.631 16:38:32 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:31.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.631 --rc genhtml_branch_coverage=1 00:30:31.631 --rc genhtml_function_coverage=1 00:30:31.631 --rc genhtml_legend=1 00:30:31.631 --rc geninfo_all_blocks=1 00:30:31.631 --rc geninfo_unexecuted_blocks=1 00:30:31.631 00:30:31.631 ' 00:30:31.631 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:31.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.631 --rc genhtml_branch_coverage=1 00:30:31.631 --rc genhtml_function_coverage=1 00:30:31.631 --rc genhtml_legend=1 00:30:31.631 --rc geninfo_all_blocks=1 00:30:31.632 --rc geninfo_unexecuted_blocks=1 00:30:31.632 00:30:31.632 ' 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:31.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.632 --rc genhtml_branch_coverage=1 00:30:31.632 --rc genhtml_function_coverage=1 00:30:31.632 --rc genhtml_legend=1 00:30:31.632 --rc geninfo_all_blocks=1 00:30:31.632 --rc geninfo_unexecuted_blocks=1 00:30:31.632 00:30:31.632 ' 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:31.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.632 --rc genhtml_branch_coverage=1 00:30:31.632 --rc genhtml_function_coverage=1 00:30:31.632 --rc genhtml_legend=1 00:30:31.632 --rc geninfo_all_blocks=1 00:30:31.632 --rc geninfo_unexecuted_blocks=1 00:30:31.632 00:30:31.632 ' 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:31.632 16:38:32 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:31.632 16:38:32 
nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:31.632 16:38:32 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:31.632 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # prepare_net_devs 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:30:31.632 16:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:33.534 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:33.534 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:30:33.534 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:33.534 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:33.534 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:33.534 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:33.534 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:33.534 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:33.535 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:33.535 16:38:34 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:33.535 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:33.535 
16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:33.535 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:33.535 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # is_hw=yes 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:33.535 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:33.793 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:33.793 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:33.793 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:33.793 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # 
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:33.793 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:33.793 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:33.793 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:33.793 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:33.793 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:33.793 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:30:33.793 00:30:33.793 --- 10.0.0.2 ping statistics --- 00:30:33.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:33.793 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:30:33.793 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:33.793 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:33.793 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:30:33.793 00:30:33.793 --- 10.0.0.1 ping statistics --- 00:30:33.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:33.793 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:30:33.793 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:33.793 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # return 0 00:30:33.793 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:30:33.793 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:33.793 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:30:33.793 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:30:33.794 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:33.794 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:30:33.794 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:30:33.794 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:30:33.794 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:30:33.794 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:33.794 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:33.794 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # nvmfpid=3261813 00:30:33.794 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:33.794 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # waitforlisten 3261813 00:30:33.794 
16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 3261813 ']' 00:30:33.794 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:33.794 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:33.794 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:33.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:33.794 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:33.794 16:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:33.794 [2024-09-29 16:38:34.303311] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:30:33.794 [2024-09-29 16:38:34.303456] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:34.052 [2024-09-29 16:38:34.438630] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:34.310 [2024-09-29 16:38:34.675889] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:34.310 [2024-09-29 16:38:34.675963] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:34.310 [2024-09-29 16:38:34.675985] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:34.310 [2024-09-29 16:38:34.676006] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:34.310 [2024-09-29 16:38:34.676022] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:34.310 [2024-09-29 16:38:34.676155] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:34.310 [2024-09-29 16:38:34.676205] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:30:34.310 [2024-09-29 16:38:34.676235] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:34.310 [2024-09-29 16:38:34.676245] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:30:34.876 16:38:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:34.876 16:38:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:30:34.876 16:38:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:30:34.876 16:38:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:34.876 16:38:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:34.876 16:38:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:34.876 16:38:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:34.876 16:38:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:38.155 16:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:30:38.155 16:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:30:38.413 16:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:30:38.413 16:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:38.670 16:38:39 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:30:38.670 16:38:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:30:38.670 16:38:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:30:38.670 16:38:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:30:38.670 16:38:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:38.926 [2024-09-29 16:38:39.408099] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:38.926 16:38:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:39.184 16:38:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:39.184 16:38:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:39.442 16:38:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:39.442 16:38:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:39.700 16:38:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:40.266 [2024-09-29 16:38:40.526388] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:40.266 16:38:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:30:40.266 16:38:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:30:40.266 16:38:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:30:40.266 16:38:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:30:40.266 16:38:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:30:42.164 Initializing NVMe Controllers 00:30:42.164 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:30:42.164 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:30:42.164 Initialization complete. Launching workers. 00:30:42.164 ======================================================== 00:30:42.164 Latency(us) 00:30:42.164 Device Information : IOPS MiB/s Average min max 00:30:42.164 PCIE (0000:88:00.0) NSID 1 from core 0: 74488.24 290.97 428.98 48.71 7340.57 00:30:42.164 ======================================================== 00:30:42.164 Total : 74488.24 290.97 428.98 48.71 7340.57 00:30:42.164 00:30:42.164 16:38:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:43.097 Initializing NVMe Controllers 00:30:43.097 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:43.097 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:43.097 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:43.097 Initialization complete. Launching workers. 
00:30:43.097 ======================================================== 00:30:43.097 Latency(us) 00:30:43.097 Device Information : IOPS MiB/s Average min max 00:30:43.097 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 103.95 0.41 9850.04 217.11 45141.14 00:30:43.097 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 66.97 0.26 15051.08 4007.99 47909.59 00:30:43.097 ======================================================== 00:30:43.097 Total : 170.91 0.67 11887.88 217.11 47909.59 00:30:43.097 00:30:43.354 16:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:44.729 Initializing NVMe Controllers 00:30:44.729 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:44.729 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:44.729 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:44.729 Initialization complete. Launching workers. 
00:30:44.729 ======================================================== 00:30:44.729 Latency(us) 00:30:44.729 Device Information : IOPS MiB/s Average min max 00:30:44.729 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5376.73 21.00 5963.23 846.80 12551.35 00:30:44.729 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3793.40 14.82 8482.99 4636.32 23524.69 00:30:44.729 ======================================================== 00:30:44.729 Total : 9170.13 35.82 7005.58 846.80 23524.69 00:30:44.729 00:30:44.729 16:38:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:30:44.729 16:38:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:30:44.729 16:38:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:48.012 Initializing NVMe Controllers 00:30:48.012 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:48.012 Controller IO queue size 128, less than required. 00:30:48.012 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:48.012 Controller IO queue size 128, less than required. 00:30:48.012 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:48.012 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:48.012 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:48.012 Initialization complete. Launching workers. 
00:30:48.012 ======================================================== 00:30:48.012 Latency(us) 00:30:48.012 Device Information : IOPS MiB/s Average min max 00:30:48.012 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1337.52 334.38 100061.33 62362.68 301625.62 00:30:48.012 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 514.12 128.53 264684.73 122293.87 544318.71 00:30:48.012 ======================================================== 00:30:48.012 Total : 1851.64 462.91 145770.31 62362.68 544318.71 00:30:48.012 00:30:48.012 16:38:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:30:48.270 No valid NVMe controllers or AIO or URING devices found 00:30:48.270 Initializing NVMe Controllers 00:30:48.270 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:48.270 Controller IO queue size 128, less than required. 00:30:48.270 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:48.270 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:30:48.270 Controller IO queue size 128, less than required. 00:30:48.270 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:48.270 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:30:48.270 WARNING: Some requested NVMe devices were skipped 00:30:48.270 16:38:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:30:51.550 Initializing NVMe Controllers 00:30:51.550 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:51.550 Controller IO queue size 128, less than required. 00:30:51.550 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:51.550 Controller IO queue size 128, less than required. 00:30:51.550 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:51.550 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:51.550 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:51.550 Initialization complete. Launching workers. 
00:30:51.550 00:30:51.550 ==================== 00:30:51.550 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:30:51.550 TCP transport: 00:30:51.550 polls: 8237 00:30:51.550 idle_polls: 5169 00:30:51.550 sock_completions: 3068 00:30:51.550 nvme_completions: 5273 00:30:51.550 submitted_requests: 8012 00:30:51.550 queued_requests: 1 00:30:51.550 00:30:51.550 ==================== 00:30:51.550 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:30:51.550 TCP transport: 00:30:51.550 polls: 8626 00:30:51.550 idle_polls: 6063 00:30:51.550 sock_completions: 2563 00:30:51.550 nvme_completions: 3515 00:30:51.550 submitted_requests: 5330 00:30:51.550 queued_requests: 1 00:30:51.550 ======================================================== 00:30:51.550 Latency(us) 00:30:51.550 Device Information : IOPS MiB/s Average min max 00:30:51.550 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1315.74 328.93 99762.75 63068.78 247290.03 00:30:51.550 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 876.99 219.25 156751.42 70964.81 421122.74 00:30:51.550 ======================================================== 00:30:51.550 Total : 2192.73 548.18 122555.63 63068.78 421122.74 00:30:51.550 00:30:51.550 16:38:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:30:51.550 16:38:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:51.807 16:38:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:30:51.807 16:38:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:30:51.807 16:38:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:30:55.081 16:38:55 nvmf_tcp.nvmf_host.nvmf_perf -- 
host/perf.sh@72 -- # ls_guid=3c621b90-9bf5-4bb9-bbbf-5d65f3579d5b 00:30:55.081 16:38:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 3c621b90-9bf5-4bb9-bbbf-5d65f3579d5b 00:30:55.081 16:38:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=3c621b90-9bf5-4bb9-bbbf-5d65f3579d5b 00:30:55.081 16:38:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:55.081 16:38:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:30:55.081 16:38:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:30:55.081 16:38:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:55.339 16:38:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:55.339 { 00:30:55.339 "uuid": "3c621b90-9bf5-4bb9-bbbf-5d65f3579d5b", 00:30:55.339 "name": "lvs_0", 00:30:55.339 "base_bdev": "Nvme0n1", 00:30:55.339 "total_data_clusters": 238234, 00:30:55.339 "free_clusters": 238234, 00:30:55.339 "block_size": 512, 00:30:55.339 "cluster_size": 4194304 00:30:55.339 } 00:30:55.339 ]' 00:30:55.339 16:38:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="3c621b90-9bf5-4bb9-bbbf-5d65f3579d5b") .free_clusters' 00:30:55.339 16:38:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:30:55.339 16:38:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="3c621b90-9bf5-4bb9-bbbf-5d65f3579d5b") .cluster_size' 00:30:55.339 16:38:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:30:55.339 16:38:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:30:55.339 16:38:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 
00:30:55.339 952936 00:30:55.339 16:38:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:30:55.339 16:38:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:30:55.339 16:38:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3c621b90-9bf5-4bb9-bbbf-5d65f3579d5b lbd_0 20480 00:30:56.272 16:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=4dd35975-1e29-4cb8-a190-6e1172041229 00:30:56.272 16:38:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 4dd35975-1e29-4cb8-a190-6e1172041229 lvs_n_0 00:30:56.837 16:38:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=60d1b345-bfbd-46f7-8bf4-c0db46c3f9fd 00:30:56.837 16:38:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 60d1b345-bfbd-46f7-8bf4-c0db46c3f9fd 00:30:56.837 16:38:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=60d1b345-bfbd-46f7-8bf4-c0db46c3f9fd 00:30:56.837 16:38:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:56.837 16:38:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:30:56.837 16:38:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:30:56.837 16:38:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:57.402 16:38:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:57.402 { 00:30:57.402 "uuid": "3c621b90-9bf5-4bb9-bbbf-5d65f3579d5b", 00:30:57.402 "name": "lvs_0", 00:30:57.402 "base_bdev": "Nvme0n1", 00:30:57.402 "total_data_clusters": 238234, 00:30:57.402 "free_clusters": 233114, 00:30:57.402 "block_size": 512, 00:30:57.402 
"cluster_size": 4194304 00:30:57.402 }, 00:30:57.402 { 00:30:57.402 "uuid": "60d1b345-bfbd-46f7-8bf4-c0db46c3f9fd", 00:30:57.402 "name": "lvs_n_0", 00:30:57.402 "base_bdev": "4dd35975-1e29-4cb8-a190-6e1172041229", 00:30:57.402 "total_data_clusters": 5114, 00:30:57.402 "free_clusters": 5114, 00:30:57.402 "block_size": 512, 00:30:57.402 "cluster_size": 4194304 00:30:57.402 } 00:30:57.402 ]' 00:30:57.402 16:38:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="60d1b345-bfbd-46f7-8bf4-c0db46c3f9fd") .free_clusters' 00:30:57.402 16:38:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:30:57.402 16:38:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="60d1b345-bfbd-46f7-8bf4-c0db46c3f9fd") .cluster_size' 00:30:57.402 16:38:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:30:57.402 16:38:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:30:57.402 16:38:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:30:57.402 20456 00:30:57.402 16:38:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:30:57.402 16:38:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 60d1b345-bfbd-46f7-8bf4-c0db46c3f9fd lbd_nest_0 20456 00:30:57.662 16:38:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=0d1979d9-4c53-438a-9836-81fee164d1ed 00:30:57.662 16:38:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:57.956 16:38:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:30:57.956 16:38:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 0d1979d9-4c53-438a-9836-81fee164d1ed 00:30:58.237 16:38:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:58.495 16:38:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:30:58.495 16:38:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:30:58.495 16:38:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:58.495 16:38:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:58.495 16:38:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:10.689 Initializing NVMe Controllers 00:31:10.689 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:10.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:10.689 Initialization complete. Launching workers. 
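(The free_mb value traced above — 952936, then capped to 20480 — follows from the lvstore's free_clusters and cluster_size; a minimal sketch of that get_lvs_free_mb arithmetic, using the values bdev_lvol_get_lvstores reported for lvs_0 in this run:)

```shell
#!/usr/bin/env bash
# Sketch of the get_lvs_free_mb arithmetic visible in the trace above.
# fc and cs are the values bdev_lvol_get_lvstores reported for lvs_0.
fc=238234            # free_clusters
cs=4194304           # cluster_size in bytes (4 MiB)

# MiB available = clusters * bytes-per-cluster / bytes-per-MiB
free_mb=$((fc * cs / 1048576))    # 238234 * 4 = 952936

# host/perf.sh@77-78 then caps the lvol size at 20480 MiB
if [ "$free_mb" -gt 20480 ]; then
  free_mb=20480
fi
```

(The nested lvs_n_0 case works the same way: 5114 free clusters × 4 MiB = 20456 MiB, which is below the 20480 cap, so it is used as-is.)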
00:31:10.689 ======================================================== 00:31:10.689 Latency(us) 00:31:10.689 Device Information : IOPS MiB/s Average min max 00:31:10.689 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 48.99 0.02 20465.54 248.98 46166.80 00:31:10.689 ======================================================== 00:31:10.689 Total : 48.99 0.02 20465.54 248.98 46166.80 00:31:10.689 00:31:10.689 16:39:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:10.689 16:39:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:20.653 Initializing NVMe Controllers 00:31:20.653 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:20.653 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:20.653 Initialization complete. Launching workers. 
00:31:20.653 ======================================================== 00:31:20.653 Latency(us) 00:31:20.653 Device Information : IOPS MiB/s Average min max 00:31:20.653 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 77.90 9.74 12846.74 4086.01 50815.13 00:31:20.653 ======================================================== 00:31:20.653 Total : 77.90 9.74 12846.74 4086.01 50815.13 00:31:20.653 00:31:20.653 16:39:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:20.653 16:39:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:20.654 16:39:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:30.619 Initializing NVMe Controllers 00:31:30.619 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:30.619 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:30.619 Initialization complete. Launching workers. 
00:31:30.619 ======================================================== 00:31:30.619 Latency(us) 00:31:30.619 Device Information : IOPS MiB/s Average min max 00:31:30.619 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4670.60 2.28 6850.49 630.48 15062.49 00:31:30.619 ======================================================== 00:31:30.619 Total : 4670.60 2.28 6850.49 630.48 15062.49 00:31:30.619 00:31:30.619 16:39:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:30.619 16:39:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:40.584 Initializing NVMe Controllers 00:31:40.584 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:40.584 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:40.584 Initialization complete. Launching workers. 
00:31:40.584 ======================================================== 00:31:40.584 Latency(us) 00:31:40.584 Device Information : IOPS MiB/s Average min max 00:31:40.584 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3454.10 431.76 9265.52 801.25 22074.09 00:31:40.584 ======================================================== 00:31:40.584 Total : 3454.10 431.76 9265.52 801.25 22074.09 00:31:40.584 00:31:40.584 16:39:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:40.584 16:39:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:40.584 16:39:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:50.546 Initializing NVMe Controllers 00:31:50.546 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:50.546 Controller IO queue size 128, less than required. 00:31:50.546 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:50.546 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:50.546 Initialization complete. Launching workers. 
00:31:50.546 ======================================================== 00:31:50.546 Latency(us) 00:31:50.546 Device Information : IOPS MiB/s Average min max 00:31:50.546 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8119.58 3.96 15781.89 1847.68 48100.65 00:31:50.546 ======================================================== 00:31:50.546 Total : 8119.58 3.96 15781.89 1847.68 48100.65 00:31:50.546 00:31:50.805 16:39:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:50.805 16:39:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:03.024 Initializing NVMe Controllers 00:32:03.024 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:03.024 Controller IO queue size 128, less than required. 00:32:03.024 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:03.024 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:03.024 Initialization complete. Launching workers. 
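(The six spdk_nvme_perf invocations in this section come from the nested qd_depth × io_size loops set up at host/perf.sh@95-99; a minimal sketch of that sweep, with the perf command line taken from the trace and left commented out since it needs a live target:)

```shell
#!/usr/bin/env bash
# Sketch of the qd_depth x io_size sweep from host/perf.sh@95-99.
qd_depth=("1" "32" "128")
io_size=("512" "131072")

combos=()
for qd in "${qd_depth[@]}"; do
  for o in "${io_size[@]}"; do
    combos+=("q=$qd o=$o")
    # ./build/bin/spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
    #   -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
  done
done
echo "${#combos[@]} runs"   # 6 runs, matching the 6 latency tables in this log
```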
00:32:03.024 ======================================================== 00:32:03.024 Latency(us) 00:32:03.024 Device Information : IOPS MiB/s Average min max 00:32:03.024 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1131.07 141.38 113400.71 15546.32 254382.75 00:32:03.024 ======================================================== 00:32:03.024 Total : 1131.07 141.38 113400.71 15546.32 254382.75 00:32:03.024 00:32:03.024 16:40:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:03.024 16:40:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0d1979d9-4c53-438a-9836-81fee164d1ed 00:32:03.024 16:40:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:03.024 16:40:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4dd35975-1e29-4cb8-a190-6e1172041229 00:32:03.024 16:40:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:03.024 16:40:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:32:03.024 16:40:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:32:03.024 16:40:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # nvmfcleanup 00:32:03.024 16:40:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:32:03.024 16:40:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:03.024 16:40:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:32:03.025 16:40:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i 
in {1..20} 00:32:03.025 16:40:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:03.025 rmmod nvme_tcp 00:32:03.025 rmmod nvme_fabrics 00:32:03.025 rmmod nvme_keyring 00:32:03.025 16:40:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:03.025 16:40:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:32:03.025 16:40:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:32:03.025 16:40:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@513 -- # '[' -n 3261813 ']' 00:32:03.025 16:40:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # killprocess 3261813 00:32:03.025 16:40:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 3261813 ']' 00:32:03.025 16:40:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 3261813 00:32:03.025 16:40:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:32:03.025 16:40:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:03.025 16:40:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3261813 00:32:03.281 16:40:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:03.281 16:40:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:03.281 16:40:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3261813' 00:32:03.281 killing process with pid 3261813 00:32:03.281 16:40:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 3261813 00:32:03.281 16:40:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 3261813 00:32:05.805 16:40:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:32:05.805 16:40:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@519 -- # 
[[ tcp == \t\c\p ]] 00:32:05.805 16:40:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:32:05.805 16:40:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:32:05.805 16:40:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-save 00:32:05.805 16:40:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:32:05.805 16:40:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-restore 00:32:05.805 16:40:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:05.805 16:40:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:05.805 16:40:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:05.805 16:40:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:05.805 16:40:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:07.704 16:40:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:07.704 00:32:07.704 real 1m36.249s 00:32:07.704 user 5m56.287s 00:32:07.704 sys 0m15.381s 00:32:07.704 16:40:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:07.704 16:40:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:07.704 ************************************ 00:32:07.704 END TEST nvmf_perf 00:32:07.704 ************************************ 00:32:07.704 16:40:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:07.704 16:40:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:07.704 16:40:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:07.704 16:40:08 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:07.988 ************************************ 00:32:07.988 START TEST nvmf_fio_host 00:32:07.988 ************************************ 00:32:07.988 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:07.988 * Looking for test storage... 00:32:07.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:07.988 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:07.988 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:32:07.988 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:07.988 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:07.988 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:07.988 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:07.988 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:07.988 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:32:07.988 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:32:07.988 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:32:07.988 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:32:07.988 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:32:07.988 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:32:07.988 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:32:07.988 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:07.988 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:32:07.988 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:32:07.988 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:07.988 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:07.988 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:32:07.988 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:32:07.988 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:07.988 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- 
# export 'LCOV_OPTS= 00:32:07.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:07.989 --rc genhtml_branch_coverage=1 00:32:07.989 --rc genhtml_function_coverage=1 00:32:07.989 --rc genhtml_legend=1 00:32:07.989 --rc geninfo_all_blocks=1 00:32:07.989 --rc geninfo_unexecuted_blocks=1 00:32:07.989 00:32:07.989 ' 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:07.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:07.989 --rc genhtml_branch_coverage=1 00:32:07.989 --rc genhtml_function_coverage=1 00:32:07.989 --rc genhtml_legend=1 00:32:07.989 --rc geninfo_all_blocks=1 00:32:07.989 --rc geninfo_unexecuted_blocks=1 00:32:07.989 00:32:07.989 ' 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:07.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:07.989 --rc genhtml_branch_coverage=1 00:32:07.989 --rc genhtml_function_coverage=1 00:32:07.989 --rc genhtml_legend=1 00:32:07.989 --rc geninfo_all_blocks=1 00:32:07.989 --rc geninfo_unexecuted_blocks=1 00:32:07.989 00:32:07.989 ' 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:07.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:07.989 --rc genhtml_branch_coverage=1 00:32:07.989 --rc genhtml_function_coverage=1 00:32:07.989 --rc genhtml_legend=1 00:32:07.989 --rc geninfo_all_blocks=1 00:32:07.989 --rc geninfo_unexecuted_blocks=1 00:32:07.989 00:32:07.989 ' 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:07.989 16:40:08 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:07.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:32:07.989 16:40:08 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:32:07.989 16:40:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.922 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:09.922 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:32:09.922 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:09.922 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:09.922 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:09.922 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:09.922 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:09.922 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:32:09.922 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:09.922 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:32:09.922 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:32:09.922 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:32:09.922 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:32:09.922 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:32:09.922 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:32:09.922 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:09.922 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:09.922 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:09.922 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:09.922 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:09.922 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:09.922 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:09.922 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:09.922 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:09.922 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:09.922 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:09.922 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:32:09.922 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:32:09.922 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:32:09.922 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:32:09.922 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:32:09.922 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:32:09.922 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:09.922 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:09.922 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:09.922 16:40:10 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:09.923 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:09.923 16:40:10 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:09.923 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:09.923 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # is_hw=yes 
00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:09.923 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:32:10.182 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:10.182 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:10.182 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:10.182 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:10.182 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:10.182 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:10.182 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:10.182 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:10.182 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:10.182 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:32:10.182 00:32:10.182 --- 10.0.0.2 ping statistics --- 00:32:10.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:10.182 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:32:10.182 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:10.182 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:10.182 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:32:10.182 00:32:10.182 --- 10.0.0.1 ping statistics --- 00:32:10.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:10.182 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:32:10.182 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:10.182 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # return 0 00:32:10.182 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:32:10.182 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:10.182 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:32:10.182 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:32:10.182 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:10.182 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:32:10.182 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:32:10.182 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:32:10.182 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:32:10.182 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:10.182 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.182 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3275067 00:32:10.182 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:10.182 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:10.182 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3275067 00:32:10.182 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 3275067 ']' 00:32:10.182 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:10.182 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:10.182 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:10.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:10.182 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:10.182 16:40:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.182 [2024-09-29 16:40:10.697967] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:32:10.182 [2024-09-29 16:40:10.698121] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:10.441 [2024-09-29 16:40:10.843468] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:10.699 [2024-09-29 16:40:11.108368] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:10.699 [2024-09-29 16:40:11.108453] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:32:10.699 [2024-09-29 16:40:11.108480] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:10.699 [2024-09-29 16:40:11.108503] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:10.699 [2024-09-29 16:40:11.108522] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:10.699 [2024-09-29 16:40:11.108636] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:10.699 [2024-09-29 16:40:11.108722] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:32:10.699 [2024-09-29 16:40:11.108764] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:10.699 [2024-09-29 16:40:11.108776] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:32:11.264 16:40:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:11.264 16:40:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:32:11.264 16:40:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:11.522 [2024-09-29 16:40:11.914353] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:11.522 16:40:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:32:11.522 16:40:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:11.522 16:40:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.522 16:40:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:32:12.088 Malloc1 00:32:12.088 16:40:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:12.345 16:40:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:12.603 16:40:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:12.861 [2024-09-29 16:40:13.177223] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:12.861 16:40:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:13.118 16:40:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:32:13.118 16:40:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:13.118 16:40:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:13.118 16:40:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:13.118 16:40:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:13.118 16:40:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:13.118 16:40:13 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:13.118 16:40:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:32:13.118 16:40:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:13.118 16:40:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:13.118 16:40:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:13.118 16:40:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:32:13.118 16:40:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:13.119 16:40:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:13.119 16:40:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:13.119 16:40:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:32:13.119 16:40:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:13.119 16:40:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:13.376 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:13.376 fio-3.35 00:32:13.376 Starting 1 thread 00:32:15.904 00:32:15.904 test: (groupid=0, jobs=1): err= 0: pid=3275553: Sun Sep 29 16:40:16 2024 00:32:15.904 read: 
IOPS=6365, BW=24.9MiB/s (26.1MB/s)(50.0MiB/2009msec) 00:32:15.904 slat (usec): min=3, max=124, avg= 3.75, stdev= 2.02 00:32:15.904 clat (usec): min=3525, max=19809, avg=10937.27, stdev=959.66 00:32:15.904 lat (usec): min=3555, max=19812, avg=10941.02, stdev=959.61 00:32:15.904 clat percentiles (usec): 00:32:15.904 | 1.00th=[ 8848], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[10159], 00:32:15.904 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10945], 60.00th=[11207], 00:32:15.904 | 70.00th=[11338], 80.00th=[11731], 90.00th=[11994], 95.00th=[12387], 00:32:15.904 | 99.00th=[13042], 99.50th=[13698], 99.90th=[17433], 99.95th=[19006], 00:32:15.904 | 99.99th=[19792] 00:32:15.904 bw ( KiB/s): min=24424, max=26152, per=99.92%, avg=25444.00, stdev=735.55, samples=4 00:32:15.904 iops : min= 6106, max= 6538, avg=6361.00, stdev=183.89, samples=4 00:32:15.904 write: IOPS=6366, BW=24.9MiB/s (26.1MB/s)(50.0MiB/2009msec); 0 zone resets 00:32:15.904 slat (usec): min=3, max=114, avg= 3.81, stdev= 1.59 00:32:15.904 clat (usec): min=1302, max=17602, avg=9038.86, stdev=797.47 00:32:15.904 lat (usec): min=1310, max=17606, avg=9042.68, stdev=797.46 00:32:15.904 clat percentiles (usec): 00:32:15.904 | 1.00th=[ 7308], 5.00th=[ 7898], 10.00th=[ 8160], 20.00th=[ 8455], 00:32:15.904 | 30.00th=[ 8717], 40.00th=[ 8848], 50.00th=[ 8979], 60.00th=[ 9241], 00:32:15.904 | 70.00th=[ 9372], 80.00th=[ 9634], 90.00th=[ 9896], 95.00th=[10159], 00:32:15.904 | 99.00th=[10683], 99.50th=[10945], 99.90th=[15533], 99.95th=[16319], 00:32:15.904 | 99.99th=[17695] 00:32:15.904 bw ( KiB/s): min=25088, max=25664, per=99.98%, avg=25462.00, stdev=255.45, samples=4 00:32:15.904 iops : min= 6272, max= 6416, avg=6365.50, stdev=63.86, samples=4 00:32:15.904 lat (msec) : 2=0.01%, 4=0.08%, 10=53.18%, 20=46.72% 00:32:15.904 cpu : usr=67.65%, sys=30.01%, ctx=81, majf=0, minf=1543 00:32:15.904 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:32:15.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:32:15.904 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:15.904 issued rwts: total=12789,12791,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:15.904 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:15.904 00:32:15.904 Run status group 0 (all jobs): 00:32:15.904 READ: bw=24.9MiB/s (26.1MB/s), 24.9MiB/s-24.9MiB/s (26.1MB/s-26.1MB/s), io=50.0MiB (52.4MB), run=2009-2009msec 00:32:15.905 WRITE: bw=24.9MiB/s (26.1MB/s), 24.9MiB/s-24.9MiB/s (26.1MB/s-26.1MB/s), io=50.0MiB (52.4MB), run=2009-2009msec 00:32:15.905 ----------------------------------------------------- 00:32:15.905 Suppressions used: 00:32:15.905 count bytes template 00:32:15.905 1 57 /usr/src/fio/parse.c 00:32:15.905 1 8 libtcmalloc_minimal.so 00:32:15.905 ----------------------------------------------------- 00:32:15.905 00:32:15.905 16:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:15.905 16:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:15.905 16:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:15.905 16:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:15.905 16:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:15.905 16:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:15.905 16:40:16 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:32:15.905 16:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:15.905 16:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:15.905 16:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:15.905 16:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:32:15.905 16:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:15.905 16:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:15.905 16:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:15.905 16:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:32:15.905 16:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:15.905 16:40:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:16.163 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:32:16.163 fio-3.35 00:32:16.163 Starting 1 thread 00:32:18.692 00:32:18.692 test: (groupid=0, jobs=1): err= 0: pid=3275889: Sun Sep 29 16:40:19 2024 00:32:18.692 read: IOPS=5657, BW=88.4MiB/s (92.7MB/s)(177MiB/2007msec) 00:32:18.692 slat (usec): min=3, max=104, avg= 5.19, stdev= 2.06 00:32:18.692 clat (usec): min=4094, max=24575, 
avg=12664.28, stdev=3062.78 00:32:18.692 lat (usec): min=4099, max=24579, avg=12669.47, stdev=3062.91 00:32:18.692 clat percentiles (usec): 00:32:18.692 | 1.00th=[ 6783], 5.00th=[ 8029], 10.00th=[ 8979], 20.00th=[10159], 00:32:18.692 | 30.00th=[10945], 40.00th=[11600], 50.00th=[12387], 60.00th=[13173], 00:32:18.692 | 70.00th=[14091], 80.00th=[15008], 90.00th=[16712], 95.00th=[18220], 00:32:18.692 | 99.00th=[20841], 99.50th=[21890], 99.90th=[23987], 99.95th=[24249], 00:32:18.692 | 99.99th=[24511] 00:32:18.692 bw ( KiB/s): min=41472, max=57440, per=53.79%, avg=48696.00, stdev=6701.15, samples=4 00:32:18.692 iops : min= 2592, max= 3590, avg=3043.50, stdev=418.82, samples=4 00:32:18.692 write: IOPS=3461, BW=54.1MiB/s (56.7MB/s)(99.2MiB/1834msec); 0 zone resets 00:32:18.692 slat (usec): min=33, max=211, avg=36.71, stdev= 6.94 00:32:18.692 clat (usec): min=8038, max=30341, avg=17089.14, stdev=2953.05 00:32:18.692 lat (usec): min=8072, max=30376, avg=17125.85, stdev=2953.13 00:32:18.692 clat percentiles (usec): 00:32:18.692 | 1.00th=[10683], 5.00th=[12387], 10.00th=[13173], 20.00th=[14615], 00:32:18.692 | 30.00th=[15401], 40.00th=[16188], 50.00th=[17171], 60.00th=[17957], 00:32:18.692 | 70.00th=[18744], 80.00th=[19530], 90.00th=[20841], 95.00th=[21890], 00:32:18.692 | 99.00th=[23725], 99.50th=[25035], 99.90th=[28705], 99.95th=[28967], 00:32:18.692 | 99.99th=[30278] 00:32:18.692 bw ( KiB/s): min=41184, max=62240, per=91.70%, avg=50784.00, stdev=8673.99, samples=4 00:32:18.692 iops : min= 2574, max= 3890, avg=3174.00, stdev=542.12, samples=4 00:32:18.692 lat (msec) : 10=11.87%, 20=81.00%, 50=7.13% 00:32:18.692 cpu : usr=74.89%, sys=23.02%, ctx=34, majf=0, minf=2064 00:32:18.692 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:32:18.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:18.692 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:18.692 issued rwts: total=11355,6348,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:32:18.692 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:18.692 00:32:18.692 Run status group 0 (all jobs): 00:32:18.692 READ: bw=88.4MiB/s (92.7MB/s), 88.4MiB/s-88.4MiB/s (92.7MB/s-92.7MB/s), io=177MiB (186MB), run=2007-2007msec 00:32:18.692 WRITE: bw=54.1MiB/s (56.7MB/s), 54.1MiB/s-54.1MiB/s (56.7MB/s-56.7MB/s), io=99.2MiB (104MB), run=1834-1834msec 00:32:18.949 ----------------------------------------------------- 00:32:18.949 Suppressions used: 00:32:18.949 count bytes template 00:32:18.949 1 57 /usr/src/fio/parse.c 00:32:18.949 1080 103680 /usr/src/fio/iolog.c 00:32:18.949 1 8 libtcmalloc_minimal.so 00:32:18.949 ----------------------------------------------------- 00:32:18.949 00:32:18.949 16:40:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:19.206 16:40:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:32:19.206 16:40:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:32:19.206 16:40:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:32:19.206 16:40:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # bdfs=() 00:32:19.206 16:40:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # local bdfs 00:32:19.206 16:40:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:19.206 16:40:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:19.206 16:40:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:32:19.206 16:40:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 
00:32:19.206 16:40:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:88:00.0 00:32:19.206 16:40:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:32:22.485 Nvme0n1 00:32:22.485 16:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:32:25.759 16:40:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=1e209417-c443-4d82-bf22-baad8ab56d6a 00:32:25.759 16:40:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 1e209417-c443-4d82-bf22-baad8ab56d6a 00:32:25.759 16:40:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=1e209417-c443-4d82-bf22-baad8ab56d6a 00:32:25.759 16:40:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:32:25.759 16:40:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:32:25.759 16:40:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:32:25.759 16:40:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:25.759 16:40:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:32:25.759 { 00:32:25.759 "uuid": "1e209417-c443-4d82-bf22-baad8ab56d6a", 00:32:25.759 "name": "lvs_0", 00:32:25.759 "base_bdev": "Nvme0n1", 00:32:25.759 "total_data_clusters": 930, 00:32:25.759 "free_clusters": 930, 00:32:25.759 "block_size": 512, 00:32:25.759 "cluster_size": 1073741824 00:32:25.759 } 00:32:25.759 ]' 00:32:25.759 16:40:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | 
select(.uuid=="1e209417-c443-4d82-bf22-baad8ab56d6a") .free_clusters' 00:32:25.759 16:40:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:32:25.760 16:40:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="1e209417-c443-4d82-bf22-baad8ab56d6a") .cluster_size' 00:32:25.760 16:40:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:32:25.760 16:40:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:32:25.760 16:40:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:32:25.760 952320 00:32:25.760 16:40:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:32:26.017 f38b3ffe-48fc-4847-abac-90651bf1d04a 00:32:26.017 16:40:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:32:26.274 16:40:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:32:26.531 16:40:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:26.788 16:40:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:26.788 16:40:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:26.788 16:40:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:26.788 16:40:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:26.788 16:40:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:26.788 16:40:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:26.788 16:40:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:32:26.788 16:40:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:26.788 16:40:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:27.046 16:40:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:27.046 16:40:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:32:27.046 16:40:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:27.046 16:40:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:27.046 16:40:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:27.046 16:40:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:32:27.046 16:40:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 
00:32:27.046 16:40:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:27.046 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:27.046 fio-3.35 00:32:27.046 Starting 1 thread 00:32:29.571 00:32:29.571 test: (groupid=0, jobs=1): err= 0: pid=3277282: Sun Sep 29 16:40:30 2024 00:32:29.571 read: IOPS=4359, BW=17.0MiB/s (17.9MB/s)(34.3MiB/2012msec) 00:32:29.571 slat (usec): min=3, max=169, avg= 3.83, stdev= 2.55 00:32:29.571 clat (usec): min=1412, max=173041, avg=15921.80, stdev=13270.39 00:32:29.571 lat (usec): min=1416, max=173090, avg=15925.63, stdev=13270.74 00:32:29.571 clat percentiles (msec): 00:32:29.571 | 1.00th=[ 12], 5.00th=[ 13], 10.00th=[ 14], 20.00th=[ 14], 00:32:29.571 | 30.00th=[ 15], 40.00th=[ 15], 50.00th=[ 15], 60.00th=[ 16], 00:32:29.571 | 70.00th=[ 16], 80.00th=[ 16], 90.00th=[ 17], 95.00th=[ 18], 00:32:29.571 | 99.00th=[ 22], 99.50th=[ 157], 99.90th=[ 174], 99.95th=[ 174], 00:32:29.571 | 99.99th=[ 174] 00:32:29.571 bw ( KiB/s): min=12272, max=19120, per=99.75%, avg=17394.00, stdev=3414.69, samples=4 00:32:29.571 iops : min= 3068, max= 4780, avg=4348.50, stdev=853.67, samples=4 00:32:29.571 write: IOPS=4353, BW=17.0MiB/s (17.8MB/s)(34.2MiB/2012msec); 0 zone resets 00:32:29.571 slat (usec): min=3, max=120, avg= 3.91, stdev= 1.95 00:32:29.571 clat (usec): min=457, max=170439, avg=13179.42, stdev=12487.30 00:32:29.571 lat (usec): min=461, max=170462, avg=13183.33, stdev=12487.62 00:32:29.571 clat percentiles (msec): 00:32:29.571 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 12], 00:32:29.571 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 13], 00:32:29.571 | 70.00th=[ 13], 80.00th=[ 14], 90.00th=[ 14], 95.00th=[ 14], 00:32:29.571 | 99.00th=[ 19], 99.50th=[ 159], 99.90th=[ 
171], 99.95th=[ 171], 00:32:29.571 | 99.99th=[ 171] 00:32:29.571 bw ( KiB/s): min=12976, max=19128, per=99.99%, avg=17412.00, stdev=2963.04, samples=4 00:32:29.571 iops : min= 3244, max= 4782, avg=4353.00, stdev=740.76, samples=4 00:32:29.571 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:32:29.571 lat (msec) : 2=0.03%, 4=0.09%, 10=1.69%, 20=97.24%, 50=0.21% 00:32:29.571 lat (msec) : 250=0.73% 00:32:29.571 cpu : usr=67.58%, sys=30.73%, ctx=71, majf=0, minf=1540 00:32:29.571 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:32:29.571 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:29.571 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:29.571 issued rwts: total=8771,8759,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:29.571 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:29.571 00:32:29.571 Run status group 0 (all jobs): 00:32:29.571 READ: bw=17.0MiB/s (17.9MB/s), 17.0MiB/s-17.0MiB/s (17.9MB/s-17.9MB/s), io=34.3MiB (35.9MB), run=2012-2012msec 00:32:29.571 WRITE: bw=17.0MiB/s (17.8MB/s), 17.0MiB/s-17.0MiB/s (17.8MB/s-17.8MB/s), io=34.2MiB (35.9MB), run=2012-2012msec 00:32:29.828 ----------------------------------------------------- 00:32:29.828 Suppressions used: 00:32:29.828 count bytes template 00:32:29.828 1 58 /usr/src/fio/parse.c 00:32:29.828 1 8 libtcmalloc_minimal.so 00:32:29.828 ----------------------------------------------------- 00:32:29.828 00:32:29.828 16:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:30.392 16:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:32:31.326 16:40:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # 
ls_nested_guid=3afc997c-e294-400c-b5f3-c640d6810cc8 00:32:31.326 16:40:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 3afc997c-e294-400c-b5f3-c640d6810cc8 00:32:31.326 16:40:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=3afc997c-e294-400c-b5f3-c640d6810cc8 00:32:31.326 16:40:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:32:31.326 16:40:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:32:31.326 16:40:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:32:31.326 16:40:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:31.583 16:40:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:32:31.583 { 00:32:31.583 "uuid": "1e209417-c443-4d82-bf22-baad8ab56d6a", 00:32:31.583 "name": "lvs_0", 00:32:31.583 "base_bdev": "Nvme0n1", 00:32:31.583 "total_data_clusters": 930, 00:32:31.583 "free_clusters": 0, 00:32:31.583 "block_size": 512, 00:32:31.583 "cluster_size": 1073741824 00:32:31.583 }, 00:32:31.583 { 00:32:31.583 "uuid": "3afc997c-e294-400c-b5f3-c640d6810cc8", 00:32:31.583 "name": "lvs_n_0", 00:32:31.583 "base_bdev": "f38b3ffe-48fc-4847-abac-90651bf1d04a", 00:32:31.583 "total_data_clusters": 237847, 00:32:31.583 "free_clusters": 237847, 00:32:31.583 "block_size": 512, 00:32:31.583 "cluster_size": 4194304 00:32:31.583 } 00:32:31.583 ]' 00:32:31.583 16:40:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="3afc997c-e294-400c-b5f3-c640d6810cc8") .free_clusters' 00:32:31.583 16:40:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:32:31.583 16:40:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | 
select(.uuid=="3afc997c-e294-400c-b5f3-c640d6810cc8") .cluster_size' 00:32:31.840 16:40:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:32:31.840 16:40:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:32:31.840 16:40:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:32:31.840 951388 00:32:31.840 16:40:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:32:32.773 8f347c2e-15fc-4015-ab1a-9d58771dbb38 00:32:32.773 16:40:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:32:33.030 16:40:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:32:33.350 16:40:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:32:33.648 16:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:33.648 16:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:33.648 16:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local 
fio_dir=/usr/src/fio 00:32:33.648 16:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:33.648 16:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:33.648 16:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:33.648 16:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:32:33.648 16:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:33.648 16:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:33.648 16:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:33.648 16:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:32:33.648 16:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:33.648 16:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:33.648 16:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:33.648 16:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:32:33.648 16:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:33.648 16:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 
00:32:33.936 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:33.936 fio-3.35 00:32:33.936 Starting 1 thread 00:32:36.466 00:32:36.466 test: (groupid=0, jobs=1): err= 0: pid=3278136: Sun Sep 29 16:40:36 2024 00:32:36.466 read: IOPS=4305, BW=16.8MiB/s (17.6MB/s)(33.8MiB/2009msec) 00:32:36.466 slat (usec): min=3, max=324, avg= 3.94, stdev= 4.10 00:32:36.466 clat (usec): min=5867, max=26801, avg=16206.69, stdev=1519.31 00:32:36.466 lat (usec): min=5873, max=26805, avg=16210.63, stdev=1519.19 00:32:36.466 clat percentiles (usec): 00:32:36.466 | 1.00th=[12780], 5.00th=[13960], 10.00th=[14484], 20.00th=[15008], 00:32:36.466 | 30.00th=[15401], 40.00th=[15795], 50.00th=[16188], 60.00th=[16581], 00:32:36.466 | 70.00th=[16909], 80.00th=[17433], 90.00th=[17957], 95.00th=[18482], 00:32:36.467 | 99.00th=[19792], 99.50th=[20055], 99.90th=[25297], 99.95th=[25560], 00:32:36.467 | 99.99th=[26870] 00:32:36.467 bw ( KiB/s): min=16360, max=17560, per=99.54%, avg=17144.00, stdev=534.63, samples=4 00:32:36.467 iops : min= 4090, max= 4390, avg=4286.00, stdev=133.66, samples=4 00:32:36.467 write: IOPS=4306, BW=16.8MiB/s (17.6MB/s)(33.8MiB/2009msec); 0 zone resets 00:32:36.467 slat (usec): min=3, max=122, avg= 3.97, stdev= 1.97 00:32:36.467 clat (usec): min=2741, max=23261, avg=13379.45, stdev=1255.91 00:32:36.467 lat (usec): min=2749, max=23265, avg=13383.42, stdev=1255.84 00:32:36.467 clat percentiles (usec): 00:32:36.467 | 1.00th=[10159], 5.00th=[11469], 10.00th=[11863], 20.00th=[12387], 00:32:36.467 | 30.00th=[12780], 40.00th=[13173], 50.00th=[13435], 60.00th=[13698], 00:32:36.467 | 70.00th=[13960], 80.00th=[14353], 90.00th=[14877], 95.00th=[15270], 00:32:36.467 | 99.00th=[16188], 99.50th=[16581], 99.90th=[19530], 99.95th=[21103], 00:32:36.467 | 99.99th=[23200] 00:32:36.467 bw ( KiB/s): min=16936, max=17336, per=99.88%, avg=17204.00, stdev=183.07, samples=4 00:32:36.467 iops : min= 4234, max= 4334, avg=4301.00, stdev=45.77, 
samples=4 00:32:36.467 lat (msec) : 4=0.01%, 10=0.47%, 20=99.18%, 50=0.34% 00:32:36.467 cpu : usr=63.70%, sys=34.51%, ctx=64, majf=0, minf=1541 00:32:36.467 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:32:36.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.467 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:36.467 issued rwts: total=8650,8651,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:36.467 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:36.467 00:32:36.467 Run status group 0 (all jobs): 00:32:36.467 READ: bw=16.8MiB/s (17.6MB/s), 16.8MiB/s-16.8MiB/s (17.6MB/s-17.6MB/s), io=33.8MiB (35.4MB), run=2009-2009msec 00:32:36.467 WRITE: bw=16.8MiB/s (17.6MB/s), 16.8MiB/s-16.8MiB/s (17.6MB/s-17.6MB/s), io=33.8MiB (35.4MB), run=2009-2009msec 00:32:36.724 ----------------------------------------------------- 00:32:36.724 Suppressions used: 00:32:36.724 count bytes template 00:32:36.724 1 58 /usr/src/fio/parse.c 00:32:36.724 1 8 libtcmalloc_minimal.so 00:32:36.724 ----------------------------------------------------- 00:32:36.724 00:32:36.724 16:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:32:36.982 16:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:32:36.982 16:40:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:32:42.246 16:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:42.246 16:40:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:32:44.773 16:40:44 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:44.773 16:40:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:32:46.670 16:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:46.670 16:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:32:46.670 16:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:32:46.670 16:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:32:46.670 16:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:32:46.670 16:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:46.670 16:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:32:46.670 16:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:46.670 16:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:46.670 rmmod nvme_tcp 00:32:46.670 rmmod nvme_fabrics 00:32:46.928 rmmod nvme_keyring 00:32:46.928 16:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:46.928 16:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:32:46.928 16:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:32:46.928 16:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@513 -- # '[' -n 3275067 ']' 00:32:46.928 16:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # killprocess 3275067 00:32:46.928 16:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 3275067 ']' 00:32:46.928 16:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@954 -- # kill -0 3275067 00:32:46.928 16:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:32:46.928 16:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:46.928 16:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3275067 00:32:46.928 16:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:46.928 16:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:46.928 16:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3275067' 00:32:46.928 killing process with pid 3275067 00:32:46.928 16:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 3275067 00:32:46.928 16:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 3275067 00:32:48.301 16:40:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:32:48.301 16:40:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:32:48.301 16:40:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:32:48.301 16:40:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:32:48.301 16:40:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-save 00:32:48.301 16:40:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:32:48.301 16:40:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-restore 00:32:48.301 16:40:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:48.301 16:40:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:48.301 16:40:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:48.301 16:40:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:48.301 16:40:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:50.832 00:32:50.832 real 0m42.511s 00:32:50.832 user 2m40.546s 00:32:50.832 sys 0m8.626s 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.832 ************************************ 00:32:50.832 END TEST nvmf_fio_host 00:32:50.832 ************************************ 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.832 ************************************ 00:32:50.832 START TEST nvmf_failover 00:32:50.832 ************************************ 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:50.832 * Looking for test storage... 
00:32:50.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:50.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.832 --rc genhtml_branch_coverage=1 00:32:50.832 --rc genhtml_function_coverage=1 00:32:50.832 --rc genhtml_legend=1 00:32:50.832 --rc geninfo_all_blocks=1 00:32:50.832 --rc geninfo_unexecuted_blocks=1 00:32:50.832 00:32:50.832 ' 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- 
# LCOV_OPTS=' 00:32:50.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.832 --rc genhtml_branch_coverage=1 00:32:50.832 --rc genhtml_function_coverage=1 00:32:50.832 --rc genhtml_legend=1 00:32:50.832 --rc geninfo_all_blocks=1 00:32:50.832 --rc geninfo_unexecuted_blocks=1 00:32:50.832 00:32:50.832 ' 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:50.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.832 --rc genhtml_branch_coverage=1 00:32:50.832 --rc genhtml_function_coverage=1 00:32:50.832 --rc genhtml_legend=1 00:32:50.832 --rc geninfo_all_blocks=1 00:32:50.832 --rc geninfo_unexecuted_blocks=1 00:32:50.832 00:32:50.832 ' 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:50.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.832 --rc genhtml_branch_coverage=1 00:32:50.832 --rc genhtml_function_coverage=1 00:32:50.832 --rc genhtml_legend=1 00:32:50.832 --rc geninfo_all_blocks=1 00:32:50.832 --rc geninfo_unexecuted_blocks=1 00:32:50.832 00:32:50.832 ' 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:50.832 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 
-- # export PATH 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:50.833 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # prepare_net_devs 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@434 -- # local -g is_hw=no 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # remove_spdk_ns 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:32:50.833 16:40:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:52.734 16:40:52 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:52.734 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:52.734 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:52.734 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:52.734 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # is_hw=yes 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:52.734 16:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:52.734 16:40:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:52.734 16:40:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:52.734 16:40:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:52.734 16:40:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:52.734 16:40:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:52.735 16:40:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:52.735 16:40:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:52.735 16:40:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:52.735 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:52.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:32:52.735 00:32:52.735 --- 10.0.0.2 ping statistics --- 00:32:52.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:52.735 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:32:52.735 16:40:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:52.735 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:52.735 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:32:52.735 00:32:52.735 --- 10.0.0.1 ping statistics --- 00:32:52.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:52.735 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:32:52.735 16:40:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:52.735 16:40:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # return 0 00:32:52.735 16:40:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:32:52.735 16:40:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:52.735 16:40:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:32:52.735 16:40:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:32:52.735 16:40:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:52.735 16:40:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:32:52.735 16:40:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:32:52.735 16:40:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:32:52.735 16:40:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:32:52.735 16:40:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:52.735 16:40:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:52.735 16:40:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # nvmfpid=3281647 00:32:52.735 16:40:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:52.735 16:40:53 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@506 -- # waitforlisten 3281647 00:32:52.735 16:40:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3281647 ']' 00:32:52.735 16:40:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:52.735 16:40:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:52.735 16:40:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:52.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:52.735 16:40:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:52.735 16:40:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:52.735 [2024-09-29 16:40:53.221219] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:32:52.735 [2024-09-29 16:40:53.221388] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:52.993 [2024-09-29 16:40:53.360394] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:53.251 [2024-09-29 16:40:53.586309] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:53.251 [2024-09-29 16:40:53.586388] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:53.251 [2024-09-29 16:40:53.586411] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:53.251 [2024-09-29 16:40:53.586431] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:32:53.251 [2024-09-29 16:40:53.586447] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:53.251 [2024-09-29 16:40:53.586565] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:32:53.251 [2024-09-29 16:40:53.586595] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:53.251 [2024-09-29 16:40:53.586605] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:32:53.817 16:40:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:53.817 16:40:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:32:53.817 16:40:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:32:53.817 16:40:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:53.817 16:40:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:53.817 16:40:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:53.817 16:40:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:54.075 [2024-09-29 16:40:54.469997] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:54.075 16:40:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:54.333 Malloc0 00:32:54.333 16:40:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:54.591 16:40:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:54.849 16:40:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:55.110 [2024-09-29 16:40:55.640488] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:55.110 16:40:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:55.369 [2024-09-29 16:40:55.909259] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:55.369 16:40:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:55.626 [2024-09-29 16:40:56.182254] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:55.883 16:40:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3282063 00:32:55.883 16:40:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:32:55.883 16:40:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:55.883 16:40:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3282063 /var/tmp/bdevperf.sock 00:32:55.883 16:40:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 
-- # '[' -z 3282063 ']' 00:32:55.883 16:40:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:55.883 16:40:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:55.883 16:40:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:55.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:55.883 16:40:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:55.883 16:40:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:56.817 16:40:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:56.817 16:40:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:32:56.817 16:40:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:57.076 NVMe0n1 00:32:57.076 16:40:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:57.641 00:32:57.641 16:40:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3282325 00:32:57.641 16:40:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:57.641 16:40:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:32:59.016 16:40:59 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:59.016 [2024-09-29 16:40:59.401125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:59.016 [2024-09-29 16:40:59.401225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:59.016 [2024-09-29 16:40:59.401257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:59.016 [2024-09-29 16:40:59.401276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:59.016 [2024-09-29 16:40:59.401293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:59.016 [2024-09-29 16:40:59.401310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:59.016 [2024-09-29 16:40:59.401326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:59.016 [2024-09-29 16:40:59.401343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:59.016 [2024-09-29 16:40:59.401359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:32:59.016 16:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:33:02.298 16:41:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:02.556 00:33:02.556 16:41:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:02.814 [2024-09-29 16:41:03.217271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:02.814 16:41:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:33:06.095 16:41:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:06.095 [2024-09-29 16:41:06.487818] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:06.095 16:41:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:33:07.029 16:41:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:07.288 [2024-09-29 16:41:07.774473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:07.288 [2024-09-29 16:41:07.774558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:07.288 [2024-09-29 16:41:07.774579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:07.288 [2024-09-29 16:41:07.774596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:07.288 [2024-09-29 
16:41:07.774614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set
[... message repeated 26 more times between 16:41:07.774630 and 16:41:07.775113 ...]
00:33:07.288 16:41:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3282325
00:33:13.849 {
00:33:13.849   "results": [
00:33:13.849     {
00:33:13.849       "job": "NVMe0n1",
00:33:13.849       "core_mask": "0x1",
00:33:13.849       "workload": "verify",
00:33:13.849       "status": "finished",
00:33:13.849       "verify_range": {
00:33:13.849         "start": 0,
00:33:13.849         "length": 16384
00:33:13.849       },
00:33:13.849       "queue_depth": 128,
00:33:13.849       "io_size": 4096,
00:33:13.849       "runtime": 15.048818,
00:33:13.849       "iops": 6030.506847780337,
00:33:13.849       "mibps": 23.55666737414194,
00:33:13.849       "io_failed": 6668,
00:33:13.849       "io_timeout": 0,
00:33:13.849       "avg_latency_us": 19684.98825392915,
00:33:13.849       "min_latency_us": 807.0637037037037,
00:33:13.849       "max_latency_us": 41554.67851851852
00:33:13.849     }
00:33:13.849   ],
00:33:13.849   "core_count": 1
00:33:13.849 }
00:33:13.849 16:41:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3282063
00:33:13.849 16:41:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3282063 ']'
00:33:13.849 16:41:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3282063
00:33:13.849 16:41:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:33:13.849 16:41:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:33:13.849 16:41:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3282063
00:33:13.849 16:41:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:33:13.849 16:41:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:33:13.849 16:41:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3282063'
00:33:13.849 killing process with pid 3282063
00:33:13.849 16:41:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3282063
00:33:13.849 16:41:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3282063
00:33:13.849 16:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:33:13.849 [2024-09-29 16:40:56.287873] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:33:13.849 [2024-09-29 16:40:56.288044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3282063 ]
00:33:13.849 [2024-09-29 16:40:56.414553] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:13.849 [2024-09-29 16:40:56.648044] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:33:13.849 Running I/O for 15 seconds...
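The bdevperf results block above reports `iops`, `mibps`, `io_size`, and `runtime` together, and these figures are internally consistent: MiB/s is simply IOPS × io_size / 2^20. A minimal sketch checking that identity against the logged values (plain Python, not part of the SPDK test suite; the helper name `mib_per_sec` is made up for illustration):

```python
def mib_per_sec(iops: float, io_size_bytes: int) -> float:
    """Convert an IOPS figure at a fixed I/O size into MiB/s."""
    return iops * io_size_bytes / (1024 * 1024)

# Figures taken from the bdevperf "results" block in the log above.
iops = 6030.506847780337
io_size = 4096        # bytes per I/O
runtime = 15.048818   # seconds

print(mib_per_sec(iops, io_size))  # ≈ 23.5567, matching the logged "mibps"
# Nominal I/O count implied by the average rate over the run:
print(round(iops * runtime))
```

Only the MiB/s identity is asserted here; how bdevperf accounts the failed I/O (`io_failed: 6668`) into these averages is not derivable from the summary alone.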
00:33:13.849 5773.00 IOPS, 22.55 MiB/s
00:33:13.849 [2024-09-29 16:40:59.402792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:53144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:13.849 [2024-09-29 16:40:59.402852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 31 further READ commands (lba:53152 through lba:53392, len:8), each aborted with SQ DELETION (00/08), logged between 16:40:59.402894 and 16:40:59.404335 ...]
00:33:13.850 [2024-09-29 16:40:59.404357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:53416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:13.850 [2024-09-29 16:40:59.404376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 57 further WRITE commands (lba:53424 through lba:53872, len:8), each aborted with SQ DELETION (00/08), logged between 16:40:59.404397 and 16:40:59.406967 ...]
00:33:13.851 [2024-09-29 16:40:59.407005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:53880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:13.851 [2024-09-29 16:40:59.407031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:33:13.851 [2024-09-29 16:40:59.407054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:53888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.851 [2024-09-29 16:40:59.407074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.851 [2024-09-29 16:40:59.407097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:53896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.851 [2024-09-29 16:40:59.407117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.851 [2024-09-29 16:40:59.407140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:53904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.851 [2024-09-29 16:40:59.407160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.851 [2024-09-29 16:40:59.407183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:53912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.851 [2024-09-29 16:40:59.407203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.851 [2024-09-29 16:40:59.407225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:53920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.851 [2024-09-29 16:40:59.407245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.851 [2024-09-29 16:40:59.407267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:53928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.851 [2024-09-29 16:40:59.407288] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.851 [2024-09-29 16:40:59.407310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:53936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.851 [2024-09-29 16:40:59.407329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.851 [2024-09-29 16:40:59.407351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:53944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.851 [2024-09-29 16:40:59.407370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.851 [2024-09-29 16:40:59.407392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:53952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.851 [2024-09-29 16:40:59.407416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.851 [2024-09-29 16:40:59.407439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:53960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.851 [2024-09-29 16:40:59.407459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.851 [2024-09-29 16:40:59.407481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:53968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.851 [2024-09-29 16:40:59.407501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.851 [2024-09-29 16:40:59.407523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 
lba:53976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.851 [2024-09-29 16:40:59.407544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.852 [2024-09-29 16:40:59.407566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:53984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.852 [2024-09-29 16:40:59.407585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.852 [2024-09-29 16:40:59.407608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:53992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.852 [2024-09-29 16:40:59.407628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.852 [2024-09-29 16:40:59.407651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:54000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.852 [2024-09-29 16:40:59.407695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.852 [2024-09-29 16:40:59.407720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:54008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.852 [2024-09-29 16:40:59.407746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.852 [2024-09-29 16:40:59.407770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:54016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.852 [2024-09-29 16:40:59.407791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.852 [2024-09-29 
16:40:59.407814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:54024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.852 [2024-09-29 16:40:59.407835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.852 [2024-09-29 16:40:59.407857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.852 [2024-09-29 16:40:59.407878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.852 [2024-09-29 16:40:59.407901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:54040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.852 [2024-09-29 16:40:59.407922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.852 [2024-09-29 16:40:59.407955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:54048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.852 [2024-09-29 16:40:59.407992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.852 [2024-09-29 16:40:59.408049] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:13.852 [2024-09-29 16:40:59.408075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54056 len:8 PRP1 0x0 PRP2 0x0 00:33:13.852 [2024-09-29 16:40:59.408096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.852 [2024-09-29 16:40:59.408130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:13.852 [2024-09-29 16:40:59.408150] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:13.852 [2024-09-29 16:40:59.408168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54064 len:8 PRP1 0x0 PRP2 0x0 00:33:13.852 [2024-09-29 16:40:59.408188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.852 [2024-09-29 16:40:59.408207] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:13.852 [2024-09-29 16:40:59.408224] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:13.852 [2024-09-29 16:40:59.408240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54072 len:8 PRP1 0x0 PRP2 0x0 00:33:13.852 [2024-09-29 16:40:59.408259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.852 [2024-09-29 16:40:59.408278] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:13.852 [2024-09-29 16:40:59.408294] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:13.852 [2024-09-29 16:40:59.408311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54080 len:8 PRP1 0x0 PRP2 0x0 00:33:13.852 [2024-09-29 16:40:59.408329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.852 [2024-09-29 16:40:59.408348] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:13.852 [2024-09-29 16:40:59.408364] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:13.852 [2024-09-29 16:40:59.408380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54088 len:8 PRP1 0x0 PRP2 0x0 
00:33:13.852 [2024-09-29 16:40:59.408399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.852 [2024-09-29 16:40:59.408417] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:13.852 [2024-09-29 16:40:59.408433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:13.852 [2024-09-29 16:40:59.408455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54096 len:8 PRP1 0x0 PRP2 0x0 00:33:13.852 [2024-09-29 16:40:59.408474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.852 [2024-09-29 16:40:59.408492] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:13.852 [2024-09-29 16:40:59.408508] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:13.852 [2024-09-29 16:40:59.408525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54104 len:8 PRP1 0x0 PRP2 0x0 00:33:13.852 [2024-09-29 16:40:59.408544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.852 [2024-09-29 16:40:59.408562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:13.852 [2024-09-29 16:40:59.408579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:13.852 [2024-09-29 16:40:59.408594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54112 len:8 PRP1 0x0 PRP2 0x0 00:33:13.852 [2024-09-29 16:40:59.408612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.852 [2024-09-29 16:40:59.408636] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:13.852 [2024-09-29 16:40:59.408667] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:13.852 [2024-09-29 16:40:59.408696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54120 len:8 PRP1 0x0 PRP2 0x0 00:33:13.852 [2024-09-29 16:40:59.408716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.852 [2024-09-29 16:40:59.408736] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:13.852 [2024-09-29 16:40:59.408754] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:13.852 [2024-09-29 16:40:59.408770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54128 len:8 PRP1 0x0 PRP2 0x0 00:33:13.852 [2024-09-29 16:40:59.408789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.852 [2024-09-29 16:40:59.408808] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:13.852 [2024-09-29 16:40:59.408824] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:13.852 [2024-09-29 16:40:59.408842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54136 len:8 PRP1 0x0 PRP2 0x0 00:33:13.852 [2024-09-29 16:40:59.408863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.852 [2024-09-29 16:40:59.408884] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:13.852 [2024-09-29 16:40:59.408901] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:13.852 [2024-09-29 16:40:59.408919] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54144 len:8 PRP1 0x0 PRP2 0x0 00:33:13.852 [2024-09-29 16:40:59.408938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.852 [2024-09-29 16:40:59.408958] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:13.852 [2024-09-29 16:40:59.408975] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:13.852 [2024-09-29 16:40:59.408992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54152 len:8 PRP1 0x0 PRP2 0x0 00:33:13.852 [2024-09-29 16:40:59.409010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.852 [2024-09-29 16:40:59.409044] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:13.852 [2024-09-29 16:40:59.409062] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:13.852 [2024-09-29 16:40:59.409079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54160 len:8 PRP1 0x0 PRP2 0x0 00:33:13.852 [2024-09-29 16:40:59.409110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.852 [2024-09-29 16:40:59.409131] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:13.852 [2024-09-29 16:40:59.409148] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:13.852 [2024-09-29 16:40:59.409166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53400 len:8 PRP1 0x0 PRP2 0x0 00:33:13.852 [2024-09-29 16:40:59.409189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.852 [2024-09-29 16:40:59.409208] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:13.852 [2024-09-29 16:40:59.409225] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:13.852 [2024-09-29 16:40:59.409245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53408 len:8 PRP1 0x0 PRP2 0x0 00:33:13.852 [2024-09-29 16:40:59.409265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.852 [2024-09-29 16:40:59.409525] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f2f00 was disconnected and freed. reset controller. 00:33:13.852 [2024-09-29 16:40:59.409553] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:33:13.852 [2024-09-29 16:40:59.409619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:13.852 [2024-09-29 16:40:59.409646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.852 [2024-09-29 16:40:59.409685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:13.852 [2024-09-29 16:40:59.409709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.852 [2024-09-29 16:40:59.409730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:13.852 [2024-09-29 16:40:59.409751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.852 [2024-09-29 
16:40:59.409772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:13.853 [2024-09-29 16:40:59.409792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.853 [2024-09-29 16:40:59.409812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:13.853 [2024-09-29 16:40:59.409903] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor 00:33:13.853 [2024-09-29 16:40:59.413821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:13.853 [2024-09-29 16:40:59.452118] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:33:13.853 5796.00 IOPS, 22.64 MiB/s 5913.67 IOPS, 23.10 MiB/s 6038.50 IOPS, 23.59 MiB/s [2024-09-29 16:41:03.218171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:119080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.853 [2024-09-29 16:41:03.218234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.853 [2024-09-29 16:41:03.218291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:119088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.853 [2024-09-29 16:41:03.218315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.853 [2024-09-29 16:41:03.218339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:119096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.853 [2024-09-29 16:41:03.218377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.853 [2024-09-29 16:41:03.218402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:119104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.853 [2024-09-29 16:41:03.218423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.853 [2024-09-29 16:41:03.218446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.853 [2024-09-29 16:41:03.218467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.853 [2024-09-29 16:41:03.218500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:119120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.853 [2024-09-29 16:41:03.218521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.853 [2024-09-29 16:41:03.218544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:119128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.853 [2024-09-29 16:41:03.218565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.853 [2024-09-29 16:41:03.218587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:119136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.853 [2024-09-29 16:41:03.218607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.853 [2024-09-29 16:41:03.218629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:119144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.853 
[2024-09-29 16:41:03.218650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.853 [2024-09-29 16:41:03.218699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:119152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.853 [2024-09-29 16:41:03.218736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.853 [2024-09-29 16:41:03.218761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:119160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.853 [2024-09-29 16:41:03.218782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.853 [2024-09-29 16:41:03.218805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:119168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.853 [2024-09-29 16:41:03.218827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.853 [2024-09-29 16:41:03.218850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:119176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.853 [2024-09-29 16:41:03.218871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.853 [2024-09-29 16:41:03.218894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:119184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.853 [2024-09-29 16:41:03.218915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.853 [2024-09-29 16:41:03.218939] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:119192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.853 [2024-09-29 16:41:03.218960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.853 [2024-09-29 16:41:03.219000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:119200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.853 [2024-09-29 16:41:03.219021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.853 [2024-09-29 16:41:03.219044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:119208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.853 [2024-09-29 16:41:03.219064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.853 [2024-09-29 16:41:03.219087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:119216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.853 [2024-09-29 16:41:03.219116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.853 [2024-09-29 16:41:03.219140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:119224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.853 [2024-09-29 16:41:03.219160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.853 [2024-09-29 16:41:03.219183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:119232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.853 [2024-09-29 16:41:03.219204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.853 [2024-09-29 16:41:03.219226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:119240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.853 [2024-09-29 16:41:03.219246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.853 [2024-09-29 16:41:03.219268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:119248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.853 [2024-09-29 16:41:03.219289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.853 [2024-09-29 16:41:03.219312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:119256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.853 [2024-09-29 16:41:03.219332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.853 [2024-09-29 16:41:03.219354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:119264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.853 [2024-09-29 16:41:03.219374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.853 [2024-09-29 16:41:03.219397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:119272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.853 [2024-09-29 16:41:03.219418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.853 [2024-09-29 16:41:03.219440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:119280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:13.853 [2024-09-29 16:41:03.219460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.853 [2024-09-29 16:41:03.219483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:119288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.853 [2024-09-29 16:41:03.219503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.853 [2024-09-29 16:41:03.219527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:119296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.853 [2024-09-29 16:41:03.219548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.853 [2024-09-29 16:41:03.219570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:119304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.853 [2024-09-29 16:41:03.219591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.853 [2024-09-29 16:41:03.219613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:119312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.853 [2024-09-29 16:41:03.219634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.854 [2024-09-29 16:41:03.219661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:119320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.854 [2024-09-29 16:41:03.219717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.854 [2024-09-29 16:41:03.219745] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:119328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.854 [2024-09-29 16:41:03.219766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.854 [2024-09-29 16:41:03.219789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:119336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.854 [2024-09-29 16:41:03.219809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.854 [2024-09-29 16:41:03.219832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:119344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.854 [2024-09-29 16:41:03.219854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.854 [2024-09-29 16:41:03.219876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:119352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.854 [2024-09-29 16:41:03.219897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.854 [2024-09-29 16:41:03.219920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:119360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.854 [2024-09-29 16:41:03.219942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.854 [2024-09-29 16:41:03.219965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:119368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.854 [2024-09-29 16:41:03.220001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.854 [2024-09-29 16:41:03.220025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:119376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.854 [2024-09-29 16:41:03.220045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.854 [2024-09-29 16:41:03.220067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:119384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.854 [2024-09-29 16:41:03.220088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.854 [2024-09-29 16:41:03.220111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:119408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.854 [2024-09-29 16:41:03.220132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.854 [2024-09-29 16:41:03.220155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:119416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.854 [2024-09-29 16:41:03.220175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.854 [2024-09-29 16:41:03.220198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:119424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.854 [2024-09-29 16:41:03.220220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.854 [2024-09-29 16:41:03.220242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:119432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:13.854 [2024-09-29 16:41:03.220262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.854 [2024-09-29 16:41:03.220290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:119440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.854 [2024-09-29 16:41:03.220323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.854 [2024-09-29 16:41:03.220355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:119448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.854 [2024-09-29 16:41:03.220377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.854 [2024-09-29 16:41:03.220399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:119456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.854 [2024-09-29 16:41:03.220419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.854 [2024-09-29 16:41:03.220442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:119464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.854 [2024-09-29 16:41:03.220463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.854 [2024-09-29 16:41:03.220485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:119472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.854 [2024-09-29 16:41:03.220506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.854 [2024-09-29 16:41:03.220528] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:119480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.854 [2024-09-29 16:41:03.220548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.854 [2024-09-29 16:41:03.220571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:119488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.854 [2024-09-29 16:41:03.220607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.854 [2024-09-29 16:41:03.220633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:119496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.854 [2024-09-29 16:41:03.220654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.854 [2024-09-29 16:41:03.220701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:119504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.854 [2024-09-29 16:41:03.220726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.854 [2024-09-29 16:41:03.220752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:119512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.854 [2024-09-29 16:41:03.220773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.854 [2024-09-29 16:41:03.220796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:119520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.854 [2024-09-29 16:41:03.220817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.854 [2024-09-29 16:41:03.220841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:119528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.854 [2024-09-29 16:41:03.220862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.854 [2024-09-29 16:41:03.220885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:119536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.854 [2024-09-29 16:41:03.220916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.854 [2024-09-29 16:41:03.220941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:119544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.854 [2024-09-29 16:41:03.220962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.854 [2024-09-29 16:41:03.221000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:119552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.854 [2024-09-29 16:41:03.221022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.854 [2024-09-29 16:41:03.221045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:119560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.854 [2024-09-29 16:41:03.221067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.854 [2024-09-29 16:41:03.221090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:119568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.854 
[2024-09-29 16:41:03.221112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.854 [2024-09-29 16:41:03.221134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:119576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.854 [2024-09-29 16:41:03.221155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.854 [2024-09-29 16:41:03.221177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:119584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.854 [2024-09-29 16:41:03.221198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.854 [2024-09-29 16:41:03.221220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:119592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.854 [2024-09-29 16:41:03.221241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.854 [2024-09-29 16:41:03.221265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:119600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.854 [2024-09-29 16:41:03.221287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.854 [2024-09-29 16:41:03.221310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:119608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.854 [2024-09-29 16:41:03.221331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.854 [2024-09-29 16:41:03.221354] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:119616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.854 [2024-09-29 16:41:03.221375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.854 [2024-09-29 16:41:03.221397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:119624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.854 [2024-09-29 16:41:03.221418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.854 [2024-09-29 16:41:03.221442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:119632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.854 [2024-09-29 16:41:03.221463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.854 [2024-09-29 16:41:03.221490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:119640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.854 [2024-09-29 16:41:03.221512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.854 [2024-09-29 16:41:03.221535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:119648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.854 [2024-09-29 16:41:03.221556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.855 [2024-09-29 16:41:03.221579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:119656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.855 [2024-09-29 16:41:03.221600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.855 [2024-09-29 16:41:03.221623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:119664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.855 [2024-09-29 16:41:03.221643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.855 [2024-09-29 16:41:03.221667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:119672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.855 [2024-09-29 16:41:03.221713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.855 [2024-09-29 16:41:03.221738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:119680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.855 [2024-09-29 16:41:03.221760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.855 [2024-09-29 16:41:03.221783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:119688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.855 [2024-09-29 16:41:03.221807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.855 [2024-09-29 16:41:03.221831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:119696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.855 [2024-09-29 16:41:03.221853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.855 [2024-09-29 16:41:03.221876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:119704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.855 
[2024-09-29 16:41:03.221898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.855 [2024-09-29 16:41:03.221921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:119712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.855 [2024-09-29 16:41:03.221943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.855 [2024-09-29 16:41:03.221967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:119720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.855 [2024-09-29 16:41:03.222003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.855 [2024-09-29 16:41:03.222027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:119728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.855 [2024-09-29 16:41:03.222048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.855 [2024-09-29 16:41:03.222071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:119736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.855 [2024-09-29 16:41:03.222096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.855 [2024-09-29 16:41:03.222120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:119744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.855 [2024-09-29 16:41:03.222141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.855 [2024-09-29 16:41:03.222163] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:119752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.855 [2024-09-29 16:41:03.222184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.855 [2024-09-29 16:41:03.222208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:119760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.855 [2024-09-29 16:41:03.222229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.855 [2024-09-29 16:41:03.222252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:119768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.855 [2024-09-29 16:41:03.222274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.855 [2024-09-29 16:41:03.222296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:119776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.855 [2024-09-29 16:41:03.222318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.855 [2024-09-29 16:41:03.222340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:119784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.855 [2024-09-29 16:41:03.222361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.855 [2024-09-29 16:41:03.222384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:119792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.855 [2024-09-29 16:41:03.222404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.855 [2024-09-29 16:41:03.222428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:119800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.855 [2024-09-29 16:41:03.222449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.855 [2024-09-29 16:41:03.222472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:119808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.855 [2024-09-29 16:41:03.222493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.855 [2024-09-29 16:41:03.222516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:119816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.855 [2024-09-29 16:41:03.222537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.855 [2024-09-29 16:41:03.222561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:119824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.855 [2024-09-29 16:41:03.222582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.855 [2024-09-29 16:41:03.222605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:119832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.855 [2024-09-29 16:41:03.222625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.855 [2024-09-29 16:41:03.222668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:119840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.855 
[2024-09-29 16:41:03.222700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.855 [2024-09-29 16:41:03.222726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:119848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.855 [2024-09-29 16:41:03.222748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.855 [2024-09-29 16:41:03.222773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:119856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.855 [2024-09-29 16:41:03.222794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.855 [2024-09-29 16:41:03.222818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:119864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.855 [2024-09-29 16:41:03.222840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.855 [2024-09-29 16:41:03.222863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:119872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.855 [2024-09-29 16:41:03.222884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.855 [2024-09-29 16:41:03.222908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:119880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.855 [2024-09-29 16:41:03.222929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.855 [2024-09-29 16:41:03.222953] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:119888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.855 [2024-09-29 16:41:03.222991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.855 [2024-09-29 16:41:03.223015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:119896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.855 [2024-09-29 16:41:03.223036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.855 [2024-09-29 16:41:03.223060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:119904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.855 [2024-09-29 16:41:03.223081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.855 [2024-09-29 16:41:03.223104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:119912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.855 [2024-09-29 16:41:03.223124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.855 [2024-09-29 16:41:03.223147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:119920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.855 [2024-09-29 16:41:03.223169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.855 [2024-09-29 16:41:03.223191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:119928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.855 [2024-09-29 16:41:03.223212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.855 [2024-09-29 16:41:03.223235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:119936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.855 [2024-09-29 16:41:03.223256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.855 [2024-09-29 16:41:03.223283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:119944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.855 [2024-09-29 16:41:03.223305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.855 [2024-09-29 16:41:03.223329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:119952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.855 [2024-09-29 16:41:03.223350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.855 [2024-09-29 16:41:03.223374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:119960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.855 [2024-09-29 16:41:03.223394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.855 [2024-09-29 16:41:03.223417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:119968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.855 [2024-09-29 16:41:03.223439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.855 [2024-09-29 16:41:03.223461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:119976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.856 
[2024-09-29 16:41:03.223482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.856 [2024-09-29 16:41:03.223504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:119984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.856 [2024-09-29 16:41:03.223526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.856 [2024-09-29 16:41:03.223549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:119992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.856 [2024-09-29 16:41:03.223569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.856 [2024-09-29 16:41:03.223593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:120000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.856 [2024-09-29 16:41:03.223626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.856 [2024-09-29 16:41:03.223651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:120008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.856 [2024-09-29 16:41:03.223679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.856 [2024-09-29 16:41:03.223723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:120016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.856 [2024-09-29 16:41:03.223745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.856 [2024-09-29 16:41:03.223770] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:120024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.856 [2024-09-29 16:41:03.223792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.856 [2024-09-29 16:41:03.223815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:120032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.856 [2024-09-29 16:41:03.223836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.856 [2024-09-29 16:41:03.223860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:120040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.856 [2024-09-29 16:41:03.223886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.856 [2024-09-29 16:41:03.223942] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:13.856 [2024-09-29 16:41:03.223968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120048 len:8 PRP1 0x0 PRP2 0x0 00:33:13.856 [2024-09-29 16:41:03.224006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.856 [2024-09-29 16:41:03.224034] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:13.856 [2024-09-29 16:41:03.224053] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:13.856 [2024-09-29 16:41:03.224073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120056 len:8 PRP1 0x0 PRP2 0x0 00:33:13.856 [2024-09-29 16:41:03.224093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.856 [2024-09-29 16:41:03.224113] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:13.856 [2024-09-29 16:41:03.224130] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:13.856 [2024-09-29 16:41:03.224148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120064 len:8 PRP1 0x0 PRP2 0x0 00:33:13.856 [2024-09-29 16:41:03.224167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.856 [2024-09-29 16:41:03.224186] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:13.856 [2024-09-29 16:41:03.224202] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:13.856 [2024-09-29 16:41:03.224221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120072 len:8 PRP1 0x0 PRP2 0x0 00:33:13.856 [2024-09-29 16:41:03.224239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.856 [2024-09-29 16:41:03.224258] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:13.856 [2024-09-29 16:41:03.224275] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:13.856 [2024-09-29 16:41:03.224292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120080 len:8 PRP1 0x0 PRP2 0x0 00:33:13.856 [2024-09-29 16:41:03.224310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.856 [2024-09-29 16:41:03.224329] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:13.856 [2024-09-29 16:41:03.224346] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:13.856 [2024-09-29 16:41:03.224363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120088 len:8 PRP1 0x0 PRP2 0x0 00:33:13.856 [2024-09-29 16:41:03.224382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.856 [2024-09-29 16:41:03.224401] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:13.856 [2024-09-29 16:41:03.224418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:13.856 [2024-09-29 16:41:03.224435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120096 len:8 PRP1 0x0 PRP2 0x0 00:33:13.856 [2024-09-29 16:41:03.224454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.856 [2024-09-29 16:41:03.224472] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:13.856 [2024-09-29 16:41:03.224489] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:13.856 [2024-09-29 16:41:03.224510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119392 len:8 PRP1 0x0 PRP2 0x0 00:33:13.856 [2024-09-29 16:41:03.224529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.856 [2024-09-29 16:41:03.224548] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:13.856 [2024-09-29 16:41:03.224564] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:13.856 [2024-09-29 16:41:03.224581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119400 len:8 PRP1 0x0 PRP2 0x0 
00:33:13.856 [2024-09-29 16:41:03.224599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.856 [2024-09-29 16:41:03.224897] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f3180 was disconnected and freed. reset controller. 00:33:13.856 [2024-09-29 16:41:03.224930] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:33:13.856 [2024-09-29 16:41:03.224985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:13.856 [2024-09-29 16:41:03.225012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.856 [2024-09-29 16:41:03.225037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:13.856 [2024-09-29 16:41:03.225057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.856 [2024-09-29 16:41:03.225082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:13.856 [2024-09-29 16:41:03.225103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.856 [2024-09-29 16:41:03.225125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:13.856 [2024-09-29 16:41:03.225145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.856 [2024-09-29 16:41:03.225166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed 
state. 00:33:13.856 [2024-09-29 16:41:03.225254] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor 00:33:13.856 [2024-09-29 16:41:03.229146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:13.856 6030.00 IOPS, 23.55 MiB/s [2024-09-29 16:41:03.318891] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:33:13.856 5995.17 IOPS, 23.42 MiB/s 5999.14 IOPS, 23.43 MiB/s 6005.00 IOPS, 23.46 MiB/s 6019.56 IOPS, 23.51 MiB/s [2024-09-29 16:41:07.775397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:95072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.856 [2024-09-29 16:41:07.775453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.856 [2024-09-29 16:41:07.775509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:95080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.856 [2024-09-29 16:41:07.775533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.856 [2024-09-29 16:41:07.775557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:95088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.856 [2024-09-29 16:41:07.775577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.856 [2024-09-29 16:41:07.775606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:95096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.856 [2024-09-29 16:41:07.775627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.856 
[2024-09-29 16:41:07.775649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:95104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.856 [2024-09-29 16:41:07.775694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.856 [2024-09-29 16:41:07.775720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:95112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.856 [2024-09-29 16:41:07.775740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.856 [2024-09-29 16:41:07.775762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:95120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.856 [2024-09-29 16:41:07.775783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.856 [2024-09-29 16:41:07.775805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:95128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.856 [2024-09-29 16:41:07.775826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.856 [2024-09-29 16:41:07.775847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:95136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.856 [2024-09-29 16:41:07.775867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.857 [2024-09-29 16:41:07.775888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:95144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.857 [2024-09-29 16:41:07.775909] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.857 [2024-09-29 16:41:07.775931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:95152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.857 [2024-09-29 16:41:07.775965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.857 [2024-09-29 16:41:07.775989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:95160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.857 [2024-09-29 16:41:07.776008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.857 [2024-09-29 16:41:07.776030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:95168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.857 [2024-09-29 16:41:07.776050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.857 [2024-09-29 16:41:07.776071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:95176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.857 [2024-09-29 16:41:07.776090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.857 [2024-09-29 16:41:07.776111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:95184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.857 [2024-09-29 16:41:07.776130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.857 [2024-09-29 16:41:07.776152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:95312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.857 [2024-09-29 16:41:07.776172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.857 [2024-09-29 16:41:07.776199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:95320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.857 [2024-09-29 16:41:07.776220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.857 [2024-09-29 16:41:07.776241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.857 [2024-09-29 16:41:07.776261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.857 [2024-09-29 16:41:07.776284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:95336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.857 [2024-09-29 16:41:07.776304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.857 [2024-09-29 16:41:07.776325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:95344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.857 [2024-09-29 16:41:07.776345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.857 [2024-09-29 16:41:07.776367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:95352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.857 [2024-09-29 16:41:07.776387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.857 [2024-09-29 
16:41:07.776408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:95360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.857 [2024-09-29 16:41:07.776427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.857 [2024-09-29 16:41:07.776448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:95368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.857 [2024-09-29 16:41:07.776475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.857 [2024-09-29 16:41:07.776497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:95376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.857 [2024-09-29 16:41:07.776516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.857 [2024-09-29 16:41:07.776537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:95384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.857 [2024-09-29 16:41:07.776556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.857 [2024-09-29 16:41:07.776578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:95392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.857 [2024-09-29 16:41:07.776598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.857 [2024-09-29 16:41:07.776619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:95400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.857 [2024-09-29 16:41:07.776638] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.857 [2024-09-29 16:41:07.776660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:95408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.857 [2024-09-29 16:41:07.776703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.857 [2024-09-29 16:41:07.776728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:95416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.857 [2024-09-29 16:41:07.776753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.857 [2024-09-29 16:41:07.776777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:95424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.857 [2024-09-29 16:41:07.776797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.857 [2024-09-29 16:41:07.776819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:95432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.857 [2024-09-29 16:41:07.776840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.857 [2024-09-29 16:41:07.776862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.857 [2024-09-29 16:41:07.776882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.857 [2024-09-29 16:41:07.776905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:95448 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:33:13.857 [2024-09-29 16:41:07.776926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.857 [2024-09-29 16:41:07.776959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:95456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.857 [2024-09-29 16:41:07.776979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.857 [2024-09-29 16:41:07.777033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:95464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.857 [2024-09-29 16:41:07.777055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.857 [2024-09-29 16:41:07.777076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.857 [2024-09-29 16:41:07.777096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.857 [2024-09-29 16:41:07.777117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:95480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.857 [2024-09-29 16:41:07.777137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.857 [2024-09-29 16:41:07.777159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:95488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.857 [2024-09-29 16:41:07.777178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.857 [2024-09-29 16:41:07.777200] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:95496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.857 [2024-09-29 16:41:07.777221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.857 [2024-09-29 16:41:07.777242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:95504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.857 [2024-09-29 16:41:07.777261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.857 [2024-09-29 16:41:07.777283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:95512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.857 [2024-09-29 16:41:07.777302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.857 [2024-09-29 16:41:07.777328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:95520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.857 [2024-09-29 16:41:07.777348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.857 [2024-09-29 16:41:07.777369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:95528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.857 [2024-09-29 16:41:07.777389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.857 [2024-09-29 16:41:07.777410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.857 [2024-09-29 16:41:07.777430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.857 [2024-09-29 16:41:07.777451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:95544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.857 [2024-09-29 16:41:07.777471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.857 [2024-09-29 16:41:07.777492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:95552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.857 [2024-09-29 16:41:07.777512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.857 [2024-09-29 16:41:07.777550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:95560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.857 [2024-09-29 16:41:07.777571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.857 [2024-09-29 16:41:07.777592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:95568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.857 [2024-09-29 16:41:07.777612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.857 [2024-09-29 16:41:07.777634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:95576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.857 [2024-09-29 16:41:07.777677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.857 [2024-09-29 16:41:07.777704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:95584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.858 
[2024-09-29 16:41:07.777726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.858 [2024-09-29 16:41:07.777749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:95592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.858 [2024-09-29 16:41:07.777770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.858 [2024-09-29 16:41:07.777793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:95600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.858 [2024-09-29 16:41:07.777814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.858 [2024-09-29 16:41:07.777837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:95608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.858 [2024-09-29 16:41:07.777858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.858 [2024-09-29 16:41:07.777881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:95616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.858 [2024-09-29 16:41:07.777906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.858 [2024-09-29 16:41:07.777930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:95624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.858 [2024-09-29 16:41:07.777952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.858 [2024-09-29 16:41:07.777974] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:95632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.858 [2024-09-29 16:41:07.778009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.858 [2024-09-29 16:41:07.778033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:95640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.858 [2024-09-29 16:41:07.778054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.858 [2024-09-29 16:41:07.778076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:95192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.858 [2024-09-29 16:41:07.778096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.858 [2024-09-29 16:41:07.778119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:95200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.858 [2024-09-29 16:41:07.778139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.858 [2024-09-29 16:41:07.778162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:95208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.858 [2024-09-29 16:41:07.778182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.858 [2024-09-29 16:41:07.778204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:95216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.858 [2024-09-29 16:41:07.778224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:33:13.858 [2024-09-29 16:41:07.778246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:95224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.858 [2024-09-29 16:41:07.778267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.858 [2024-09-29 16:41:07.778290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:95232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.858 [2024-09-29 16:41:07.778310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.858 [2024-09-29 16:41:07.778332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:95240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.858 [2024-09-29 16:41:07.778352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.858 [2024-09-29 16:41:07.778375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:95648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.858 [2024-09-29 16:41:07.778396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.858 [2024-09-29 16:41:07.778418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:95656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.858 [2024-09-29 16:41:07.778438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.858 [2024-09-29 16:41:07.778460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:95664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.858 [2024-09-29 16:41:07.778484] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.858 [2024-09-29 16:41:07.778507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:95672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.858 [2024-09-29 16:41:07.778528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.858 [2024-09-29 16:41:07.778550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:95680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.858 [2024-09-29 16:41:07.778570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.858 [2024-09-29 16:41:07.778592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:95688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.858 [2024-09-29 16:41:07.778612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.858 [2024-09-29 16:41:07.778635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:95696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.858 [2024-09-29 16:41:07.778680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.858 [2024-09-29 16:41:07.778708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.858 [2024-09-29 16:41:07.778730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.858 [2024-09-29 16:41:07.778753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 
nsid:1 lba:95712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.858 [2024-09-29 16:41:07.778774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.858 [2024-09-29 16:41:07.778797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:95720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.858 [2024-09-29 16:41:07.778818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.858 [2024-09-29 16:41:07.778841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:95728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.858 [2024-09-29 16:41:07.778862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.858 [2024-09-29 16:41:07.778885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:95736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.858 [2024-09-29 16:41:07.778906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.858 [2024-09-29 16:41:07.778929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:95744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.858 [2024-09-29 16:41:07.778949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.858 [2024-09-29 16:41:07.778987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:95752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.858 [2024-09-29 16:41:07.779008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.858 
[2024-09-29 16:41:07.779030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:95760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.858 [2024-09-29 16:41:07.779051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.858 [2024-09-29 16:41:07.779077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:95768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.858 [2024-09-29 16:41:07.779098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.858 [2024-09-29 16:41:07.779121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:95776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.858 [2024-09-29 16:41:07.779141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.858 [2024-09-29 16:41:07.779164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:95784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.858 [2024-09-29 16:41:07.779184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.858 [2024-09-29 16:41:07.779206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:95792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.858 [2024-09-29 16:41:07.779227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.858 [2024-09-29 16:41:07.779249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:95800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.858 [2024-09-29 16:41:07.779269] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.858 [2024-09-29 16:41:07.779292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:95808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.858 [2024-09-29 16:41:07.779312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.859 [2024-09-29 16:41:07.779335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:95816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.859 [2024-09-29 16:41:07.779355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.859 [2024-09-29 16:41:07.779378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:95824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.859 [2024-09-29 16:41:07.779398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.859 [2024-09-29 16:41:07.779421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:95832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.859 [2024-09-29 16:41:07.779442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.859 [2024-09-29 16:41:07.779464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:95840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.859 [2024-09-29 16:41:07.779484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.859 [2024-09-29 16:41:07.779507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 
lba:95848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.859 [2024-09-29 16:41:07.779528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.859 [2024-09-29 16:41:07.779550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:95856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.859 [2024-09-29 16:41:07.779571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.859 [2024-09-29 16:41:07.779593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:95864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.859 [2024-09-29 16:41:07.779617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.859 [2024-09-29 16:41:07.779641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:95872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.859 [2024-09-29 16:41:07.779767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.859 [2024-09-29 16:41:07.779798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:95880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.859 [2024-09-29 16:41:07.779820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.859 [2024-09-29 16:41:07.779843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:95888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.859 [2024-09-29 16:41:07.779865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.859 
[2024-09-29 16:41:07.779888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:95896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.859 [2024-09-29 16:41:07.779909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.859 [2024-09-29 16:41:07.779933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:95904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.859 [2024-09-29 16:41:07.779954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.859 [2024-09-29 16:41:07.779994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:95912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.859 [2024-09-29 16:41:07.780015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.859 [2024-09-29 16:41:07.780067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:95920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.859 [2024-09-29 16:41:07.780090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.859 [2024-09-29 16:41:07.780113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:95928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.859 [2024-09-29 16:41:07.780134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.859 [2024-09-29 16:41:07.780157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:95936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.859 [2024-09-29 16:41:07.780179] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.859 [2024-09-29 16:41:07.780203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:95944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.859 [2024-09-29 16:41:07.780224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.859 [2024-09-29 16:41:07.780247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:95952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.859 [2024-09-29 16:41:07.780268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.859 [2024-09-29 16:41:07.780291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:95248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.859 [2024-09-29 16:41:07.780312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.859 [2024-09-29 16:41:07.780340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:95256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.859 [2024-09-29 16:41:07.780362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.859 [2024-09-29 16:41:07.780401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:95264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.859 [2024-09-29 16:41:07.780421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.859 [2024-09-29 16:41:07.780444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 
lba:95272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.859 [2024-09-29 16:41:07.780464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.859 [2024-09-29 16:41:07.780487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:95280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.859 [2024-09-29 16:41:07.780507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.859 [2024-09-29 16:41:07.780530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:95288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.859 [2024-09-29 16:41:07.780550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.859 [2024-09-29 16:41:07.780573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:95296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.859 [2024-09-29 16:41:07.780594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.859 [2024-09-29 16:41:07.780616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:95304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.859 [2024-09-29 16:41:07.780636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.859 [2024-09-29 16:41:07.780682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:95960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.859 [2024-09-29 16:41:07.780706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.859 
[2024-09-29 16:41:07.780731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:95968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.859 [2024-09-29 16:41:07.780753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.859 [2024-09-29 16:41:07.780777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:95976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.859 [2024-09-29 16:41:07.780798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.859 [2024-09-29 16:41:07.780821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.859 [2024-09-29 16:41:07.780842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.859 [2024-09-29 16:41:07.780865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:95992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.859 [2024-09-29 16:41:07.780886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.859 [2024-09-29 16:41:07.780909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:96000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.859 [2024-09-29 16:41:07.780930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.859 [2024-09-29 16:41:07.780958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:96008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.859 [2024-09-29 16:41:07.780995] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.859 [2024-09-29 16:41:07.781019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:96016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.859 [2024-09-29 16:41:07.781039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.859 [2024-09-29 16:41:07.781061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:96024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.859 [2024-09-29 16:41:07.781082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.859 [2024-09-29 16:41:07.781105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:96032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.859 [2024-09-29 16:41:07.781125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.859 [2024-09-29 16:41:07.781147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:96040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.859 [2024-09-29 16:41:07.781167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.859 [2024-09-29 16:41:07.781189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.859 [2024-09-29 16:41:07.781209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.859 [2024-09-29 16:41:07.781231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 
lba:96056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.859 [2024-09-29 16:41:07.781252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.859 [2024-09-29 16:41:07.781274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:96064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.859 [2024-09-29 16:41:07.781294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.860 [2024-09-29 16:41:07.781317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:96072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.860 [2024-09-29 16:41:07.781339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.860 [2024-09-29 16:41:07.781362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:96080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.860 [2024-09-29 16:41:07.781383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.860 [2024-09-29 16:41:07.781424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:13.860 [2024-09-29 16:41:07.781447] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:13.860 [2024-09-29 16:41:07.781466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96088 len:8 PRP1 0x0 PRP2 0x0 00:33:13.860 [2024-09-29 16:41:07.781485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.860 [2024-09-29 16:41:07.781774] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f3900 was disconnected 
and freed. reset controller. 00:33:13.860 [2024-09-29 16:41:07.781805] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:33:13.860 [2024-09-29 16:41:07.781862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:13.860 [2024-09-29 16:41:07.781888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.860 [2024-09-29 16:41:07.781912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:13.860 [2024-09-29 16:41:07.781932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.860 [2024-09-29 16:41:07.781953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:13.860 [2024-09-29 16:41:07.781973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.860 [2024-09-29 16:41:07.781993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:13.860 [2024-09-29 16:41:07.782013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.860 [2024-09-29 16:41:07.782032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:33:13.860 [2024-09-29 16:41:07.782094] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor 00:33:13.860 [2024-09-29 16:41:07.785953] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:13.860 [2024-09-29 16:41:07.877438] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:33:13.860 5975.70 IOPS, 23.34 MiB/s 6005.18 IOPS, 23.46 MiB/s 6024.17 IOPS, 23.53 MiB/s 6024.08 IOPS, 23.53 MiB/s 6042.86 IOPS, 23.60 MiB/s 6045.87 IOPS, 23.62 MiB/s 00:33:13.860 Latency(us) 00:33:13.860 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:13.860 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:13.860 Verification LBA range: start 0x0 length 0x4000 00:33:13.860 NVMe0n1 : 15.05 6030.51 23.56 443.09 0.00 19684.99 807.06 41554.68 00:33:13.860 =================================================================================================================== 00:33:13.860 Total : 6030.51 23.56 443.09 0.00 19684.99 807.06 41554.68 00:33:13.860 Received shutdown signal, test time was about 15.000000 seconds 00:33:13.860 00:33:13.860 Latency(us) 00:33:13.860 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:13.860 =================================================================================================================== 00:33:13.860 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:13.860 16:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:33:13.860 16:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:33:13.860 16:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:33:13.860 16:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3284172 00:33:13.860 16:41:14 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:33:13.860 16:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3284172 /var/tmp/bdevperf.sock 00:33:13.860 16:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3284172 ']' 00:33:13.860 16:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:13.860 16:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:13.860 16:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:13.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:13.860 16:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:13.860 16:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:15.234 16:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:15.234 16:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:33:15.234 16:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:15.234 [2024-09-29 16:41:15.680579] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:15.234 16:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 
00:33:15.492 [2024-09-29 16:41:15.957554] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:15.492 16:41:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:16.057 NVMe0n1 00:33:16.057 16:41:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:16.315 00:33:16.315 16:41:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:16.880 00:33:16.880 16:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:16.880 16:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:33:17.138 16:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:17.395 16:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:33:20.672 16:41:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:20.672 16:41:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:33:20.672 16:41:21 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3284971 00:33:20.672 16:41:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:20.672 16:41:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3284971 00:33:22.045 { 00:33:22.045 "results": [ 00:33:22.045 { 00:33:22.045 "job": "NVMe0n1", 00:33:22.045 "core_mask": "0x1", 00:33:22.045 "workload": "verify", 00:33:22.045 "status": "finished", 00:33:22.045 "verify_range": { 00:33:22.045 "start": 0, 00:33:22.045 "length": 16384 00:33:22.045 }, 00:33:22.045 "queue_depth": 128, 00:33:22.045 "io_size": 4096, 00:33:22.045 "runtime": 1.008161, 00:33:22.045 "iops": 6120.054237368833, 00:33:22.045 "mibps": 23.906461864722004, 00:33:22.045 "io_failed": 0, 00:33:22.045 "io_timeout": 0, 00:33:22.045 "avg_latency_us": 20820.13888180563, 00:33:22.045 "min_latency_us": 2111.7155555555555, 00:33:22.045 "max_latency_us": 18738.44148148148 00:33:22.045 } 00:33:22.045 ], 00:33:22.045 "core_count": 1 00:33:22.045 } 00:33:22.045 16:41:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:22.045 [2024-09-29 16:41:14.466542] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:33:22.046 [2024-09-29 16:41:14.466704] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3284172 ] 00:33:22.046 [2024-09-29 16:41:14.598363] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:22.046 [2024-09-29 16:41:14.832678] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:22.046 [2024-09-29 16:41:17.841822] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:33:22.046 [2024-09-29 16:41:17.841953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:22.046 [2024-09-29 16:41:17.841987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.046 [2024-09-29 16:41:17.842017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:22.046 [2024-09-29 16:41:17.842038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.046 [2024-09-29 16:41:17.842060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:22.046 [2024-09-29 16:41:17.842081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.046 [2024-09-29 16:41:17.842102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:22.046 [2024-09-29 16:41:17.842121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.046 [2024-09-29 16:41:17.842142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.046 [2024-09-29 16:41:17.842222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.046 [2024-09-29 16:41:17.842283] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor 00:33:22.046 [2024-09-29 16:41:17.850417] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:33:22.046 Running I/O for 1 seconds... 00:33:22.046 6042.00 IOPS, 23.60 MiB/s 00:33:22.046 Latency(us) 00:33:22.046 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:22.046 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:22.046 Verification LBA range: start 0x0 length 0x4000 00:33:22.046 NVMe0n1 : 1.01 6120.05 23.91 0.00 0.00 20820.14 2111.72 18738.44 00:33:22.046 =================================================================================================================== 00:33:22.046 Total : 6120.05 23.91 0.00 0.00 20820.14 2111.72 18738.44 00:33:22.046 16:41:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:22.046 16:41:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:33:22.046 16:41:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:22.611 16:41:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_controllers 00:33:22.611 16:41:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:33:22.611 16:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:23.176 16:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:33:26.456 16:41:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:26.456 16:41:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:33:26.456 16:41:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3284172 00:33:26.456 16:41:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3284172 ']' 00:33:26.456 16:41:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3284172 00:33:26.456 16:41:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:33:26.456 16:41:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:26.456 16:41:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3284172 00:33:26.456 16:41:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:26.456 16:41:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:26.456 16:41:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3284172' 00:33:26.456 killing process with pid 3284172 00:33:26.456 16:41:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3284172 00:33:26.456 16:41:26 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3284172 00:33:27.392 16:41:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:33:27.392 16:41:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:27.650 16:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:33:27.650 16:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:27.650 16:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:33:27.650 16:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # nvmfcleanup 00:33:27.650 16:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:33:27.650 16:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:27.650 16:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:33:27.650 16:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:27.650 16:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:27.650 rmmod nvme_tcp 00:33:27.650 rmmod nvme_fabrics 00:33:27.650 rmmod nvme_keyring 00:33:27.650 16:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:27.650 16:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:33:27.650 16:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:33:27.650 16:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@513 -- # '[' -n 3281647 ']' 00:33:27.650 16:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # killprocess 3281647 00:33:27.650 16:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' 
-z 3281647 ']' 00:33:27.650 16:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3281647 00:33:27.650 16:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:33:27.650 16:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:27.650 16:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3281647 00:33:27.650 16:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:27.650 16:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:27.650 16:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3281647' 00:33:27.650 killing process with pid 3281647 00:33:27.650 16:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3281647 00:33:27.650 16:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3281647 00:33:29.024 16:41:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:33:29.024 16:41:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:33:29.024 16:41:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:33:29.024 16:41:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:33:29.024 16:41:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-save 00:33:29.024 16:41:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:33:29.024 16:41:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-restore 00:33:29.024 16:41:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:29.024 16:41:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:33:29.024 16:41:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:29.024 16:41:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:29.024 16:41:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:31.593 16:41:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:31.593 00:33:31.593 real 0m40.704s 00:33:31.593 user 2m23.502s 00:33:31.593 sys 0m6.139s 00:33:31.593 16:41:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:31.593 16:41:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:31.593 ************************************ 00:33:31.593 END TEST nvmf_failover 00:33:31.593 ************************************ 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.594 ************************************ 00:33:31.594 START TEST nvmf_host_discovery 00:33:31.594 ************************************ 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:31.594 * Looking for test storage... 
00:33:31.594 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:31.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.594 --rc genhtml_branch_coverage=1 00:33:31.594 --rc genhtml_function_coverage=1 00:33:31.594 --rc 
genhtml_legend=1 00:33:31.594 --rc geninfo_all_blocks=1 00:33:31.594 --rc geninfo_unexecuted_blocks=1 00:33:31.594 00:33:31.594 ' 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:31.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.594 --rc genhtml_branch_coverage=1 00:33:31.594 --rc genhtml_function_coverage=1 00:33:31.594 --rc genhtml_legend=1 00:33:31.594 --rc geninfo_all_blocks=1 00:33:31.594 --rc geninfo_unexecuted_blocks=1 00:33:31.594 00:33:31.594 ' 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:31.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.594 --rc genhtml_branch_coverage=1 00:33:31.594 --rc genhtml_function_coverage=1 00:33:31.594 --rc genhtml_legend=1 00:33:31.594 --rc geninfo_all_blocks=1 00:33:31.594 --rc geninfo_unexecuted_blocks=1 00:33:31.594 00:33:31.594 ' 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:31.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.594 --rc genhtml_branch_coverage=1 00:33:31.594 --rc genhtml_function_coverage=1 00:33:31.594 --rc genhtml_legend=1 00:33:31.594 --rc geninfo_all_blocks=1 00:33:31.594 --rc geninfo_unexecuted_blocks=1 00:33:31.594 00:33:31.594 ' 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:31.594 16:41:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:31.594 16:41:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:31.594 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:31.595 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:31.595 16:41:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:31.595 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:31.595 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:31.595 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:31.595 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:31.595 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:31.595 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:33:31.595 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:33:31.595 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:33:31.595 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:33:31.595 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:33:31.595 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:33:31.595 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:33:31.595 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:33:31.595 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:31.595 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:33:31.595 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:33:31.595 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 
00:33:31.595 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:31.595 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:31.595 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:31.595 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:33:31.595 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:33:31.595 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:33:31.595 16:41:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:33.601 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:33.601 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:33:33.601 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:33.601 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:33.601 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:33.601 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:33.601 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:33.601 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:33:33.602 
16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:33:33.602 16:41:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:33.602 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:33.602 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:33.602 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:33.602 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # is_hw=yes 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:33.602 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:33.602 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:33:33.602 00:33:33.602 --- 10.0.0.2 ping statistics --- 00:33:33.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:33.602 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:33.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:33.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:33:33.602 00:33:33.602 --- 10.0.0.1 ping statistics --- 00:33:33.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:33.602 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # return 0 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:33:33.602 16:41:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:33.602 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:33.603 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # nvmfpid=3287846 00:33:33.603 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:33.603 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # waitforlisten 3287846 00:33:33.603 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 3287846 ']' 00:33:33.603 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:33.603 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:33.603 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:33.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:33.603 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:33.603 16:41:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:33.603 [2024-09-29 16:41:33.916549] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:33:33.603 [2024-09-29 16:41:33.916736] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:33.603 [2024-09-29 16:41:34.053768] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:33.861 [2024-09-29 16:41:34.277632] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:33.861 [2024-09-29 16:41:34.277744] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:33.861 [2024-09-29 16:41:34.277767] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:33.861 [2024-09-29 16:41:34.277787] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:33.861 [2024-09-29 16:41:34.277804] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:33.861 [2024-09-29 16:41:34.277852] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:33:34.428 16:41:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:34.428 16:41:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:33:34.428 16:41:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:33:34.428 16:41:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:34.428 16:41:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:34.428 16:41:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:34.428 16:41:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:34.428 16:41:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.428 16:41:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:34.428 [2024-09-29 16:41:34.888840] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:34.428 16:41:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.428 16:41:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:33:34.428 16:41:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.428 16:41:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:34.428 [2024-09-29 16:41:34.897103] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:34.428 16:41:34 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.428 16:41:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:33:34.428 16:41:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.428 16:41:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:34.428 null0 00:33:34.428 16:41:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.428 16:41:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:33:34.428 16:41:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.428 16:41:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:34.428 null1 00:33:34.428 16:41:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.428 16:41:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:33:34.428 16:41:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.428 16:41:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:34.428 16:41:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.428 16:41:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3287994 00:33:34.428 16:41:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:33:34.428 16:41:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3287994 /tmp/host.sock 00:33:34.428 16:41:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@831 -- # '[' -z 3287994 ']' 00:33:34.428 16:41:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:33:34.428 16:41:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:34.428 16:41:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:34.428 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:34.428 16:41:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:34.428 16:41:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:34.687 [2024-09-29 16:41:35.027030] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:33:34.687 [2024-09-29 16:41:35.027199] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3287994 ] 00:33:34.687 [2024-09-29 16:41:35.172315] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:34.945 [2024-09-29 16:41:35.425082] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:35.512 16:41:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:35.512 16:41:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:33:35.512 16:41:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:35.512 16:41:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:33:35.512 
16:41:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.512 16:41:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:35.512 16:41:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.512 16:41:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:33:35.512 16:41:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.512 16:41:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:35.512 16:41:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.512 16:41:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:33:35.512 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:33:35.512 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:35.512 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.512 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:35.512 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:35.512 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:35.512 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:35.512 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.512 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:33:35.512 16:41:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:33:35.512 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:35.512 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.512 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:35.512 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:35.512 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:35.512 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:35.512 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.771 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:33:35.771 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:33:35.771 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.771 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:35.771 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.771 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:33:35.771 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:35.771 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.771 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:35.771 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:33:35.771 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:35.771 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:35.771 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.771 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:33:35.771 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:33:35.771 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:35.771 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.771 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:35.771 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:35.771 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:35.771 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:35.771 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.771 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:33:35.771 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:33:35.771 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.771 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:35.771 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.771 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:33:35.771 16:41:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:35.772 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:35.772 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.772 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:35.772 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:35.772 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:35.772 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.772 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:33:35.772 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:33:35.772 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:35.772 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.772 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:35.772 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:35.772 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:35.772 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:35.772 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.772 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:33:35.772 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 
4420 00:33:35.772 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.772 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:35.772 [2024-09-29 16:41:36.252971] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:35.772 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.772 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:33:35.772 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:35.772 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.772 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:35.772 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:35.772 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:35.772 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:35.772 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.772 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:33:35.772 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:33:35.772 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:35.772 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.772 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:35.772 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- 
# jq -r '.[].name' 00:33:35.772 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:35.772 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:35.772 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.031 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:33:36.031 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:33:36.031 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:36.031 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:36.031 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:36.031 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:36.031 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:36.031 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:36.031 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:33:36.031 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:36.031 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.031 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:36.031 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:36.031 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.031 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:36.031 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:33:36.031 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:33:36.031 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:36.031 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:33:36.031 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.031 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:36.031 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.031 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:36.031 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:36.031 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:36.031 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:36.031 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:36.031 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:33:36.031 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:33:36.031 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.031 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:36.031 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:36.031 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:36.031 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:36.031 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.031 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:33:36.031 16:41:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:33:36.598 [2024-09-29 16:41:37.051870] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:36.598 [2024-09-29 16:41:37.051935] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:36.598 [2024-09-29 16:41:37.051980] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:36.598 [2024-09-29 16:41:37.138319] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:36.857 [2024-09-29 16:41:37.325026] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:36.857 [2024-09-29 16:41:37.325066] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 
00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 
00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:33:37.116 
16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:37.116 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:37.117 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.117 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:37.117 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:33:37.117 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:33:37.117 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:37.117 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:33:37.117 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.117 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:37.117 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.117 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:37.117 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:37.117 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:37.117 16:41:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:37.117 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:37.117 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:33:37.117 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:37.117 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.117 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:37.117 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:37.117 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:37.117 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:37.117 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.117 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:37.117 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:37.117 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:33:37.117 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:37.117 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:37.117 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:37.117 16:41:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:37.117 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:37.117 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:37.117 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:33:37.117 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:33:37.117 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:37.117 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.117 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:37.117 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:37.376 [2024-09-29 16:41:37.695622] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:37.376 [2024-09-29 16:41:37.696605] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:37.376 [2024-09-29 16:41:37.696689] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ 
\n\v\m\e\0\n\2 ]] 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:33:37.376 [2024-09-29 16:41:37.784629] bdev_nvme.c:7086:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:33:37.376 16:41:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:33:37.635 [2024-09-29 16:41:38.049666] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:37.635 [2024-09-29 16:41:38.049727] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:37.635 [2024-09-29 16:41:38.049753] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:38.571 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:38.571 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:38.571 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:33:38.571 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:38.571 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:38.571 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.571 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:38.571 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:38.571 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:38.571 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.571 16:41:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:33:38.571 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:38.571 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:33:38.571 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:38.571 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:38.571 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:38.571 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:38.571 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:38.571 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:38.571 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:33:38.571 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:38.571 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:38.571 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.571 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:38.571 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.571 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:38.571 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:38.571 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:33:38.571 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:38.571 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:38.571 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.571 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:38.571 [2024-09-29 16:41:38.908295] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:38.571 [2024-09-29 16:41:38.908354] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:38.571 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.571 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:38.571 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:38.571 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local 
max=10 00:33:38.571 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:38.571 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:38.571 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:33:38.571 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:38.572 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.572 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:38.572 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:38.572 [2024-09-29 16:41:38.915358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:38.572 [2024-09-29 16:41:38.915403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:38.572 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:38.572 [2024-09-29 16:41:38.915430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:38.572 [2024-09-29 16:41:38.915459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:38.572 [2024-09-29 16:41:38.915481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:38.572 [2024-09-29 16:41:38.915502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:38.572 [2024-09-29 16:41:38.915525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:38.572 [2024-09-29 16:41:38.915546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:38.572 [2024-09-29 16:41:38.915567] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:38.572 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:38.572 [2024-09-29 16:41:38.925336] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:38.572 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.572 [2024-09-29 16:41:38.935389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:38.572 [2024-09-29 16:41:38.935648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.572 [2024-09-29 16:41:38.935719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:38.572 [2024-09-29 16:41:38.935765] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:38.572 [2024-09-29 16:41:38.935801] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:38.572 [2024-09-29 16:41:38.935833] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:38.572 [2024-09-29 16:41:38.935856] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:38.572 
[2024-09-29 16:41:38.935879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:38.572 [2024-09-29 16:41:38.935913] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:38.572 [2024-09-29 16:41:38.945516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:38.572 [2024-09-29 16:41:38.945737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.572 [2024-09-29 16:41:38.945774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:38.572 [2024-09-29 16:41:38.945797] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:38.572 [2024-09-29 16:41:38.945829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:38.572 [2024-09-29 16:41:38.945859] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:38.572 [2024-09-29 16:41:38.945880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:38.572 [2024-09-29 16:41:38.945899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:38.572 [2024-09-29 16:41:38.945929] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:38.572 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:38.572 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:38.572 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:38.572 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:38.572 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:38.572 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:38.572 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:38.572 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:33:38.572 [2024-09-29 16:41:38.955621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:38.572 [2024-09-29 16:41:38.955915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.572 [2024-09-29 16:41:38.955955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:38.572 [2024-09-29 16:41:38.956002] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:38.572 [2024-09-29 16:41:38.956039] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:38.572 [2024-09-29 16:41:38.956071] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:38.572 [2024-09-29 
16:41:38.956095] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:38.572 [2024-09-29 16:41:38.956125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:38.572 [2024-09-29 16:41:38.956158] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:38.572 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:38.572 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:38.572 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.572 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:38.572 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:38.572 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:38.572 [2024-09-29 16:41:38.965761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:38.572 [2024-09-29 16:41:38.966020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.572 [2024-09-29 16:41:38.966061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:38.572 [2024-09-29 16:41:38.966088] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:38.572 [2024-09-29 16:41:38.966124] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:38.572 [2024-09-29 16:41:38.966176] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:38.572 [2024-09-29 
16:41:38.966204] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:38.572 [2024-09-29 16:41:38.966226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:38.572 [2024-09-29 16:41:38.966259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:38.572 [2024-09-29 16:41:38.975859] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:38.572 [2024-09-29 16:41:38.976074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.572 [2024-09-29 16:41:38.976115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:38.572 [2024-09-29 16:41:38.976147] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:38.572 [2024-09-29 16:41:38.976185] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:38.572 [2024-09-29 16:41:38.976236] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:38.572 [2024-09-29 16:41:38.976263] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:38.572 [2024-09-29 16:41:38.976285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:38.572 [2024-09-29 16:41:38.976318] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:38.572 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.572 [2024-09-29 16:41:38.985966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:38.572 [2024-09-29 16:41:38.986181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.572 [2024-09-29 16:41:38.986222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:38.572 [2024-09-29 16:41:38.986247] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:38.572 [2024-09-29 16:41:38.986284] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:38.572 [2024-09-29 16:41:38.986337] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:38.572 [2024-09-29 16:41:38.986365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:38.572 [2024-09-29 16:41:38.986387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:38.572 [2024-09-29 16:41:38.986420] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:38.572 [2024-09-29 16:41:38.994382] bdev_nvme.c:6949:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:33:38.572 [2024-09-29 16:41:38.994429] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:38.572 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:38.572 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:38.572 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:38.572 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:38.572 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:38.572 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:38.572 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:33:38.573 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:33:38.573 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:38.573 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.573 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:38.573 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:38.573 16:41:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:38.573 16:41:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:33:38.573 16:41:39 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:38.573 
16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:38.573 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:38.830 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.830 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:33:38.830 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:38.830 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:33:38.830 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:33:38.830 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:38.830 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:38.830 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:38.830 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:38.830 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:38.831 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:33:38.831 16:41:39 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:38.831 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:38.831 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.831 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:38.831 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.831 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:33:38.831 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:33:38.831 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:33:38.831 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:38.831 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:38.831 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.831 16:41:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:39.764 [2024-09-29 16:41:40.265883] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:39.764 [2024-09-29 16:41:40.265938] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:39.764 [2024-09-29 16:41:40.266010] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:40.022 [2024-09-29 16:41:40.352298] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:33:40.022 [2024-09-29 16:41:40.460861] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:40.022 [2024-09-29 16:41:40.460939] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:40.022 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.022 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:40.022 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:33:40.022 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:40.022 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:40.022 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:40.022 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:40.022 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:40.022 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:40.022 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.022 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:33:40.022 request: 00:33:40.022 { 00:33:40.022 "name": "nvme", 00:33:40.022 "trtype": "tcp", 00:33:40.022 "traddr": "10.0.0.2", 00:33:40.022 "adrfam": "ipv4", 00:33:40.022 "trsvcid": "8009", 00:33:40.022 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:40.022 "wait_for_attach": true, 00:33:40.022 "method": "bdev_nvme_start_discovery", 00:33:40.022 "req_id": 1 00:33:40.022 } 00:33:40.022 Got JSON-RPC error response 00:33:40.022 response: 00:33:40.022 { 00:33:40.022 "code": -17, 00:33:40.022 "message": "File exists" 00:33:40.022 } 00:33:40.022 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:40.022 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:33:40.022 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:40.022 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:40.022 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:40.022 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:33:40.022 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:40.022 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.022 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:40.022 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:40.022 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:40.022 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:40.022 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.022 16:41:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:33:40.022 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:33:40.022 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:40.022 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.022 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:40.022 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:40.022 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:40.022 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:40.022 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.022 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:40.022 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:40.022 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:33:40.022 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:40.022 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:40.022 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:40.023 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:40.023 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:40.023 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:40.023 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.023 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:40.023 request: 00:33:40.023 { 00:33:40.023 "name": "nvme_second", 00:33:40.023 "trtype": "tcp", 00:33:40.023 "traddr": "10.0.0.2", 00:33:40.023 "adrfam": "ipv4", 00:33:40.023 "trsvcid": "8009", 00:33:40.023 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:40.023 "wait_for_attach": true, 00:33:40.023 "method": "bdev_nvme_start_discovery", 00:33:40.023 "req_id": 1 00:33:40.023 } 00:33:40.023 Got JSON-RPC error response 00:33:40.023 response: 00:33:40.023 { 00:33:40.023 "code": -17, 00:33:40.023 "message": "File exists" 00:33:40.023 } 00:33:40.023 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:40.023 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:33:40.023 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:40.023 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:40.023 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:40.023 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:33:40.023 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:40.023 
16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.023 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:40.023 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:40.023 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:40.023 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:40.023 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.280 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:33:40.280 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:33:40.281 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:40.281 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.281 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:40.281 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:40.281 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:40.281 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:40.281 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.281 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:40.281 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:40.281 16:41:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:33:40.281 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:40.281 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:40.281 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:40.281 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:40.281 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:40.281 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:40.281 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.281 16:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:41.213 [2024-09-29 16:41:41.656613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.213 [2024-09-29 16:41:41.656697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4080 with addr=10.0.0.2, port=8010 00:33:41.213 [2024-09-29 16:41:41.656809] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:41.213 [2024-09-29 16:41:41.656834] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:41.213 [2024-09-29 16:41:41.656856] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:42.145 [2024-09-29 16:41:42.659133] 
posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.145 [2024-09-29 16:41:42.659231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4300 with addr=10.0.0.2, port=8010 00:33:42.145 [2024-09-29 16:41:42.659310] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:42.145 [2024-09-29 16:41:42.659349] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:42.145 [2024-09-29 16:41:42.659370] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:43.516 [2024-09-29 16:41:43.661148] bdev_nvme.c:7205:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:33:43.516 request: 00:33:43.516 { 00:33:43.516 "name": "nvme_second", 00:33:43.516 "trtype": "tcp", 00:33:43.516 "traddr": "10.0.0.2", 00:33:43.516 "adrfam": "ipv4", 00:33:43.516 "trsvcid": "8010", 00:33:43.516 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:43.516 "wait_for_attach": false, 00:33:43.516 "attach_timeout_ms": 3000, 00:33:43.516 "method": "bdev_nvme_start_discovery", 00:33:43.516 "req_id": 1 00:33:43.516 } 00:33:43.516 Got JSON-RPC error response 00:33:43.516 response: 00:33:43.516 { 00:33:43.516 "code": -110, 00:33:43.516 "message": "Connection timed out" 00:33:43.516 } 00:33:43.516 16:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:43.516 16:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:33:43.516 16:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:43.516 16:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:43.516 16:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:43.516 16:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@157 -- # get_discovery_ctrlrs 00:33:43.516 16:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:43.516 16:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:43.516 16:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.516 16:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:43.516 16:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:43.516 16:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:43.516 16:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.516 16:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:33:43.516 16:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:33:43.516 16:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3287994 00:33:43.516 16:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:33:43.516 16:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:33:43.516 16:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:33:43.516 16:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:43.516 16:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:33:43.516 16:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:43.516 16:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:43.516 rmmod nvme_tcp 00:33:43.516 rmmod nvme_fabrics 00:33:43.516 rmmod nvme_keyring 00:33:43.516 16:41:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:43.516 16:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:33:43.516 16:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:33:43.516 16:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@513 -- # '[' -n 3287846 ']' 00:33:43.516 16:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # killprocess 3287846 00:33:43.516 16:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 3287846 ']' 00:33:43.516 16:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 3287846 00:33:43.516 16:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:33:43.516 16:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:43.516 16:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3287846 00:33:43.516 16:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:43.516 16:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:43.516 16:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3287846' 00:33:43.516 killing process with pid 3287846 00:33:43.516 16:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 3287846 00:33:43.516 16:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 3287846 00:33:44.891 16:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:33:44.891 16:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:33:44.891 16:41:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:33:44.891 16:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:33:44.891 16:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-save 00:33:44.891 16:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:33:44.891 16:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-restore 00:33:44.891 16:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:44.891 16:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:44.891 16:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:44.891 16:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:44.891 16:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:46.793 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:46.793 00:33:46.793 real 0m15.580s 00:33:46.793 user 0m23.142s 00:33:46.793 sys 0m3.016s 00:33:46.793 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:46.794 ************************************ 00:33:46.794 END TEST nvmf_host_discovery 00:33:46.794 ************************************ 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:46.794 
16:41:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.794 ************************************ 00:33:46.794 START TEST nvmf_host_multipath_status 00:33:46.794 ************************************ 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:46.794 * Looking for test storage... 00:33:46.794 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lcov --version 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:33:46.794 16:41:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:33:46.794 
16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:46.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.794 --rc genhtml_branch_coverage=1 00:33:46.794 --rc genhtml_function_coverage=1 00:33:46.794 --rc genhtml_legend=1 00:33:46.794 --rc geninfo_all_blocks=1 00:33:46.794 --rc geninfo_unexecuted_blocks=1 00:33:46.794 00:33:46.794 ' 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:46.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.794 --rc genhtml_branch_coverage=1 00:33:46.794 --rc genhtml_function_coverage=1 00:33:46.794 --rc genhtml_legend=1 00:33:46.794 --rc geninfo_all_blocks=1 00:33:46.794 --rc geninfo_unexecuted_blocks=1 00:33:46.794 00:33:46.794 ' 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:46.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.794 --rc genhtml_branch_coverage=1 00:33:46.794 --rc genhtml_function_coverage=1 00:33:46.794 --rc genhtml_legend=1 00:33:46.794 --rc geninfo_all_blocks=1 00:33:46.794 --rc geninfo_unexecuted_blocks=1 00:33:46.794 00:33:46.794 ' 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:46.794 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:33:46.794 --rc genhtml_branch_coverage=1 00:33:46.794 --rc genhtml_function_coverage=1 00:33:46.794 --rc genhtml_legend=1 00:33:46.794 --rc geninfo_all_blocks=1 00:33:46.794 --rc geninfo_unexecuted_blocks=1 00:33:46.794 00:33:46.794 ' 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.794 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.054 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.054 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:33:47.054 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.054 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:33:47.054 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:47.054 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:47.054 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:47.054 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:47.054 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:47.054 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:47.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:47.054 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:47.054 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:47.054 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:47.054 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:33:47.054 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:33:47.054 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:47.054 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:33:47.054 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:47.054 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:47.054 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:33:47.054 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:33:47.054 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:47.054 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # prepare_net_devs 00:33:47.054 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@434 -- # local -g is_hw=no 00:33:47.054 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # remove_spdk_ns 00:33:47.054 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:47.054 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:47.054 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:47.054 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:33:47.054 16:41:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:33:47.054 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:33:47.054 16:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:48.955 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:48.955 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:33:48.955 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:48.955 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:48.955 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:48.955 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:48.955 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:48.955 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:33:48.956 
16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:48.956 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:48.956 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:48.956 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:48.956 16:41:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:48.956 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # is_hw=yes 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:48.956 16:41:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:48.956 16:41:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:48.956 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:48.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:33:48.956 00:33:48.956 --- 10.0.0.2 ping statistics --- 00:33:48.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:48.956 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:48.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:48.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:33:48.956 00:33:48.956 --- 10.0.0.1 ping statistics --- 00:33:48.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:48.956 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # return 0 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:48.956 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:33:48.957 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:33:48.957 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:33:48.957 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:33:48.957 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:48.957 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:48.957 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # nvmfpid=3291281 00:33:48.957 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:33:48.957 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # waitforlisten 3291281 00:33:48.957 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 3291281 ']' 00:33:48.957 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:48.957 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:48.957 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:48.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:48.957 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:48.957 16:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:49.215 [2024-09-29 16:41:49.599522] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:33:49.215 [2024-09-29 16:41:49.599678] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:49.215 [2024-09-29 16:41:49.743515] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:49.473 [2024-09-29 16:41:50.001924] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:49.473 [2024-09-29 16:41:50.002001] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:33:49.473 [2024-09-29 16:41:50.002025] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:49.473 [2024-09-29 16:41:50.002047] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:49.473 [2024-09-29 16:41:50.002065] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:49.473 [2024-09-29 16:41:50.002153] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:49.473 [2024-09-29 16:41:50.002162] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:33:50.407 16:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:50.407 16:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:33:50.407 16:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:33:50.407 16:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:50.407 16:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:50.407 16:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:50.407 16:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3291281 00:33:50.407 16:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:50.407 [2024-09-29 16:41:50.905042] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:50.407 16:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:33:50.973 Malloc0 00:33:50.973 16:41:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:33:51.232 16:41:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:51.490 16:41:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:51.748 [2024-09-29 16:41:52.068077] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:51.748 16:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:52.006 [2024-09-29 16:41:52.336832] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:52.006 16:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3291585 00:33:52.006 16:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:33:52.006 16:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:52.006 16:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3291585 /var/tmp/bdevperf.sock 00:33:52.006 16:41:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 3291585 ']' 00:33:52.006 16:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:52.006 16:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:52.006 16:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:52.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:52.006 16:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:52.006 16:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:52.940 16:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:52.940 16:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:33:52.940 16:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:33:53.198 16:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:33:53.764 Nvme0n1 00:33:53.764 16:41:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
-x multipath -l -1 -o 10 00:33:54.021 Nvme0n1 00:33:54.021 16:41:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:33:54.021 16:41:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:33:56.550 16:41:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:33:56.550 16:41:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:56.550 16:41:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:56.808 16:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:33:57.741 16:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:33:57.741 16:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:57.741 16:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:57.741 16:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:57.999 16:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e 
]] 00:33:57.999 16:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:57.999 16:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:57.999 16:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:58.258 16:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:58.258 16:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:58.258 16:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:58.258 16:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:58.516 16:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:58.516 16:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:58.516 16:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:58.516 16:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:58.774 16:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:33:58.774 16:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:58.774 16:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:58.774 16:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:59.031 16:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:59.031 16:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:59.031 16:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:59.031 16:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:59.289 16:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:59.289 16:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:33:59.289 16:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:59.578 16:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:59.861 16:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:34:01.233 16:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:34:01.233 16:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:01.233 16:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:01.233 16:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:01.233 16:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:01.233 16:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:01.233 16:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:01.233 16:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:01.491 16:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:01.491 16:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:01.491 16:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:34:01.491 16:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:01.749 16:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:01.749 16:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:01.749 16:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:01.749 16:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:02.006 16:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:02.006 16:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:02.006 16:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:02.006 16:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:02.263 16:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:02.263 16:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:02.263 16:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:02.263 16:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:02.828 16:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:02.828 16:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:34:02.828 16:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:02.828 16:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:03.085 16:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:34:04.453 16:42:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:34:04.453 16:42:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:04.453 16:42:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:04.453 16:42:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:04.453 16:42:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
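The `port_status` checks above repeatedly call `bdev_nvme_get_io_paths` over the bdevperf RPC socket and filter the JSON with jq by `trsvcid`. A minimal Python sketch of that filter logic follows; the sample JSON is an illustrative stand-in whose field shapes are inferred from the jq expressions in this log, not captured RPC output:

```python
import json

# Illustrative stand-in for the output of
#   rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
# Field shapes are assumptions based on the jq filters in the log.
sample = json.loads("""
{
  "poll_groups": [
    {
      "io_paths": [
        {"transport": {"trsvcid": "4420"}, "current": true,
         "connected": true, "accessible": true},
        {"transport": {"trsvcid": "4421"}, "current": false,
         "connected": true, "accessible": true}
      ]
    }
  ]
}
""")

def port_status(data, trsvcid, field):
    """Mirror of the jq filter used by multipath_status.sh:
    .poll_groups[].io_paths[] | select(.transport.trsvcid=="PORT").FIELD
    """
    return [
        path[field]
        for group in data["poll_groups"]
        for path in group["io_paths"]
        if path["transport"]["trsvcid"] == trsvcid
    ]

print(port_status(sample, "4420", "current"))  # [True]
print(port_status(sample, "4421", "current"))  # [False]
```

The test script then string-compares the jq output against the expected `true`/`false`, which is what the `[[ true == \t\r\u\e ]]` trace lines show.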
00:34:04.453 16:42:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:04.453 16:42:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:04.453 16:42:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:04.710 16:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:04.710 16:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:04.710 16:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:04.710 16:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:04.967 16:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:04.967 16:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:04.967 16:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:04.967 16:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:05.224 16:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e 
]] 00:34:05.224 16:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:05.224 16:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:05.224 16:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:05.481 16:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:05.481 16:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:05.481 16:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:05.481 16:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:06.046 16:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:06.046 16:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:34:06.046 16:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:06.046 16:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 
-t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:06.305 16:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:34:07.677 16:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:34:07.677 16:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:07.677 16:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:07.677 16:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:07.677 16:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:07.677 16:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:07.677 16:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:07.678 16:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:07.936 16:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:07.936 16:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:07.936 16:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
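The test walks the two listeners (ports 4420 and 4421) through pairs of ANA states and asserts the resulting path flags with `check_status`. A sketch of the expectation matrix, with the tuples transcribed from the `check_status` calls in this log (active_passive multipath policy, i.e. before the later switch to `active_active`):

```python
# (ana_4420, ana_4421) -> (current_4420, current_4421,
#                          connected_4420, connected_4421,
#                          accessible_4420, accessible_4421)
# Tuples transcribed from the check_status calls in this log.
EXPECTED = {
    ("optimized", "optimized"):         (True,  False, True, True, True,  True),
    ("non_optimized", "optimized"):     (False, True,  True, True, True,  True),
    ("non_optimized", "non_optimized"): (True,  False, True, True, True,  True),
    ("non_optimized", "inaccessible"):  (True,  False, True, True, True,  False),
    ("inaccessible", "inaccessible"):   (False, False, True, True, False, False),
    ("inaccessible", "optimized"):      (False, True,  True, True, False, True),
}

# Invariants visible in the log: both paths stay TCP-connected no matter
# the ANA state, and a path can only be "current" if it is accessible.
for states, (cur0, cur1, con0, con1, acc0, acc1) in EXPECTED.items():
    assert con0 and con1, states
    assert not cur0 or acc0, states
    assert not cur1 or acc1, states
```

In other words, ANA state changes affect which path is selected (`current`) and usable (`accessible`), while the transport connection itself stays up, which is why `connected` is `true` for both ports throughout this run.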
00:34:07.936 16:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:08.194 16:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:08.194 16:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:08.194 16:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:08.194 16:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:08.759 16:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:08.759 16:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:08.759 16:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:08.759 16:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:09.017 16:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:09.017 16:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:09.017 16:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:34:09.017 16:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:09.275 16:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:09.275 16:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:34:09.275 16:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:09.533 16:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:09.791 16:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:34:10.724 16:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:34:10.724 16:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:10.724 16:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:10.725 16:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:10.982 16:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:10.982 16:42:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:10.982 16:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:10.982 16:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:11.241 16:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:11.241 16:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:11.241 16:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:11.241 16:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:11.499 16:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:11.499 16:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:11.499 16:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:11.499 16:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:11.756 16:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:11.756 
16:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:11.756 16:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:11.756 16:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:12.014 16:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:12.014 16:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:12.014 16:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:12.014 16:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:12.272 16:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:12.272 16:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:34:12.272 16:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:12.530 16:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4421 -n optimized 00:34:12.787 16:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:34:14.161 16:42:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:34:14.161 16:42:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:14.161 16:42:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:14.161 16:42:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:14.161 16:42:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:14.161 16:42:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:14.161 16:42:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:14.161 16:42:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:14.419 16:42:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:14.419 16:42:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:14.419 16:42:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:14.419 16:42:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:14.677 16:42:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:14.677 16:42:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:14.677 16:42:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:14.677 16:42:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:14.935 16:42:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:14.935 16:42:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:14.935 16:42:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:14.935 16:42:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:15.193 16:42:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:15.193 16:42:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:15.193 16:42:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:15.193 
16:42:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:15.452 16:42:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:15.452 16:42:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:34:16.018 16:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:34:16.018 16:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:34:16.018 16:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:16.276 16:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:34:17.651 16:42:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:34:17.651 16:42:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:17.651 16:42:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:17.651 16:42:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4420").current' 00:34:17.651 16:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:17.651 16:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:17.651 16:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:17.651 16:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:17.909 16:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:17.909 16:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:17.909 16:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:17.909 16:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:18.167 16:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:18.167 16:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:18.167 16:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:18.167 16:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4421").connected' 00:34:18.426 16:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:18.426 16:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:18.426 16:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:18.426 16:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:18.683 16:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:18.683 16:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:18.683 16:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:18.683 16:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:19.249 16:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:19.249 16:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:34:19.249 16:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:19.249 16:42:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:19.507 16:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:34:20.882 16:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:34:20.882 16:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:20.882 16:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:20.882 16:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:20.882 16:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:20.882 16:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:20.882 16:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:20.882 16:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:21.141 16:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:21.141 16:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:21.141 
16:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:21.141 16:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:21.399 16:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:21.399 16:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:21.399 16:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:21.399 16:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:21.657 16:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:21.657 16:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:21.657 16:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:21.657 16:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:21.915 16:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:21.915 16:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 
00:34:21.915 16:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:21.915 16:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:22.174 16:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:22.174 16:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:34:22.174 16:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:22.433 16:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:22.691 16:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:34:24.064 16:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:34:24.064 16:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:24.064 16:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:24.064 16:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").current' 00:34:24.064 16:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:24.064 16:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:24.064 16:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:24.064 16:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:24.322 16:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:24.322 16:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:24.322 16:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:24.322 16:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:24.580 16:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:24.580 16:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:24.580 16:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:24.580 16:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").connected' 00:34:24.838 16:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:24.838 16:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:24.838 16:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:24.838 16:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:25.096 16:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:25.096 16:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:25.096 16:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:25.096 16:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:25.660 16:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:25.660 16:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:34:25.660 16:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:25.660 16:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:25.919 16:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:34:27.291 16:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:34:27.292 16:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:27.292 16:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:27.292 16:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:27.292 16:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:27.292 16:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:27.292 16:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:27.292 16:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:27.550 16:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:27.550 16:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:27.550 16:42:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:27.550 16:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:27.808 16:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:27.808 16:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:27.808 16:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:27.808 16:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:28.066 16:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:28.066 16:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:28.066 16:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:28.066 16:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:28.324 16:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:28.324 16:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:28.324 
16:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:28.324 16:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:28.890 16:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:28.890 16:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3291585 00:34:28.890 16:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 3291585 ']' 00:34:28.890 16:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 3291585 00:34:28.890 16:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:34:28.890 16:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:28.890 16:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3291585 00:34:28.890 16:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:34:28.890 16:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:34:28.890 16:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3291585' 00:34:28.890 killing process with pid 3291585 00:34:28.890 16:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 3291585 00:34:28.890 16:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 3291585 00:34:28.890 { 00:34:28.890 
"results": [ 00:34:28.890 { 00:34:28.890 "job": "Nvme0n1", 00:34:28.890 "core_mask": "0x4", 00:34:28.890 "workload": "verify", 00:34:28.890 "status": "terminated", 00:34:28.890 "verify_range": { 00:34:28.890 "start": 0, 00:34:28.890 "length": 16384 00:34:28.890 }, 00:34:28.890 "queue_depth": 128, 00:34:28.890 "io_size": 4096, 00:34:28.890 "runtime": 34.471285, 00:34:28.890 "iops": 5892.034486094731, 00:34:28.890 "mibps": 23.015759711307542, 00:34:28.890 "io_failed": 0, 00:34:28.890 "io_timeout": 0, 00:34:28.890 "avg_latency_us": 21688.022136224434, 00:34:28.890 "min_latency_us": 236.65777777777777, 00:34:28.890 "max_latency_us": 4101097.2444444443 00:34:28.890 } 00:34:28.890 ], 00:34:28.890 "core_count": 1 00:34:28.890 } 00:34:29.848 16:42:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3291585 00:34:29.848 16:42:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:29.848 [2024-09-29 16:41:52.445635] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:34:29.848 [2024-09-29 16:41:52.445824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3291585 ] 00:34:29.848 [2024-09-29 16:41:52.576345] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:29.848 [2024-09-29 16:41:52.810058] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:34:29.848 [2024-09-29 16:41:54.454282] bdev_nvme.c:5605:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01 00:34:29.848 Running I/O for 90 seconds... 
00:34:29.848 6244.00 IOPS, 24.39 MiB/s 6365.00 IOPS, 24.86 MiB/s 6410.33 IOPS, 25.04 MiB/s 6366.00 IOPS, 24.87 MiB/s 6348.40 IOPS, 24.80 MiB/s 6312.83 IOPS, 24.66 MiB/s 6293.71 IOPS, 24.58 MiB/s 6294.38 IOPS, 24.59 MiB/s 6284.78 IOPS, 24.55 MiB/s 6270.70 IOPS, 24.49 MiB/s 6274.36 IOPS, 24.51 MiB/s 6260.75 IOPS, 24.46 MiB/s 6270.23 IOPS, 24.49 MiB/s 6264.07 IOPS, 24.47 MiB/s 6269.33 IOPS, 24.49 MiB/s [2024-09-29 16:42:09.865853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.848 [2024-09-29 16:42:09.865948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:29.848 [2024-09-29 16:42:09.866007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.848 [2024-09-29 16:42:09.866036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:29.848 [2024-09-29 16:42:09.866075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:105536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.848 [2024-09-29 16:42:09.866118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:29.848 [2024-09-29 16:42:09.866156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:105544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.849 [2024-09-29 16:42:09.866181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:29.849 [2024-09-29 16:42:09.866233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:105552 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:34:29.849 [2024-09-29 16:42:09.866258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:29.849 [2024-09-29 16:42:09.866292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:105560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.849 [2024-09-29 16:42:09.866331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:29.849 [2024-09-29 16:42:09.866365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:105568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.849 [2024-09-29 16:42:09.866388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:29.849 [2024-09-29 16:42:09.866422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:105576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.849 [2024-09-29 16:42:09.866445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:29.849 [2024-09-29 16:42:09.866479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:105584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.849 [2024-09-29 16:42:09.866514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:29.849 [2024-09-29 16:42:09.866549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:105592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.849 [2024-09-29 16:42:09.866573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0012 
p:0 m:0 dnr:0 00:34:29.849 [2024-09-29 16:42:09.866605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:105600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.849 [2024-09-29 16:42:09.866629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:29.849 [2024-09-29 16:42:09.866684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:105608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.849 [2024-09-29 16:42:09.866710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:29.849 [2024-09-29 16:42:09.866762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:105616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.849 [2024-09-29 16:42:09.866787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:29.849 [2024-09-29 16:42:09.866823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.849 [2024-09-29 16:42:09.866847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:29.849 [2024-09-29 16:42:09.866883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:105632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.849 [2024-09-29 16:42:09.866907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:29.849 [2024-09-29 16:42:09.866942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:105640 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:29.849 [2024-09-29 16:42:09.866981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:29.849 [2024-09-29 16:42:09.867018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:105648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.849 [2024-09-29 16:42:09.867057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:29.849 [2024-09-29 16:42:09.867091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:105656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.849 [2024-09-29 16:42:09.867113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:29.849 [2024-09-29 16:42:09.867146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:105664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.849 [2024-09-29 16:42:09.867169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:29.849 [2024-09-29 16:42:09.867202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:105672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.849 [2024-09-29 16:42:09.867224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:29.849 [2024-09-29 16:42:09.867256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:105680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.849 [2024-09-29 16:42:09.867284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001d p:0 
m:0 dnr:0 00:34:29.849 [2024-09-29 16:42:09.867318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:105688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.849 [2024-09-29 16:42:09.867341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:29.849 [2024-09-29 16:42:09.867373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:105696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.849 [2024-09-29 16:42:09.867395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:29.849 [2024-09-29 16:42:09.867428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.849 [2024-09-29 16:42:09.867450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:29.849 [2024-09-29 16:42:09.867483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:105712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.849 [2024-09-29 16:42:09.867505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:29.849 [2024-09-29 16:42:09.867538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:105720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.849 [2024-09-29 16:42:09.867562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:29.849 [2024-09-29 16:42:09.867595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:105728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:34:29.849 [2024-09-29 16:42:09.867618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:29.849 [2024-09-29 16:42:09.867651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:105736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.849 [2024-09-29 16:42:09.867695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:29.849 [2024-09-29 16:42:09.867734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:105744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.849 [2024-09-29 16:42:09.867772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:29.849 [2024-09-29 16:42:09.867824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:105752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.849 [2024-09-29 16:42:09.867849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:29.849 [2024-09-29 16:42:09.867885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:105760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.849 [2024-09-29 16:42:09.867915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:29.849 [2024-09-29 16:42:09.867953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:105768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.849 [2024-09-29 16:42:09.867979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:34:29.849 [2024-09-29 16:42:09.868015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:105776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.849 [2024-09-29 16:42:09.868039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:29.850 [2024-09-29 16:42:09.868096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:105784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.850 [2024-09-29 16:42:09.868120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:29.850 [2024-09-29 16:42:09.868169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:105792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.850 [2024-09-29 16:42:09.868193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:29.850 [2024-09-29 16:42:09.868227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:105800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.850 [2024-09-29 16:42:09.868250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:29.850 [2024-09-29 16:42:09.868298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:105808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.850 [2024-09-29 16:42:09.868323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:29.850 [2024-09-29 16:42:09.868359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:105816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:29.850 [2024-09-29 16:42:09.868383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:29.850 [2024-09-29 16:42:09.868418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:105824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.850 [2024-09-29 16:42:09.868442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:29.850 [2024-09-29 16:42:09.868477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:105832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.850 [2024-09-29 16:42:09.868500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:29.850 [2024-09-29 16:42:09.868536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:105840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.850 [2024-09-29 16:42:09.868559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:29.850 [2024-09-29 16:42:09.868594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:105848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.850 [2024-09-29 16:42:09.868634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:29.850 [2024-09-29 16:42:09.868669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.850 [2024-09-29 16:42:09.868719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 
00:34:29.850 [2024-09-29 16:42:09.868756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:105864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.850 [2024-09-29 16:42:09.868780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:29.850 [2024-09-29 16:42:09.868813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:105872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.850 [2024-09-29 16:42:09.868836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:29.850 [2024-09-29 16:42:09.868876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:105880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.850 [2024-09-29 16:42:09.868900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:29.850 [2024-09-29 16:42:09.868934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:105888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.850 [2024-09-29 16:42:09.868958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:29.850 [2024-09-29 16:42:09.870243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:105896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.850 [2024-09-29 16:42:09.870273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:29.850 [2024-09-29 16:42:09.870325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:105904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:29.850 [2024-09-29 16:42:09.870351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:29.850 [2024-09-29 16:42:09.870385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:105912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.850 [2024-09-29 16:42:09.870408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:29.850 [2024-09-29 16:42:09.870440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:105920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.850 [2024-09-29 16:42:09.870462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:29.850 [2024-09-29 16:42:09.870496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:105928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.850 [2024-09-29 16:42:09.870518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:29.850 [2024-09-29 16:42:09.870551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:105936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.850 [2024-09-29 16:42:09.870574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:29.850 [2024-09-29 16:42:09.870606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:105944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.850 [2024-09-29 16:42:09.870629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003e p:0 m:0 dnr:0 
00:34:29.850 [2024-09-29 16:42:09.870661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:105952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.850 [2024-09-29 16:42:09.870708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:29.850 [2024-09-29 16:42:09.870745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:105960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.850 [2024-09-29 16:42:09.870770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:29.850 [2024-09-29 16:42:09.870802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:105968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.850 [2024-09-29 16:42:09.870826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:29.850 [2024-09-29 16:42:09.870864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:105976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.850 [2024-09-29 16:42:09.870889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:29.850 [2024-09-29 16:42:09.870923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:105984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.850 [2024-09-29 16:42:09.870946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:29.850 [2024-09-29 16:42:09.870994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:105992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:29.850 [2024-09-29 16:42:09.871018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:29.850 [2024-09-29 16:42:09.871051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:106000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.850 [2024-09-29 16:42:09.871073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:29.850 [2024-09-29 16:42:09.871106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:106008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.850 [2024-09-29 16:42:09.871128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:29.850 [2024-09-29 16:42:09.871200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:106016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.850 [2024-09-29 16:42:09.871241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:29.851 [2024-09-29 16:42:09.871276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:106024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.851 [2024-09-29 16:42:09.871300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:29.851 [2024-09-29 16:42:09.871334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:106032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.851 [2024-09-29 16:42:09.871358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 
00:34:29.851 [2024-09-29 16:42:09.871391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:106040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.851 [2024-09-29 16:42:09.871415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:29.851 [2024-09-29 16:42:09.871449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:106048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.851 [2024-09-29 16:42:09.871473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:29.851 [2024-09-29 16:42:09.871507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.851 [2024-09-29 16:42:09.871531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:29.851 [2024-09-29 16:42:09.871565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:106064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.851 [2024-09-29 16:42:09.871589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:29.851 [2024-09-29 16:42:09.871622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:106072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.851 [2024-09-29 16:42:09.871666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:29.851 [2024-09-29 16:42:09.871714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:106080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:29.851 [2024-09-29 16:42:09.871739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:29.851 [2024-09-29 16:42:09.871774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:106088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.851 [2024-09-29 16:42:09.871799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:29.851 [2024-09-29 16:42:09.871833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.851 [2024-09-29 16:42:09.871858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:29.851 [2024-09-29 16:42:09.871893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:106104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.851 [2024-09-29 16:42:09.871919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:29.851 [2024-09-29 16:42:09.871955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:106112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.851 [2024-09-29 16:42:09.871995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:29.851 [2024-09-29 16:42:09.872030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:106120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.851 [2024-09-29 16:42:09.872053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 
00:34:29.851 [2024-09-29 16:42:09.872087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:106128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.851 [2024-09-29 16:42:09.872110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:29.851 [2024-09-29 16:42:09.872143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:106136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.851 [2024-09-29 16:42:09.872167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:29.851 [2024-09-29 16:42:09.872201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:106144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.851 [2024-09-29 16:42:09.872225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:29.851 [2024-09-29 16:42:09.872258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:106152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.851 [2024-09-29 16:42:09.872281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:29.851 [2024-09-29 16:42:09.872313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:106160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.851 [2024-09-29 16:42:09.872336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:29.851 [2024-09-29 16:42:09.872369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:106168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:29.851 [2024-09-29 16:42:09.872397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:29.851 [2024-09-29 16:42:09.872431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:106176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.851 [2024-09-29 16:42:09.872454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:29.851 [2024-09-29 16:42:09.872486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:106184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.851 [2024-09-29 16:42:09.872509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:29.851 [2024-09-29 16:42:09.872543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:106192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.851 [2024-09-29 16:42:09.872566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:29.851 [2024-09-29 16:42:09.872599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:106200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.851 [2024-09-29 16:42:09.872623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:29.851 [2024-09-29 16:42:09.872656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:106208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.851 [2024-09-29 16:42:09.872705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0 
00:34:29.851 [2024-09-29 16:42:09.872743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:106216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.851 [2024-09-29 16:42:09.872768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:29.851 [2024-09-29 16:42:09.872802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:106224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.851 [2024-09-29 16:42:09.872826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:29.851 [2024-09-29 16:42:09.872860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:106232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.851 [2024-09-29 16:42:09.872885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:29.851 [2024-09-29 16:42:09.872919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:106240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.851 [2024-09-29 16:42:09.872943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:29.851 [2024-09-29 16:42:09.872991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:106248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.851 [2024-09-29 16:42:09.873015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:29.851 [2024-09-29 16:42:09.873049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:106256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:29.851 [2024-09-29 16:42:09.873073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:29.851 [2024-09-29 16:42:09.873106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:106264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.852 [2024-09-29 16:42:09.873129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:29.852 [2024-09-29 16:42:09.873168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:106272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.852 [2024-09-29 16:42:09.873191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:29.852 [2024-09-29 16:42:09.873225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:106288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.852 [2024-09-29 16:42:09.873249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:29.852 [2024-09-29 16:42:09.873282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:106296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.852 [2024-09-29 16:42:09.873305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:29.852 [2024-09-29 16:42:09.873339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:106304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.852 [2024-09-29 16:42:09.873362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006a p:0 m:0 dnr:0 
00:34:29.852 [2024-09-29 16:42:09.873395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:106312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.852 [2024-09-29 16:42:09.873418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:29.852 [2024-09-29 16:42:09.873450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:106320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.852 [2024-09-29 16:42:09.873472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:29.852 [2024-09-29 16:42:09.873507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:106328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.852 [2024-09-29 16:42:09.873530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:29.852 [2024-09-29 16:42:09.873563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:106336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.852 [2024-09-29 16:42:09.873586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:29.852 [2024-09-29 16:42:09.873620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.852 [2024-09-29 16:42:09.873643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:29.852 [2024-09-29 16:42:09.873702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:106352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.852 
[2024-09-29 16:42:09.873726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:29.852 [2024-09-29 16:42:09.873761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:106360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.852 [2024-09-29 16:42:09.873785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:29.852 [2024-09-29 16:42:09.873819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:106368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.852 [2024-09-29 16:42:09.873843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:29.852 [2024-09-29 16:42:09.873883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:106376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.852 [2024-09-29 16:42:09.873908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:29.852 [2024-09-29 16:42:09.873942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:106384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.852 [2024-09-29 16:42:09.873966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:29.852 [2024-09-29 16:42:09.874016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:106392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.852 [2024-09-29 16:42:09.874039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:29.852 [2024-09-29 
16:42:09.874073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:106400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.852 [2024-09-29 16:42:09.874096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:29.852 [2024-09-29 16:42:09.874130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:106408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.852 [2024-09-29 16:42:09.874153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:29.852 [2024-09-29 16:42:09.875132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:106280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.852 [2024-09-29 16:42:09.875164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:29.852 [2024-09-29 16:42:09.875222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:105520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.852 [2024-09-29 16:42:09.875262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:29.852 [2024-09-29 16:42:09.875301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:105528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.852 [2024-09-29 16:42:09.875326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:29.852 [2024-09-29 16:42:09.875361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:105536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.852 [2024-09-29 
16:42:09.875402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:29.852 [2024-09-29 16:42:09.875453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.852 [2024-09-29 16:42:09.875479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:29.852 [2024-09-29 16:42:09.875515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:105552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.852 [2024-09-29 16:42:09.875540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:29.852 [2024-09-29 16:42:09.875576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:105560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.852 [2024-09-29 16:42:09.875601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:29.852 [2024-09-29 16:42:09.875645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:105568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.852 [2024-09-29 16:42:09.875679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:29.852 [2024-09-29 16:42:09.875717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:105576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.852 [2024-09-29 16:42:09.875743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:29.852 [2024-09-29 
16:42:09.875780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:105584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.852 [2024-09-29 16:42:09.875805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:29.852 [2024-09-29 16:42:09.875840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:105592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.852 [2024-09-29 16:42:09.875865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:29.852 [2024-09-29 16:42:09.875900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:105600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.853 [2024-09-29 16:42:09.875924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:29.853 [2024-09-29 16:42:09.875959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:105608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.853 [2024-09-29 16:42:09.875984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:29.853 [2024-09-29 16:42:09.876035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:105616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.853 [2024-09-29 16:42:09.876058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:29.853 [2024-09-29 16:42:09.876093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:105624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.853 [2024-09-29 
16:42:09.876132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:29.853 [2024-09-29 16:42:09.876182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:105632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.853 [2024-09-29 16:42:09.876205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:29.853 [2024-09-29 16:42:09.876239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:105640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.853 [2024-09-29 16:42:09.876262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:29.853 [2024-09-29 16:42:09.876295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:105648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.853 [2024-09-29 16:42:09.876319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:29.853 [2024-09-29 16:42:09.876352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:105656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.853 [2024-09-29 16:42:09.876375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:29.853 [2024-09-29 16:42:09.876408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:105664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.853 [2024-09-29 16:42:09.876436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:29.853 [2024-09-29 
16:42:09.876472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:105672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.853 [2024-09-29 16:42:09.876496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:29.853 [2024-09-29 16:42:09.876530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:105680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.853 [2024-09-29 16:42:09.876553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:29.853 [2024-09-29 16:42:09.876586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:105688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.853 [2024-09-29 16:42:09.876609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:29.853 [2024-09-29 16:42:09.876644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.853 [2024-09-29 16:42:09.876694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:29.853 [2024-09-29 16:42:09.876732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:105704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.853 [2024-09-29 16:42:09.876755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:29.853 [2024-09-29 16:42:09.876790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:105712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.853 [2024-09-29 
16:42:09.876813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:29.853 [2024-09-29 16:42:09.876848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:105720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.853 [2024-09-29 16:42:09.876872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:29.853 [2024-09-29 16:42:09.876906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:105728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.853 [2024-09-29 16:42:09.876930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:29.853 [2024-09-29 16:42:09.876980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:105736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.853 [2024-09-29 16:42:09.877004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:29.853 [2024-09-29 16:42:09.877038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:105744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.853 [2024-09-29 16:42:09.877061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:29.853 [2024-09-29 16:42:09.877095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:105752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.853 [2024-09-29 16:42:09.877118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:29.853 [2024-09-29 
16:42:09.877152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:105760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.853 [2024-09-29 16:42:09.877179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:29.853 [2024-09-29 16:42:09.877214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:105768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.853 [2024-09-29 16:42:09.877237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:29.853 [2024-09-29 16:42:09.877269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.853 [2024-09-29 16:42:09.877292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:29.853 [2024-09-29 16:42:09.877325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:105784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.853 [2024-09-29 16:42:09.877348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:29.853 [2024-09-29 16:42:09.877381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:105792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.853 [2024-09-29 16:42:09.877404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:29.853 [2024-09-29 16:42:09.877437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:105800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.853 [2024-09-29 
16:42:09.877460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:29.853 [2024-09-29 16:42:09.877494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:105808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.853 [2024-09-29 16:42:09.877518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:29.853 [2024-09-29 16:42:09.877551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:105816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.853 [2024-09-29 16:42:09.877574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:29.853 [2024-09-29 16:42:09.877608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:105824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.853 [2024-09-29 16:42:09.877631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:29.853 [2024-09-29 16:42:09.877689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:105832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.853 [2024-09-29 16:42:09.877715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:29.853 [2024-09-29 16:42:09.877751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:105840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.854 [2024-09-29 16:42:09.877775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:29.854 [2024-09-29 
16:42:09.877810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:105848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.854 [2024-09-29 16:42:09.877841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:29.854 [2024-09-29 16:42:09.877876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:105856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.854 [2024-09-29 16:42:09.877922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:29.854 [2024-09-29 16:42:09.877960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:105864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.854 [2024-09-29 16:42:09.878005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:29.854 [2024-09-29 16:42:09.878042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.854 [2024-09-29 16:42:09.878066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:29.854 [2024-09-29 16:42:09.878100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.854 [2024-09-29 16:42:09.878124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:29.854 [2024-09-29 16:42:09.878728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:105888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.854 [2024-09-29 
16:42:09.878760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:29.854 [2024-09-29 16:42:09.878803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:106416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.854 [2024-09-29 16:42:09.878829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:29.854 [2024-09-29 16:42:09.878865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:106424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.854 [2024-09-29 16:42:09.878891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:29.854 [2024-09-29 16:42:09.878927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:106432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.854 [2024-09-29 16:42:09.878952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:29.854 [2024-09-29 16:42:09.879004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:106440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.854 [2024-09-29 16:42:09.879028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:29.854 [2024-09-29 16:42:09.879063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:106448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.854 [2024-09-29 16:42:09.879087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:29.854 [2024-09-29 
16:42:09.879121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:106456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.854 [2024-09-29 16:42:09.879146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:29.854 [2024-09-29 16:42:09.879180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:106464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.854 [2024-09-29 16:42:09.879204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:29.854 [2024-09-29 16:42:09.879238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.854 [2024-09-29 16:42:09.879277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:29.854 [2024-09-29 16:42:09.879317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:106480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.854 [2024-09-29 16:42:09.879341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:29.854 [2024-09-29 16:42:09.879374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:106488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.854 [2024-09-29 16:42:09.879398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:29.854 [2024-09-29 16:42:09.879432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:106496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.854 [2024-09-29 
16:42:09.879461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:29.854 [2024-09-29 16:42:09.879497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:106504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.854 [2024-09-29 16:42:09.879520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:29.854 [2024-09-29 16:42:09.879554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:106512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.854 [2024-09-29 16:42:09.879577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:29.854 [2024-09-29 16:42:09.879610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:106520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.854 [2024-09-29 16:42:09.879633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:29.854 [2024-09-29 16:42:09.879702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:106528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.854 [2024-09-29 16:42:09.879730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:29.854 [2024-09-29 16:42:09.879767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:106536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.854 [2024-09-29 16:42:09.879792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:29.854 [2024-09-29 
16:42:09.879828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:105896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.854 [2024-09-29 16:42:09.879853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:29.855 [2024-09-29 16:42:09.879889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:105904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.855 [2024-09-29 16:42:09.879914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:29.855 [2024-09-29 16:42:09.879951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.855 [2024-09-29 16:42:09.879976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:29.855 [2024-09-29 16:42:09.880026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:105920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.855 [2024-09-29 16:42:09.880049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:29.855 [2024-09-29 16:42:09.880089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:105928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.855 [2024-09-29 16:42:09.880112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:29.855 [2024-09-29 16:42:09.880145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:105936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.855 [2024-09-29 
16:42:09.880168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:29.855 [2024-09-29 16:42:09.880201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:105944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.855 [2024-09-29 16:42:09.880224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:29.855 [2024-09-29 16:42:09.880258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:105952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.855 [2024-09-29 16:42:09.880281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:29.855 [2024-09-29 16:42:09.880314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.855 [2024-09-29 16:42:09.880337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:29.855 [2024-09-29 16:42:09.880371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:105968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.855 [2024-09-29 16:42:09.880394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:29.855 [2024-09-29 16:42:09.880427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.855 [2024-09-29 16:42:09.880450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:29.855 [2024-09-29 
16:42:09.880483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:105984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.855 [2024-09-29 16:42:09.880506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:29.855 [2024-09-29 16:42:09.880539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:105992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.855 [2024-09-29 16:42:09.880563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:29.855 [2024-09-29 16:42:09.880596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:106000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.855 [2024-09-29 16:42:09.880636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:29.855 [2024-09-29 16:42:09.880694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:106008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.855 [2024-09-29 16:42:09.880721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:29.855 [2024-09-29 16:42:09.880773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:106016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.855 [2024-09-29 16:42:09.880798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:29.855 [2024-09-29 16:42:09.880834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:106024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.855 [2024-09-29 
16:42:09.880863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:29.855 [2024-09-29 16:42:09.880899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:106032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.855 [2024-09-29 16:42:09.880925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:29.855 [2024-09-29 16:42:09.880960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:106040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.855 [2024-09-29 16:42:09.881000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:29.855 [2024-09-29 16:42:09.881036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:106048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.855 [2024-09-29 16:42:09.881060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:29.855 [2024-09-29 16:42:09.881094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:106056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.855 [2024-09-29 16:42:09.881118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:29.855 [2024-09-29 16:42:09.881167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.855 [2024-09-29 16:42:09.881190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:29.855 [2024-09-29 
16:42:09.881222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:106072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.855 [2024-09-29 16:42:09.881245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:29.855 [2024-09-29 16:42:09.881278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:106080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.855 [2024-09-29 16:42:09.881301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:29.855 [2024-09-29 16:42:09.881334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:106088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.855 [2024-09-29 16:42:09.881357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:29.855 [2024-09-29 16:42:09.881391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:106096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.855 [2024-09-29 16:42:09.881414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:29.855 [2024-09-29 16:42:09.881447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.855 [2024-09-29 16:42:09.881471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:29.855 [2024-09-29 16:42:09.881504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:106112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.855 [2024-09-29 
16:42:09.881527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:29.855 [2024-09-29 16:42:09.881560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:106120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.855 [2024-09-29 16:42:09.881587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:29.855 [2024-09-29 16:42:09.881622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:106128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.855 [2024-09-29 16:42:09.881645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:29.856 [2024-09-29 16:42:09.881702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:106136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.856 [2024-09-29 16:42:09.881728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:29.856 [2024-09-29 16:42:09.881763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:106144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.856 [2024-09-29 16:42:09.881787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:29.856 [2024-09-29 16:42:09.881820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:106152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.856 [2024-09-29 16:42:09.881844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:29.856 [2024-09-29 
16:42:09.881878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:106160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.856 [2024-09-29 16:42:09.881902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:29.856 [2024-09-29 16:42:09.881936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:106168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.856 [2024-09-29 16:42:09.881960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:29.856 [2024-09-29 16:42:09.882010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:106176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.856 [2024-09-29 16:42:09.882033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:29.856 [2024-09-29 16:42:09.882068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:106184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.856 [2024-09-29 16:42:09.882091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:29.856 [2024-09-29 16:42:09.882124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:106192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.856 [2024-09-29 16:42:09.882147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:29.856 [2024-09-29 16:42:09.882181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:106200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.856 [2024-09-29 
16:42:09.882204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:29.856 [2024-09-29 16:42:09.882237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:106208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.856 [2024-09-29 16:42:09.882260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:29.856 [2024-09-29 16:42:09.882294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:106216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.856 [2024-09-29 16:42:09.882323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:29.856 [2024-09-29 16:42:09.882358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:106224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.856 [2024-09-29 16:42:09.882381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:29.856 [2024-09-29 16:42:09.882414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:106232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.856 [2024-09-29 16:42:09.882438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:29.856 [2024-09-29 16:42:09.882471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:106240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.856 [2024-09-29 16:42:09.882494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:29.856 [2024-09-29 
16:42:09.882527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:106248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.856 [2024-09-29 16:42:09.882551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:29.856 [2024-09-29 16:42:09.882584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:106256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.856 [2024-09-29 16:42:09.882607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:29.856 [2024-09-29 16:42:09.882641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:106264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.856 [2024-09-29 16:42:09.882689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:29.856 [2024-09-29 16:42:09.882741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:106272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.856 [2024-09-29 16:42:09.882766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:29.856 [2024-09-29 16:42:09.882802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:106288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.856 [2024-09-29 16:42:09.882838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:29.856 [2024-09-29 16:42:09.882876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:106296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.856 [2024-09-29 
16:42:09.882901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:29.856 [2024-09-29 16:42:09.882937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:106304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.856 [2024-09-29 16:42:09.882963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:29.856 [2024-09-29 16:42:09.883015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:106312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.856 [2024-09-29 16:42:09.883055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:29.856 [2024-09-29 16:42:09.883090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:106320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.856 [2024-09-29 16:42:09.883114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:29.856 [2024-09-29 16:42:09.883153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:106328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.856 [2024-09-29 16:42:09.883177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:29.856 [2024-09-29 16:42:09.883212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:106336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.856 [2024-09-29 16:42:09.883235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:29.856 [2024-09-29 16:42:09.883270] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:106344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.856 [2024-09-29 16:42:09.883293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:29.856 [2024-09-29 16:42:09.883328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:106352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.856 [2024-09-29 16:42:09.883351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:29.856 [2024-09-29 16:42:09.883385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:106360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.856 [2024-09-29 16:42:09.883409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:29.856 [2024-09-29 16:42:09.883442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:106368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.856 [2024-09-29 16:42:09.883466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:29.856 [2024-09-29 16:42:09.883501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:106376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.857 [2024-09-29 16:42:09.883524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:29.857 [2024-09-29 16:42:09.883557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:106384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.857 [2024-09-29 16:42:09.883580] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:29.857 [2024-09-29 16:42:09.883614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:106392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.857 [2024-09-29 16:42:09.883638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:29.857 [2024-09-29 16:42:09.883697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:106400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.857 [2024-09-29 16:42:09.883724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:29.857 [2024-09-29 16:42:09.884760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:106408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.857 [2024-09-29 16:42:09.884793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:29.857 [2024-09-29 16:42:09.884837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:106280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.857 [2024-09-29 16:42:09.884863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:29.857 [2024-09-29 16:42:09.884905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:105520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.857 [2024-09-29 16:42:09.884931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:29.857 [2024-09-29 16:42:09.884967] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:105528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.857 [2024-09-29 16:42:09.884992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:29.857 [2024-09-29 16:42:09.885028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:105536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.857 [2024-09-29 16:42:09.885053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:29.857 [2024-09-29 16:42:09.885088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:105544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.857 [2024-09-29 16:42:09.885129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:29.857 [2024-09-29 16:42:09.885165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:105552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.857 [2024-09-29 16:42:09.885189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:29.857 [2024-09-29 16:42:09.885224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:105560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.857 [2024-09-29 16:42:09.885248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:29.857 [2024-09-29 16:42:09.885298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:105568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.857 [2024-09-29 16:42:09.885323] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:29.857 [2024-09-29 16:42:09.885356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:105576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.857 [2024-09-29 16:42:09.885380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:29.857 [2024-09-29 16:42:09.885428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:105584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.857 [2024-09-29 16:42:09.885453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:29.857 [2024-09-29 16:42:09.885488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:105592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.857 [2024-09-29 16:42:09.885518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:29.857 [2024-09-29 16:42:09.885554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:105600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.857 [2024-09-29 16:42:09.885578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:29.857 [2024-09-29 16:42:09.885613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.857 [2024-09-29 16:42:09.885639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:29.857 [2024-09-29 16:42:09.885700] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:105616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.857 [2024-09-29 16:42:09.885747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:29.857 [2024-09-29 16:42:09.885785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:105624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.857 [2024-09-29 16:42:09.885809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:29.857 [2024-09-29 16:42:09.885865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:105632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.857 [2024-09-29 16:42:09.885890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:29.857 [2024-09-29 16:42:09.885925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:105640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.857 [2024-09-29 16:42:09.885949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:29.857 [2024-09-29 16:42:09.885999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:105648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.857 [2024-09-29 16:42:09.886024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:29.857 [2024-09-29 16:42:09.886058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:105656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.857 [2024-09-29 16:42:09.886083] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:29.857 [2024-09-29 16:42:09.886116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:105664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.857 [2024-09-29 16:42:09.886140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:29.857 [2024-09-29 16:42:09.886174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:105672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.857 [2024-09-29 16:42:09.886198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:29.857 [2024-09-29 16:42:09.886232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:105680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.857 [2024-09-29 16:42:09.886256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:29.857 [2024-09-29 16:42:09.886289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.857 [2024-09-29 16:42:09.886312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:29.857 [2024-09-29 16:42:09.886346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:105696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.857 [2024-09-29 16:42:09.886370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:29.857 [2024-09-29 16:42:09.886404] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:105704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.857 [2024-09-29 16:42:09.886428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:29.858 [2024-09-29 16:42:09.886462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:105712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.858 [2024-09-29 16:42:09.886491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:29.858 [2024-09-29 16:42:09.886526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:105720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.858 [2024-09-29 16:42:09.886555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:29.858 [2024-09-29 16:42:09.886591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:105728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.858 [2024-09-29 16:42:09.886615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:29.858 [2024-09-29 16:42:09.886664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:105736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.858 [2024-09-29 16:42:09.886697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:29.858 [2024-09-29 16:42:09.886735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:105744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.858 [2024-09-29 16:42:09.886760] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:29.858 [2024-09-29 16:42:09.886795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:105752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.858 [2024-09-29 16:42:09.886820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:29.858 [2024-09-29 16:42:09.886856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:105760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.858 [2024-09-29 16:42:09.886882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:29.858 [2024-09-29 16:42:09.886917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:105768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.858 [2024-09-29 16:42:09.886942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:29.858 [2024-09-29 16:42:09.886992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:105776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.858 [2024-09-29 16:42:09.887016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:29.858 [2024-09-29 16:42:09.887050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:105784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.858 [2024-09-29 16:42:09.887074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:29.858 [2024-09-29 16:42:09.887109] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:105792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.858 [2024-09-29 16:42:09.887133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:29.858 [2024-09-29 16:42:09.887167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:105800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.858 [2024-09-29 16:42:09.887191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:29.858 [2024-09-29 16:42:09.887225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:105808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.858 [2024-09-29 16:42:09.887253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:29.858 [2024-09-29 16:42:09.887288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:105816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.858 [2024-09-29 16:42:09.887313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:29.858 [2024-09-29 16:42:09.887348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:105824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.858 [2024-09-29 16:42:09.887372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:29.858 [2024-09-29 16:42:09.887406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:105832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.858 [2024-09-29 16:42:09.887429] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:29.858 [2024-09-29 16:42:09.887463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.858 [2024-09-29 16:42:09.887487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:29.858 [2024-09-29 16:42:09.887538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:105848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.858 [2024-09-29 16:42:09.887564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:29.858 [2024-09-29 16:42:09.887598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:105856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.858 [2024-09-29 16:42:09.887623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:29.858 [2024-09-29 16:42:09.887684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:105864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.858 [2024-09-29 16:42:09.887710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:29.858 [2024-09-29 16:42:09.887755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:105872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.858 [2024-09-29 16:42:09.887781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:29.858 [2024-09-29 16:42:09.888367] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:105880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.858 [2024-09-29 16:42:09.888396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:29.858 [2024-09-29 16:42:09.888436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:105888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.858 [2024-09-29 16:42:09.888461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:29.858 [2024-09-29 16:42:09.888496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:106416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.858 [2024-09-29 16:42:09.888520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:29.858 [2024-09-29 16:42:09.888555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:106424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.858 [2024-09-29 16:42:09.888595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:29.858 [2024-09-29 16:42:09.888636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:106432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.858 [2024-09-29 16:42:09.888684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:29.858 [2024-09-29 16:42:09.888739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:106440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.858 [2024-09-29 16:42:09.888765] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:29.858 [2024-09-29 16:42:09.888801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:106448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.858 [2024-09-29 16:42:09.888826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:29.858 [2024-09-29 16:42:09.888862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:106456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.858 [2024-09-29 16:42:09.888886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:29.858 [2024-09-29 16:42:09.888922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:106464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.859 [2024-09-29 16:42:09.888947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:29.859 [2024-09-29 16:42:09.888998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:106472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.859 [2024-09-29 16:42:09.889023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:29.859 [2024-09-29 16:42:09.889071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:106480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.859 [2024-09-29 16:42:09.889094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:29.859 [2024-09-29 16:42:09.889127] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:106488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.859 [2024-09-29 16:42:09.889150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:29.859 [2024-09-29 16:42:09.889181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:106496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.859 [2024-09-29 16:42:09.889204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:29.859 [2024-09-29 16:42:09.889237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:106504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.859 [2024-09-29 16:42:09.889259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:29.859 [2024-09-29 16:42:09.889291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:106512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.859 [2024-09-29 16:42:09.889314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:29.859 [2024-09-29 16:42:09.889345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:106520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.859 [2024-09-29 16:42:09.889368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:29.859 [2024-09-29 16:42:09.889404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:106528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.859 [2024-09-29 16:42:09.889427] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:29.859 [2024-09-29 16:42:09.889458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:106536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.859 [2024-09-29 16:42:09.889482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:29.859 [2024-09-29 16:42:09.889516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:105896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.859 [2024-09-29 16:42:09.889538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:29.859 [2024-09-29 16:42:09.889571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:105904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.859 [2024-09-29 16:42:09.889593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:29.859 [2024-09-29 16:42:09.889625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:105912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.859 [2024-09-29 16:42:09.889647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:29.859 [2024-09-29 16:42:09.889709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:105920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.859 [2024-09-29 16:42:09.889734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:29.859 [2024-09-29 16:42:09.889768] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:105928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.859 [2024-09-29 16:42:09.889793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:29.859 [2024-09-29 16:42:09.889826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:105936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.859 [2024-09-29 16:42:09.889850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:29.859 [2024-09-29 16:42:09.889883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:105944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.859 [2024-09-29 16:42:09.889907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:29.859 [2024-09-29 16:42:09.889941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:105952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.859 [2024-09-29 16:42:09.889979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:29.859 [2024-09-29 16:42:09.890012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:105960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.859 [2024-09-29 16:42:09.890034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:29.859 [2024-09-29 16:42:09.890066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:105968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.859 [2024-09-29 16:42:09.890088] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:29.859 [2024-09-29 16:42:09.890126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:105976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.859 [2024-09-29 16:42:09.890159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:29.859 [2024-09-29 16:42:09.890191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:105984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.859 [2024-09-29 16:42:09.890213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:29.859 [2024-09-29 16:42:09.890245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:105992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.859 [2024-09-29 16:42:09.890267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:29.859 [2024-09-29 16:42:09.890313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:106000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.859 [2024-09-29 16:42:09.890338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:29.859 [2024-09-29 16:42:09.890373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:106008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.859 [2024-09-29 16:42:09.890397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:29.859 [2024-09-29 16:42:09.890448] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:106016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.859 [2024-09-29 16:42:09.890472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:29.859 [2024-09-29 16:42:09.890506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:106024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.859 [2024-09-29 16:42:09.890537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:29.859 [2024-09-29 16:42:09.890571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:106032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.859 [2024-09-29 16:42:09.890595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:29.859 [2024-09-29 16:42:09.890628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.859 [2024-09-29 16:42:09.890652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:29.859 [2024-09-29 16:42:09.890715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:106048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.860 [2024-09-29 16:42:09.890742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:29.860 [2024-09-29 16:42:09.890778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:106056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.860 [2024-09-29 16:42:09.890802] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:29.860 [2024-09-29 16:42:09.890852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:106064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.860 [2024-09-29 16:42:09.890877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:29.860 [2024-09-29 16:42:09.890910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:106072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.860 [2024-09-29 16:42:09.890939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:29.860 [2024-09-29 16:42:09.890992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.860 [2024-09-29 16:42:09.891033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:29.860 [2024-09-29 16:42:09.891065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:106088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.860 [2024-09-29 16:42:09.891089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:29.860 [2024-09-29 16:42:09.891121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:106096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.860 [2024-09-29 16:42:09.891143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:29.860 [2024-09-29 16:42:09.891175] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:106104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.860 [2024-09-29 16:42:09.891198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:29.860 [2024-09-29 16:42:09.891230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:106112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.860 [2024-09-29 16:42:09.891252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:29.860 [2024-09-29 16:42:09.891285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:106120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.860 [2024-09-29 16:42:09.891308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:29.860 [2024-09-29 16:42:09.891340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:106128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.860 [2024-09-29 16:42:09.891363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:29.860 [2024-09-29 16:42:09.891396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:106136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.860 [2024-09-29 16:42:09.891418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:29.860 [2024-09-29 16:42:09.891450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:106144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.860 [2024-09-29 16:42:09.891472] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:29.860 [2024-09-29 16:42:09.891505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:106152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.860 [2024-09-29 16:42:09.891527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:29.860 [2024-09-29 16:42:09.891559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:106160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.860 [2024-09-29 16:42:09.891582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:29.860 [2024-09-29 16:42:09.891614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:106168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.860 [2024-09-29 16:42:09.891641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:29.860 [2024-09-29 16:42:09.891713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:106176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.860 [2024-09-29 16:42:09.891742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:29.860 [2024-09-29 16:42:09.891777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:106184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.860 [2024-09-29 16:42:09.891801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:29.860 [2024-09-29 16:42:09.891834] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:106192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.860 [2024-09-29 16:42:09.891859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:29.860 [2024-09-29 16:42:09.891893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:106200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.860 [2024-09-29 16:42:09.891917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:29.860 [2024-09-29 16:42:09.891966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:106208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.860 [2024-09-29 16:42:09.891990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:29.860 [2024-09-29 16:42:09.892038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:106216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.860 [2024-09-29 16:42:09.892061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:29.860 [2024-09-29 16:42:09.892108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:106224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.860 [2024-09-29 16:42:09.892132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:29.860 [2024-09-29 16:42:09.892164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:106232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.860 [2024-09-29 16:42:09.892193] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:29.860 [2024-09-29 16:42:09.892227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:106240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.860 [2024-09-29 16:42:09.892250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:29.860 [2024-09-29 16:42:09.892282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:106248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.861 [2024-09-29 16:42:09.892305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:29.861 [2024-09-29 16:42:09.892339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:106256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.861 [2024-09-29 16:42:09.892362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:29.861 [2024-09-29 16:42:09.892396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:106264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.861 [2024-09-29 16:42:09.892434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:29.861 [2024-09-29 16:42:09.892471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:106272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.861 [2024-09-29 16:42:09.892494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:29.861 [2024-09-29 16:42:09.892527] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:106288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.861 [2024-09-29 16:42:09.892549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:29.861 [2024-09-29 16:42:09.892581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:106296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.861 [2024-09-29 16:42:09.892603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:29.861 [2024-09-29 16:42:09.892637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:106304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.861 [2024-09-29 16:42:09.892685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:29.861 [2024-09-29 16:42:09.892725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:106312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.861 [2024-09-29 16:42:09.892750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:29.861 [2024-09-29 16:42:09.892786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:106320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.861 [2024-09-29 16:42:09.892811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:29.861 [2024-09-29 16:42:09.892846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.861 [2024-09-29 16:42:09.892871] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:29.861 [2024-09-29 16:42:09.892907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:106336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.861 [2024-09-29 16:42:09.892932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:29.861 [2024-09-29 16:42:09.892981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:106344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.861 [2024-09-29 16:42:09.893006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:29.861 [2024-09-29 16:42:09.893055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:106352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.861 [2024-09-29 16:42:09.893078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:29.861 [2024-09-29 16:42:09.893110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:106360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.861 [2024-09-29 16:42:09.893133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:29.861 [2024-09-29 16:42:09.893166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:106368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.861 [2024-09-29 16:42:09.893194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:29.861 [2024-09-29 16:42:09.893231] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:106376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.861 [2024-09-29 16:42:09.893254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:29.861 [2024-09-29 16:42:09.893287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:106384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.861 [2024-09-29 16:42:09.893310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:29.861 [2024-09-29 16:42:09.893343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:106392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.861 [2024-09-29 16:42:09.893366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:29.861 [2024-09-29 16:42:09.894507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:106400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.861 [2024-09-29 16:42:09.894539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:29.861 [2024-09-29 16:42:09.894582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:106408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.861 [2024-09-29 16:42:09.894609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:29.861 [2024-09-29 16:42:09.894645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:106280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.861 [2024-09-29 16:42:09.894680] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:29.861 [2024-09-29 16:42:09.894720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:105520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.861 [2024-09-29 16:42:09.894745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:29.861 [2024-09-29 16:42:09.894782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.861 [2024-09-29 16:42:09.894807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:29.861 [2024-09-29 16:42:09.894842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:105536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.861 [2024-09-29 16:42:09.894867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:29.861 [2024-09-29 16:42:09.894903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:105544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.861 [2024-09-29 16:42:09.894928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:29.861 [2024-09-29 16:42:09.894963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:105552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.861 [2024-09-29 16:42:09.894989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:29.861 [2024-09-29 16:42:09.895040] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:105560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.861 [2024-09-29 16:42:09.895064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:29.861 [2024-09-29 16:42:09.895118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:105568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.861 [2024-09-29 16:42:09.895142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:29.861 [2024-09-29 16:42:09.895175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:105576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.861 [2024-09-29 16:42:09.895212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:29.861 [2024-09-29 16:42:09.895247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:105584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.861 [2024-09-29 16:42:09.895286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:29.861 [2024-09-29 16:42:09.895322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:105592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.862 [2024-09-29 16:42:09.895347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:29.862 [2024-09-29 16:42:09.895382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:105600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.862 [2024-09-29 16:42:09.895406] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:29.862 [2024-09-29 16:42:09.895440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:105608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.862 [2024-09-29 16:42:09.895464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:29.862 [2024-09-29 16:42:09.895499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:105616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.862 [2024-09-29 16:42:09.895523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:29.862 [2024-09-29 16:42:09.895572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:105624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.862 [2024-09-29 16:42:09.895596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:29.862 [2024-09-29 16:42:09.895644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:105632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.862 [2024-09-29 16:42:09.895693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:29.862 [2024-09-29 16:42:09.895731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:105640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.862 [2024-09-29 16:42:09.895755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:29.862 [2024-09-29 16:42:09.895790] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:105648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.862 [2024-09-29 16:42:09.895814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:29.862 [2024-09-29 16:42:09.895848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:105656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.862 [2024-09-29 16:42:09.895872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:29.862 [2024-09-29 16:42:09.895907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:105664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.862 [2024-09-29 16:42:09.895936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:29.862 [2024-09-29 16:42:09.895989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:105672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.862 [2024-09-29 16:42:09.896028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:29.862 [2024-09-29 16:42:09.896063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.862 [2024-09-29 16:42:09.896085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:29.862 [2024-09-29 16:42:09.896118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:105688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.862 [2024-09-29 16:42:09.896140] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:29.862 [2024-09-29 16:42:09.896174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:105696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.862 [2024-09-29 16:42:09.896196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:29.862 [2024-09-29 16:42:09.896229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:105704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.862 [2024-09-29 16:42:09.896251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:29.862 [2024-09-29 16:42:09.896283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:105712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.862 [2024-09-29 16:42:09.896305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:29.862 [2024-09-29 16:42:09.896338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:105720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.862 [2024-09-29 16:42:09.896360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:29.862 [2024-09-29 16:42:09.896392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:105728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.862 [2024-09-29 16:42:09.896415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:29.862 [2024-09-29 16:42:09.896446] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:105736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.862 [2024-09-29 16:42:09.896469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:29.862 [2024-09-29 16:42:09.896500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:105744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.862 [2024-09-29 16:42:09.896522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:29.862 [2024-09-29 16:42:09.896554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:105752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.862 [2024-09-29 16:42:09.896576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:29.862 [2024-09-29 16:42:09.896609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.862 [2024-09-29 16:42:09.896635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:29.862 [2024-09-29 16:42:09.896691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:105768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.862 [2024-09-29 16:42:09.896718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:29.862 [2024-09-29 16:42:09.896754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:105776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.862 [2024-09-29 16:42:09.896778] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:29.862 [2024-09-29 16:42:09.896813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:105784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.862 [2024-09-29 16:42:09.896838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:29.862 [2024-09-29 16:42:09.896873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:105792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.862 [2024-09-29 16:42:09.896897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:29.862 [2024-09-29 16:42:09.896933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:105800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.862 [2024-09-29 16:42:09.896972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:29.862 [2024-09-29 16:42:09.897007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:105808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.862 [2024-09-29 16:42:09.897045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:29.862 [2024-09-29 16:42:09.897078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:105816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.862 [2024-09-29 16:42:09.897100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:29.862 [2024-09-29 16:42:09.897133] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:105824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.862 [2024-09-29 16:42:09.897156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:29.863 [2024-09-29 16:42:09.897189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:105832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.863 [2024-09-29 16:42:09.897211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:29.863 [2024-09-29 16:42:09.897243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:105840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.863 [2024-09-29 16:42:09.897266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:29.863 [2024-09-29 16:42:09.897316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:105848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.863 [2024-09-29 16:42:09.897340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:29.863 [2024-09-29 16:42:09.897376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.863 [2024-09-29 16:42:09.897411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:29.863 [2024-09-29 16:42:09.897450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.863 [2024-09-29 16:42:09.897474] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:29.863 [2024-09-29 16:42:09.898072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:105872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.863 [2024-09-29 16:42:09.898119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:29.863 [2024-09-29 16:42:09.898159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:105880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.863 [2024-09-29 16:42:09.898185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:29.863 [2024-09-29 16:42:09.898219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:105888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.863 [2024-09-29 16:42:09.898243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:29.863 [2024-09-29 16:42:09.898278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:106416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.863 [2024-09-29 16:42:09.898317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:29.863 [2024-09-29 16:42:09.898351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:106424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.863 [2024-09-29 16:42:09.898390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:29.863 [2024-09-29 16:42:09.898424] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:106432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.863 [2024-09-29 16:42:09.898447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:29.863 [2024-09-29 16:42:09.898480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:106440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.863 [2024-09-29 16:42:09.898503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:29.863 [2024-09-29 16:42:09.898535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:106448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.863 [2024-09-29 16:42:09.898558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:29.863 [2024-09-29 16:42:09.898590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.863 [2024-09-29 16:42:09.898613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:29.863 [2024-09-29 16:42:09.898645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:106464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.863 [2024-09-29 16:42:09.898693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:29.863 [2024-09-29 16:42:09.898746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:106472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.863 [2024-09-29 16:42:09.898771] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:29.863 [2024-09-29 16:42:09.898812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:106480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.863 [2024-09-29 16:42:09.898836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:29.863 [2024-09-29 16:42:09.898871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:106488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.863 [2024-09-29 16:42:09.898895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:29.863 [2024-09-29 16:42:09.898935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:106496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.863 [2024-09-29 16:42:09.898975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:29.863 [2024-09-29 16:42:09.899009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:106504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.863 [2024-09-29 16:42:09.899048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:29.863 [2024-09-29 16:42:09.899081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:106512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.863 [2024-09-29 16:42:09.899104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:29.863 [2024-09-29 16:42:09.899135] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:106520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.863 [2024-09-29 16:42:09.899157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:29.863 [2024-09-29 16:42:09.899189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:106528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.863 [2024-09-29 16:42:09.899212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:29.863 [2024-09-29 16:42:09.899243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:106536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.863 [2024-09-29 16:42:09.899265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:29.863 [2024-09-29 16:42:09.899296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.863 [2024-09-29 16:42:09.899318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:29.863 [2024-09-29 16:42:09.899351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:105904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.863 [2024-09-29 16:42:09.899373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:29.863 [2024-09-29 16:42:09.899404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:105912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.863 [2024-09-29 16:42:09.899426] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:29.863 [2024-09-29 16:42:09.899458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:105920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.863 [2024-09-29 16:42:09.899481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:29.863 [2024-09-29 16:42:09.899517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:105928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.863 [2024-09-29 16:42:09.899540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:29.863 [2024-09-29 16:42:09.899572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:105936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.864 [2024-09-29 16:42:09.899594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:29.864 [2024-09-29 16:42:09.899626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.864 [2024-09-29 16:42:09.899648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:29.864 [2024-09-29 16:42:09.899701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:105952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.864 [2024-09-29 16:42:09.899735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:29.864 [2024-09-29 16:42:09.899769] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.864 [2024-09-29 16:42:09.899792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:29.864 [2024-09-29 16:42:09.899825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:105968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.864 [2024-09-29 16:42:09.899847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:29.864 [2024-09-29 16:42:09.899881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:105976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.864 [2024-09-29 16:42:09.899909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:29.864 [2024-09-29 16:42:09.899959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:105984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.864 [2024-09-29 16:42:09.899984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:29.864 [2024-09-29 16:42:09.900018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:105992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.864 [2024-09-29 16:42:09.900042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:29.864 [2024-09-29 16:42:09.900075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:106000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.864 [2024-09-29 16:42:09.900100] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:29.864 [2024-09-29 16:42:09.900134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:106008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.864 [2024-09-29 16:42:09.900159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:29.864 [2024-09-29 16:42:09.900238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:106016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.864 [2024-09-29 16:42:09.900278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:29.864 [2024-09-29 16:42:09.900313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:106024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.864 [2024-09-29 16:42:09.900342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:29.864 [2024-09-29 16:42:09.900376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:106032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.864 [2024-09-29 16:42:09.900399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:29.864 [2024-09-29 16:42:09.900433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:106040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.864 [2024-09-29 16:42:09.900471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:29.864 [2024-09-29 16:42:09.900504] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.864 [2024-09-29 16:42:09.900526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:29.864 [2024-09-29 16:42:09.900558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:106056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.864 [2024-09-29 16:42:09.900580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:29.864 [2024-09-29 16:42:09.900613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:106064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.864 [2024-09-29 16:42:09.900636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:29.864 [2024-09-29 16:42:09.900691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:106072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.864 [2024-09-29 16:42:09.900731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:29.864 [2024-09-29 16:42:09.912580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:106080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.864 [2024-09-29 16:42:09.912617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:29.864 [2024-09-29 16:42:09.912669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.864 [2024-09-29 16:42:09.912704] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:29.864 [2024-09-29 16:42:09.912757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:106096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.864 [2024-09-29 16:42:09.912781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:29.864 [2024-09-29 16:42:09.912816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:106104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.864 [2024-09-29 16:42:09.912841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:29.864 [2024-09-29 16:42:09.912876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:106112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.864 [2024-09-29 16:42:09.912899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:29.864 [2024-09-29 16:42:09.912934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:106120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.864 [2024-09-29 16:42:09.912987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:29.864 [2024-09-29 16:42:09.913023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:106128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.864 [2024-09-29 16:42:09.913061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:29.864 [2024-09-29 16:42:09.913093] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:106136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.864 [2024-09-29 16:42:09.913116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:29.864 [2024-09-29 16:42:09.913148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:106144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.864 [2024-09-29 16:42:09.913171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:29.864 [2024-09-29 16:42:09.913203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:106152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.864 [2024-09-29 16:42:09.913225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:29.864 [2024-09-29 16:42:09.913257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:106160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.864 [2024-09-29 16:42:09.913280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:29.864 [2024-09-29 16:42:09.913312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:106168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.864 [2024-09-29 16:42:09.913335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:29.864 [2024-09-29 16:42:09.913367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:106176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.864 [2024-09-29 16:42:09.913390] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:29.865 [2024-09-29 16:42:09.913422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:106184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.865 [2024-09-29 16:42:09.913444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:29.865 [2024-09-29 16:42:09.913475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:106192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.865 [2024-09-29 16:42:09.913498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:29.865 [2024-09-29 16:42:09.913529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:106200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.865 [2024-09-29 16:42:09.913552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:29.865 [2024-09-29 16:42:09.913583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:106208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.865 [2024-09-29 16:42:09.913605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:29.865 [2024-09-29 16:42:09.913636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:106216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.865 [2024-09-29 16:42:09.913697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:29.865 [2024-09-29 16:42:09.913737] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:106224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.865 [2024-09-29 16:42:09.913760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:29.865 [2024-09-29 16:42:09.913793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:106232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.865 [2024-09-29 16:42:09.913816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:29.865 [2024-09-29 16:42:09.913849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:106240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.865 [2024-09-29 16:42:09.913872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:29.865 [2024-09-29 16:42:09.913905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:106248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.865 [2024-09-29 16:42:09.913928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:29.865 [2024-09-29 16:42:09.913970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:106256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.865 [2024-09-29 16:42:09.914009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:29.865 [2024-09-29 16:42:09.914041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:106264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.865 [2024-09-29 16:42:09.914072] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:29.865 [2024-09-29 16:42:09.914104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:106272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.865 [2024-09-29 16:42:09.914127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:29.865 [2024-09-29 16:42:09.914158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:106288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.865 [2024-09-29 16:42:09.914181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:29.865 [2024-09-29 16:42:09.914214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:106296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.865 [2024-09-29 16:42:09.914236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:29.865 [2024-09-29 16:42:09.914268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:106304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.865 [2024-09-29 16:42:09.914290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:29.865 [2024-09-29 16:42:09.914322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:106312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.865 [2024-09-29 16:42:09.914344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:29.865 [2024-09-29 16:42:09.914376] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:106320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.865 [2024-09-29 16:42:09.914398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:29.865 [2024-09-29 16:42:09.914434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:106328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.865 [2024-09-29 16:42:09.914457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:29.865 [2024-09-29 16:42:09.914488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:106336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.865 [2024-09-29 16:42:09.914510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:29.865 [2024-09-29 16:42:09.914542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:106344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.865 [2024-09-29 16:42:09.914563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:29.865 [2024-09-29 16:42:09.914595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:106352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.865 [2024-09-29 16:42:09.914617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:29.865 [2024-09-29 16:42:09.914649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:106360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.865 [2024-09-29 16:42:09.914700] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:29.865 [2024-09-29 16:42:09.914738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:106368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.865 [2024-09-29 16:42:09.914761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:29.865 [2024-09-29 16:42:09.914795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:106376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.865 [2024-09-29 16:42:09.914818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:29.865 [2024-09-29 16:42:09.914853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:106384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.865 [2024-09-29 16:42:09.914876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:29.865 [2024-09-29 16:42:09.915977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:106392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.865 [2024-09-29 16:42:09.916009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:29.865 [2024-09-29 16:42:09.916054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:106400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.865 [2024-09-29 16:42:09.916096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:29.865 [2024-09-29 16:42:09.916133] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:106408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.865 [2024-09-29 16:42:09.916158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:29.865 [2024-09-29 16:42:09.916193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:106280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.866 [2024-09-29 16:42:09.916217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:29.866 [2024-09-29 16:42:09.916273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:105520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.866 [2024-09-29 16:42:09.916298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:29.866 [2024-09-29 16:42:09.916331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:105528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.866 [2024-09-29 16:42:09.916355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:29.866 [2024-09-29 16:42:09.916403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:105536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.866 [2024-09-29 16:42:09.916425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:29.866 [2024-09-29 16:42:09.916457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:105544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.866 [2024-09-29 16:42:09.916494] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:29.866 [2024-09-29 16:42:09.916529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:105552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.866 [2024-09-29 16:42:09.916566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:29.866 [2024-09-29 16:42:09.916601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:105560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.866 [2024-09-29 16:42:09.916624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:29.866 [2024-09-29 16:42:09.916693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:105568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.866 [2024-09-29 16:42:09.916719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:29.866 [2024-09-29 16:42:09.916755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:105576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.866 [2024-09-29 16:42:09.916780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:29.866 [2024-09-29 16:42:09.916814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:105584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.866 [2024-09-29 16:42:09.916838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:29.866 [2024-09-29 16:42:09.916887] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.866 [2024-09-29 16:42:09.916912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:29.866 [2024-09-29 16:42:09.916970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:105600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.866 [2024-09-29 16:42:09.916995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:29.866 [2024-09-29 16:42:09.917042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:105608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.866 [2024-09-29 16:42:09.917065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:29.866 [2024-09-29 16:42:09.917102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:105616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.866 [2024-09-29 16:42:09.917125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:29.866 [2024-09-29 16:42:09.917156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:105624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.866 [2024-09-29 16:42:09.917178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:29.866 [2024-09-29 16:42:09.917226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:105632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.866 [2024-09-29 16:42:09.917250] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:29.866 [2024-09-29 16:42:09.917291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:105640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.866 [2024-09-29 16:42:09.917313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:29.866 [2024-09-29 16:42:09.917354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:105648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.866 [2024-09-29 16:42:09.917376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:29.866 [2024-09-29 16:42:09.917408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:105656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.866 [2024-09-29 16:42:09.917430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:29.866 [2024-09-29 16:42:09.917461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:105664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.866 [2024-09-29 16:42:09.917483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:29.866 [2024-09-29 16:42:09.917515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.866 [2024-09-29 16:42:09.917537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:29.866 [2024-09-29 16:42:09.917569] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:105680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.866 [2024-09-29 16:42:09.917595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:29.866 [2024-09-29 16:42:09.917626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:105688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.866 [2024-09-29 16:42:09.917659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:29.866 [2024-09-29 16:42:09.917726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:105696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.866 [2024-09-29 16:42:09.917752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:29.866 [2024-09-29 16:42:09.917785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:105704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.866 [2024-09-29 16:42:09.917808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:29.866 [2024-09-29 16:42:09.917841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:105712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.866 [2024-09-29 16:42:09.917869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:29.867 [2024-09-29 16:42:09.917904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:105720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.867 [2024-09-29 16:42:09.917928] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:29.867 [2024-09-29 16:42:09.917960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:105728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.867 [2024-09-29 16:42:09.917983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:29.867 [2024-09-29 16:42:09.918039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:105736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.867 [2024-09-29 16:42:09.918061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:29.867 [2024-09-29 16:42:09.918093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:105744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.867 [2024-09-29 16:42:09.918116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:29.867 [2024-09-29 16:42:09.918147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:105752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.867 [2024-09-29 16:42:09.918169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:29.867 [2024-09-29 16:42:09.918200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:105760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.867 [2024-09-29 16:42:09.918223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:29.867 [2024-09-29 16:42:09.918255] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:105768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.867 [2024-09-29 16:42:09.918276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:29.867 [2024-09-29 16:42:09.918307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:105776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.867 [2024-09-29 16:42:09.918329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:29.867 [2024-09-29 16:42:09.918360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:105784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.867 [2024-09-29 16:42:09.918383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:29.867 [2024-09-29 16:42:09.918414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:105792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.867 [2024-09-29 16:42:09.918436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:29.867 [2024-09-29 16:42:09.918467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:105800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.867 [2024-09-29 16:42:09.918490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:29.867 [2024-09-29 16:42:09.918522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:105808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.867 [2024-09-29 16:42:09.918565] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:29.867 [2024-09-29 16:42:09.918614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:105816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.867 [2024-09-29 16:42:09.918638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:29.867 [2024-09-29 16:42:09.918693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.867 [2024-09-29 16:42:09.918742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:29.867 [2024-09-29 16:42:09.918781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:105832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.867 [2024-09-29 16:42:09.918806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:29.867 [2024-09-29 16:42:09.918840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:105840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.867 [2024-09-29 16:42:09.918865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:29.867 [2024-09-29 16:42:09.918900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:105848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.867 [2024-09-29 16:42:09.918925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:29.867 [2024-09-29 16:42:09.918968] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:105856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.867 [2024-09-29 16:42:09.918992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:29.867 [2024-09-29 16:42:09.919584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:105864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.867 [2024-09-29 16:42:09.919614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:29.867 [2024-09-29 16:42:09.919689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:105872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.867 [2024-09-29 16:42:09.919717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:29.867 [2024-09-29 16:42:09.919754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.867 [2024-09-29 16:42:09.919779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:29.867 [2024-09-29 16:42:09.919813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:105888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.867 [2024-09-29 16:42:09.919853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:29.867 [2024-09-29 16:42:09.919888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:106416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.867 [2024-09-29 16:42:09.919912] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:29.867 [2024-09-29 16:42:09.919946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:106424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.867 [2024-09-29 16:42:09.919995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:29.867 [2024-09-29 16:42:09.920051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:106432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.867 [2024-09-29 16:42:09.920075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:29.867 [2024-09-29 16:42:09.920107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:106440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.867 [2024-09-29 16:42:09.920130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:29.867 [2024-09-29 16:42:09.920163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:106448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.867 [2024-09-29 16:42:09.920184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:29.867 [2024-09-29 16:42:09.920215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:106456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.867 [2024-09-29 16:42:09.920237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:29.867 [2024-09-29 16:42:09.920268] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:106464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.868 [2024-09-29 16:42:09.920291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:29.868 [2024-09-29 16:42:09.920322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:106472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.868 [2024-09-29 16:42:09.920344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:29.868 [2024-09-29 16:42:09.920375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:106480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.868 [2024-09-29 16:42:09.920397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:29.868 [2024-09-29 16:42:09.920429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:106488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.868 [2024-09-29 16:42:09.920451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:29.868 [2024-09-29 16:42:09.920482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:106496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.868 [2024-09-29 16:42:09.920504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:29.868 [2024-09-29 16:42:09.920537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:106504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.868 [2024-09-29 16:42:09.920559] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:29.868 [2024-09-29 16:42:09.920589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:106512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.868 [2024-09-29 16:42:09.920611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:29.868 [2024-09-29 16:42:09.920643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:106520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.868 [2024-09-29 16:42:09.920689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:29.868 [2024-09-29 16:42:09.920745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:106528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.868 [2024-09-29 16:42:09.920771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:29.868 [2024-09-29 16:42:09.920805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:106536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.868 [2024-09-29 16:42:09.920828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:29.868 [2024-09-29 16:42:09.920863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:105896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.868 [2024-09-29 16:42:09.920886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:29.868 [2024-09-29 16:42:09.920920] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:105904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.868 [2024-09-29 16:42:09.920944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:29.868 [2024-09-29 16:42:09.920992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:105912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.868 [2024-09-29 16:42:09.921015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:29.868 [2024-09-29 16:42:09.921063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:105920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.868 [2024-09-29 16:42:09.921086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:29.868 [2024-09-29 16:42:09.921118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:105928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.868 [2024-09-29 16:42:09.921140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:29.868 [2024-09-29 16:42:09.921170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:105936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.868 [2024-09-29 16:42:09.921193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:29.868 [2024-09-29 16:42:09.921224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:105944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.868 [2024-09-29 16:42:09.921246] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:29.868 [2024-09-29 16:42:09.921278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:105952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.868 [2024-09-29 16:42:09.921300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:29.868 [2024-09-29 16:42:09.921332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:105960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.868 [2024-09-29 16:42:09.921354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:29.868 [2024-09-29 16:42:09.921386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:105968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.868 [2024-09-29 16:42:09.921408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:29.868 [2024-09-29 16:42:09.921459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:105976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.868 [2024-09-29 16:42:09.921484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:29.868 [2024-09-29 16:42:09.921533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:105984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.868 [2024-09-29 16:42:09.921558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:29.868 [2024-09-29 16:42:09.921591] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:105992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.868 [2024-09-29 16:42:09.921614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:29.868 [2024-09-29 16:42:09.921646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:106000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.868 [2024-09-29 16:42:09.921679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:29.868 [2024-09-29 16:42:09.921716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:106008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.868 [2024-09-29 16:42:09.921740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:29.868 [2024-09-29 16:42:09.921788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:106016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.868 [2024-09-29 16:42:09.921812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:29.868 [2024-09-29 16:42:09.921846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.869 [2024-09-29 16:42:09.921869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:29.869 [2024-09-29 16:42:09.921903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:106032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.869 [2024-09-29 16:42:09.921926] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:29.869 [2024-09-29 16:42:09.921985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:106040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.869 [2024-09-29 16:42:09.922007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:29.869 [2024-09-29 16:42:09.922040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:106048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.869 [2024-09-29 16:42:09.922062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:29.869 [2024-09-29 16:42:09.922094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:106056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.869 [2024-09-29 16:42:09.922115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:29.869 [2024-09-29 16:42:09.922147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.869 [2024-09-29 16:42:09.922170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:29.869 [2024-09-29 16:42:09.922201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:106072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.869 [2024-09-29 16:42:09.922229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:29.869 [2024-09-29 16:42:09.922262] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:106080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.869 [2024-09-29 16:42:09.922285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:29.869 [2024-09-29 16:42:09.922317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:106088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.869 [2024-09-29 16:42:09.922339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:29.869 [2024-09-29 16:42:09.922371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:106096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.869 [2024-09-29 16:42:09.922394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:29.869 [2024-09-29 16:42:09.922426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:106104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.869 [2024-09-29 16:42:09.922449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:29.869 [2024-09-29 16:42:09.922481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:106112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.869 [2024-09-29 16:42:09.922503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:29.869 [2024-09-29 16:42:09.922535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:106120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.869 [2024-09-29 16:42:09.922557] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:29.869 [2024-09-29 16:42:09.922590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:106128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.869 [2024-09-29 16:42:09.922612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:29.869 [2024-09-29 16:42:09.922645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:106136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.869 [2024-09-29 16:42:09.922667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:29.869 [2024-09-29 16:42:09.922729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:106144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.869 [2024-09-29 16:42:09.922752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:29.869 [2024-09-29 16:42:09.922786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:106152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.869 [2024-09-29 16:42:09.922809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:29.869 [2024-09-29 16:42:09.922841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:106160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.869 [2024-09-29 16:42:09.922864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:29.869 [2024-09-29 16:42:09.922897] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:106168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.869 [2024-09-29 16:42:09.922924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:29.869 [2024-09-29 16:42:09.922959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:106176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.869 [2024-09-29 16:42:09.922997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:29.869 [2024-09-29 16:42:09.923030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:106184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.869 [2024-09-29 16:42:09.923053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:29.869 [2024-09-29 16:42:09.923084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:106192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.869 [2024-09-29 16:42:09.923107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:29.869 [2024-09-29 16:42:09.923139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:106200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.869 [2024-09-29 16:42:09.923161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:29.869 [2024-09-29 16:42:09.923208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:106208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.869 [2024-09-29 16:42:09.923231] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:29.869 [2024-09-29 16:42:09.923265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:106216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.869 [2024-09-29 16:42:09.923288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:29.869 [2024-09-29 16:42:09.923321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:106224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.869 [2024-09-29 16:42:09.923344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:29.869 [2024-09-29 16:42:09.923376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:106232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.869 [2024-09-29 16:42:09.923399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:29.869 [2024-09-29 16:42:09.923432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:106240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.869 [2024-09-29 16:42:09.923455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:29.869 [2024-09-29 16:42:09.923489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:106248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.869 [2024-09-29 16:42:09.923512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:29.869 [2024-09-29 16:42:09.923561] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:106256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.869 [2024-09-29 16:42:09.923584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:29.869 [2024-09-29 16:42:09.923616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:106264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.869 [2024-09-29 16:42:09.923642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:29.869 [2024-09-29 16:42:09.923697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:106272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.870 [2024-09-29 16:42:09.923723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:29.870 [2024-09-29 16:42:09.923757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:106288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.870 [2024-09-29 16:42:09.923781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:29.870 [2024-09-29 16:42:09.923814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:106296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.870 [2024-09-29 16:42:09.923837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:29.870 [2024-09-29 16:42:09.923870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:106304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.870 [2024-09-29 16:42:09.923893] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:29.870 [2024-09-29 16:42:09.923926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.870 [2024-09-29 16:42:09.923948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:29.870 [2024-09-29 16:42:09.923981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:106320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.870 [2024-09-29 16:42:09.924020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:29.870 [2024-09-29 16:42:09.924053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:106328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.870 [2024-09-29 16:42:09.924075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:29.870 [2024-09-29 16:42:09.924108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:106336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.870 [2024-09-29 16:42:09.924130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:29.870 [2024-09-29 16:42:09.924161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:106344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.870 [2024-09-29 16:42:09.924183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:29.870 [2024-09-29 16:42:09.924215] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:106352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.870 [2024-09-29 16:42:09.924237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:29.870 [2024-09-29 16:42:09.924269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:106360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.870 [2024-09-29 16:42:09.924291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:29.870 [2024-09-29 16:42:09.924322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:106368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.870 [2024-09-29 16:42:09.924345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:29.870 [2024-09-29 16:42:09.924382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:106376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.870 [2024-09-29 16:42:09.924405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:29.870 [2024-09-29 16:42:09.925439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:106384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.870 [2024-09-29 16:42:09.925470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:29.870 [2024-09-29 16:42:09.925526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:106392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.870 [2024-09-29 16:42:09.925550] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:29.870 [2024-09-29 16:42:09.925600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:106400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.870 [2024-09-29 16:42:09.925624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:29.870 [2024-09-29 16:42:09.925660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:106408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.870 [2024-09-29 16:42:09.925693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:29.870 [2024-09-29 16:42:09.925730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:106280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.870 [2024-09-29 16:42:09.925754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:29.870 [2024-09-29 16:42:09.925803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:105520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.870 [2024-09-29 16:42:09.925829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:29.870 [2024-09-29 16:42:09.925865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:105528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.870 [2024-09-29 16:42:09.925905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:29.870 [2024-09-29 16:42:09.925940] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:105536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.870 [2024-09-29 16:42:09.925964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:29.870 [2024-09-29 16:42:09.925998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:105544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.870 [2024-09-29 16:42:09.926022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:29.870 [2024-09-29 16:42:09.926056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:105552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.870 [2024-09-29 16:42:09.926094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:29.870 [2024-09-29 16:42:09.926128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:105560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.870 [2024-09-29 16:42:09.926151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:29.870 [2024-09-29 16:42:09.926203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:105568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.870 [2024-09-29 16:42:09.926227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:29.870 [2024-09-29 16:42:09.926277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:105576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.870 [2024-09-29 16:42:09.926302] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:29.870 [2024-09-29 16:42:09.926336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:105584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.870 [2024-09-29 16:42:09.926360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:29.870 [2024-09-29 16:42:09.926395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:105592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.870 [2024-09-29 16:42:09.926418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:29.870 [2024-09-29 16:42:09.926453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:105600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.870 [2024-09-29 16:42:09.926477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:29.870 [2024-09-29 16:42:09.926511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:105608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.870 [2024-09-29 16:42:09.926535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:29.870 [2024-09-29 16:42:09.926588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:105616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.870 [2024-09-29 16:42:09.926611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:29.870 [2024-09-29 16:42:09.926660] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:105624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.871 [2024-09-29 16:42:09.926708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:29.871 [2024-09-29 16:42:09.926759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:105632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.871 [2024-09-29 16:42:09.926782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:29.871 [2024-09-29 16:42:09.926816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:105640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.871 [2024-09-29 16:42:09.926838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:29.871 [2024-09-29 16:42:09.926871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:105648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.871 [2024-09-29 16:42:09.926894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:29.871 [2024-09-29 16:42:09.926926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:105656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.871 [2024-09-29 16:42:09.926949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:29.871 [2024-09-29 16:42:09.926995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.871 [2024-09-29 16:42:09.927024] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:29.871 [2024-09-29 16:42:09.927057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:105672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.871 [2024-09-29 16:42:09.927080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:29.871 [2024-09-29 16:42:09.927112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:105680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.871 [2024-09-29 16:42:09.927134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:29.871 [2024-09-29 16:42:09.927165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:105688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.871 [2024-09-29 16:42:09.927187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:29.871 [2024-09-29 16:42:09.927219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:105696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.871 [2024-09-29 16:42:09.927242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:29.871 [2024-09-29 16:42:09.927274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:105704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.871 [2024-09-29 16:42:09.927296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:29.871 [2024-09-29 16:42:09.927328] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:105712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.871 [2024-09-29 16:42:09.927350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:29.871 [2024-09-29 16:42:09.927382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:105720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.871 [2024-09-29 16:42:09.927405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:29.871 [2024-09-29 16:42:09.927436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:105728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.871 [2024-09-29 16:42:09.927459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:29.871 [2024-09-29 16:42:09.927490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:105736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.871 [2024-09-29 16:42:09.927513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:29.871 [2024-09-29 16:42:09.927545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.871 [2024-09-29 16:42:09.927567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:29.871 [2024-09-29 16:42:09.927599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:105752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.871 [2024-09-29 16:42:09.927621] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:29.871 [2024-09-29 16:42:09.927668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:105760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.871 [2024-09-29 16:42:09.927713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:29.871 [2024-09-29 16:42:09.927749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:105768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.871 [2024-09-29 16:42:09.927772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:29.871 [2024-09-29 16:42:09.927804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:105776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.871 [2024-09-29 16:42:09.927827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:29.871 [2024-09-29 16:42:09.927860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:105784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.871 [2024-09-29 16:42:09.927882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:29.871 [2024-09-29 16:42:09.927915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:105792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.871 [2024-09-29 16:42:09.927937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:29.871 [2024-09-29 16:42:09.927970] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:105800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.871 [2024-09-29 16:42:09.927993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:29.871 [2024-09-29 16:42:09.928041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:105808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.871 [2024-09-29 16:42:09.928063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:29.871 [2024-09-29 16:42:09.928094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:105816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.871 [2024-09-29 16:42:09.928117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:29.871 [2024-09-29 16:42:09.928149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:105824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.871 [2024-09-29 16:42:09.928188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:29.871 [2024-09-29 16:42:09.928221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:105832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.871 [2024-09-29 16:42:09.928260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:29.871 [2024-09-29 16:42:09.928294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.871 [2024-09-29 16:42:09.928320] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:29.871 [2024-09-29 16:42:09.928362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.871 [2024-09-29 16:42:09.928392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:29.871 [2024-09-29 16:42:09.928708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:105856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.871 [2024-09-29 16:42:09.928744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:29.871 [2024-09-29 16:42:09.928821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:105864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.872 [2024-09-29 16:42:09.928849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:29.872 [2024-09-29 16:42:09.928889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:105872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.872 [2024-09-29 16:42:09.928914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:29.872 [2024-09-29 16:42:09.928953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:105880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.872 [2024-09-29 16:42:09.928995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:29.872 [2024-09-29 16:42:09.929034] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:105888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.872 [2024-09-29 16:42:09.929058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:29.872 [2024-09-29 16:42:09.929095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:106416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.872 [2024-09-29 16:42:09.929119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:29.872 [2024-09-29 16:42:09.929157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:106424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.872 [2024-09-29 16:42:09.929181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:29.872 [2024-09-29 16:42:09.929218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:106432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.872 [2024-09-29 16:42:09.929241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:29.872 [2024-09-29 16:42:09.929278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.872 [2024-09-29 16:42:09.929302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:29.872 [2024-09-29 16:42:09.929355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:106448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.872 [2024-09-29 16:42:09.929393] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:29.872 [2024-09-29 16:42:09.929428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:106456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.872 [2024-09-29 16:42:09.929451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:29.872 [2024-09-29 16:42:09.929485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:106464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.872 [2024-09-29 16:42:09.929508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:29.872 [2024-09-29 16:42:09.929542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:106472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.872 [2024-09-29 16:42:09.929564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:29.872 [2024-09-29 16:42:09.929604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:106480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.872 [2024-09-29 16:42:09.929628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:29.872 [2024-09-29 16:42:09.929663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:106488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.872 [2024-09-29 16:42:09.929710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:29.872 [2024-09-29 16:42:09.929749] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:106496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.872 [2024-09-29 16:42:09.929773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:29.872 [2024-09-29 16:42:09.929809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:106504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.872 [2024-09-29 16:42:09.929832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:29.872 [2024-09-29 16:42:09.929867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:106512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.872 [2024-09-29 16:42:09.929890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:29.872 [2024-09-29 16:42:09.929925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:106520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.872 [2024-09-29 16:42:09.929948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:29.872 [2024-09-29 16:42:09.929999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:106528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.872 [2024-09-29 16:42:09.930022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:29.872 [2024-09-29 16:42:09.930057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:106536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.872 [2024-09-29 16:42:09.930079] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:29.872 [2024-09-29 16:42:09.930114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:105896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.872 [2024-09-29 16:42:09.930137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:29.872 [2024-09-29 16:42:09.930170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:105904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.872 [2024-09-29 16:42:09.930193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:29.872 [2024-09-29 16:42:09.930227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:105912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.872 [2024-09-29 16:42:09.930249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:29.872 [2024-09-29 16:42:09.930284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:105920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.872 [2024-09-29 16:42:09.930307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:29.872 [2024-09-29 16:42:09.930349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.872 [2024-09-29 16:42:09.930372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:29.872 [2024-09-29 16:42:09.930407] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:105936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.872 [2024-09-29 16:42:09.930430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:29.873 [2024-09-29 16:42:09.930464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.873 [2024-09-29 16:42:09.930487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:29.873 [2024-09-29 16:42:09.930521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:105952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.873 [2024-09-29 16:42:09.930543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:29.873 [2024-09-29 16:42:09.930577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:105960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.873 [2024-09-29 16:42:09.930600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:29.873 [2024-09-29 16:42:09.930634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:105968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.873 [2024-09-29 16:42:09.930657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:29.873 [2024-09-29 16:42:09.930717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:105976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.873 [2024-09-29 16:42:09.930742] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:29.873 [2024-09-29 16:42:09.930778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:105984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.873 [2024-09-29 16:42:09.930801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:29.873 [2024-09-29 16:42:09.930837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:105992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.873 [2024-09-29 16:42:09.930860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:29.873 [2024-09-29 16:42:09.930896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:106000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.873 [2024-09-29 16:42:09.930918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:29.873 [2024-09-29 16:42:09.930955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:106008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.873 [2024-09-29 16:42:09.930978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:29.873 [2024-09-29 16:42:09.931042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:106016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.873 [2024-09-29 16:42:09.931065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:29.873 [2024-09-29 16:42:09.931099] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:106024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.873 [2024-09-29 16:42:09.931126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:29.873 [2024-09-29 16:42:09.931162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.873 [2024-09-29 16:42:09.931184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:29.873 [2024-09-29 16:42:09.931219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:106040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.873 [2024-09-29 16:42:09.931241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:29.873 [2024-09-29 16:42:09.931275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:106048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.873 [2024-09-29 16:42:09.931297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:29.873 [2024-09-29 16:42:09.931332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:106056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.873 [2024-09-29 16:42:09.931355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:29.873 [2024-09-29 16:42:09.931389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:106064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.873 [2024-09-29 16:42:09.931411] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:29.873 [2024-09-29 16:42:09.931445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.873 [2024-09-29 16:42:09.931468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:29.873 [2024-09-29 16:42:09.931503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:106080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.873 [2024-09-29 16:42:09.931525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:29.873 [2024-09-29 16:42:09.931559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:106088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.873 [2024-09-29 16:42:09.931582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:29.873 [2024-09-29 16:42:09.931616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:106096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.873 [2024-09-29 16:42:09.931639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:29.873 [2024-09-29 16:42:09.931681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:106104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.873 [2024-09-29 16:42:09.931721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:29.873 [2024-09-29 16:42:09.931757] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:106112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.873 [2024-09-29 16:42:09.931781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:29.873 [2024-09-29 16:42:09.931817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:106120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.873 [2024-09-29 16:42:09.931845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:29.873 [2024-09-29 16:42:09.931881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:106128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.873 [2024-09-29 16:42:09.931904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:29.873 [2024-09-29 16:42:09.931940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:106136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.873 [2024-09-29 16:42:09.931963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:29.873 [2024-09-29 16:42:09.932013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:106144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.873 [2024-09-29 16:42:09.932036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:29.873 [2024-09-29 16:42:09.932071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:106152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.873 [2024-09-29 16:42:09.932093] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:29.873 [2024-09-29 16:42:09.932128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:106160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.873 [2024-09-29 16:42:09.932150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:29.873 [2024-09-29 16:42:09.932184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:106168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.873 [2024-09-29 16:42:09.932206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:29.873 [2024-09-29 16:42:09.932241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:106176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.873 [2024-09-29 16:42:09.932263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:29.873 [2024-09-29 16:42:09.932298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:106184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.873 [2024-09-29 16:42:09.932320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:29.874 [2024-09-29 16:42:09.932355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:106192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.874 [2024-09-29 16:42:09.932377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:29.874 [2024-09-29 16:42:09.932411] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:106200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.874 [2024-09-29 16:42:09.932434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:29.874 [2024-09-29 16:42:09.932468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:106208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.874 [2024-09-29 16:42:09.932490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:29.874 [2024-09-29 16:42:09.932525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:106216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.874 [2024-09-29 16:42:09.932552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:29.874 [2024-09-29 16:42:09.932587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:106224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.874 [2024-09-29 16:42:09.932609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:29.874 [2024-09-29 16:42:09.932643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:106232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.874 [2024-09-29 16:42:09.932690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:29.874 [2024-09-29 16:42:09.932751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:106240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.874 [2024-09-29 16:42:09.932776] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:29.874 [2024-09-29 16:42:09.932812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:106248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.874 [2024-09-29 16:42:09.932836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:29.874 [2024-09-29 16:42:09.932873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:106256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.874 [2024-09-29 16:42:09.932897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:29.874 [2024-09-29 16:42:09.932934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:106264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.874 [2024-09-29 16:42:09.932958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:29.874 [2024-09-29 16:42:09.933009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:106272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.874 [2024-09-29 16:42:09.933048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:29.874 [2024-09-29 16:42:09.933083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:106288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.874 [2024-09-29 16:42:09.933106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:29.874 [2024-09-29 16:42:09.933141] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:106296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.874 [2024-09-29 16:42:09.933163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:29.874 [2024-09-29 16:42:09.933197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:106304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.874 [2024-09-29 16:42:09.933220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:29.874 [2024-09-29 16:42:09.933254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:106312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.874 [2024-09-29 16:42:09.933277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:29.874 [2024-09-29 16:42:09.933311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:106320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.874 [2024-09-29 16:42:09.933334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:29.874 [2024-09-29 16:42:09.933372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:106328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.874 [2024-09-29 16:42:09.933396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:29.874 [2024-09-29 16:42:09.933431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:106336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.874 [2024-09-29 16:42:09.933454] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:29.874 [2024-09-29 16:42:09.933490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:106344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.874 [2024-09-29 16:42:09.933513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:29.874 [2024-09-29 16:42:09.933548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:106352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.874 [2024-09-29 16:42:09.933571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:29.874 [2024-09-29 16:42:09.933605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:106360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.874 [2024-09-29 16:42:09.933627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:29.874 [2024-09-29 16:42:09.933663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:106368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.874 [2024-09-29 16:42:09.933716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:29.874 [2024-09-29 16:42:09.933948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:106376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.874 [2024-09-29 16:42:09.933990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:29.874 5944.38 IOPS, 23.22 MiB/s 5594.71 IOPS, 21.85 MiB/s 
5283.89 IOPS, 20.64 MiB/s 5005.79 IOPS, 19.55 MiB/s 4995.60 IOPS, 19.51 MiB/s 5055.86 IOPS, 19.75 MiB/s 5120.91 IOPS, 20.00 MiB/s 5272.00 IOPS, 20.59 MiB/s 5400.38 IOPS, 21.10 MiB/s 5514.08 IOPS, 21.54 MiB/s 5551.81 IOPS, 21.69 MiB/s 5574.67 IOPS, 21.78 MiB/s 5591.07 IOPS, 21.84 MiB/s 5642.21 IOPS, 22.04 MiB/s 5731.80 IOPS, 22.39 MiB/s 5811.16 IOPS, 22.70 MiB/s [2024-09-29 16:42:26.463588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:46936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.874 [2024-09-29 16:42:26.463718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:29.874 [2024-09-29 16:42:26.463788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:46952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.874 [2024-09-29 16:42:26.463816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:29.874 [2024-09-29 16:42:26.463855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:46968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.874 [2024-09-29 16:42:26.463898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:29.874 [2024-09-29 16:42:26.463939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:46984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.874 [2024-09-29 16:42:26.463974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:29.874 [2024-09-29 16:42:26.464030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:47000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.874 [2024-09-29 
16:42:26.464062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:29.874 [2024-09-29 16:42:26.464098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:47016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.874 [2024-09-29 16:42:26.464123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:29.874 [2024-09-29 16:42:26.464158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:47032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.875 [2024-09-29 16:42:26.464182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:29.875 [2024-09-29 16:42:26.464218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:47048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.875 [2024-09-29 16:42:26.464258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:29.875 [2024-09-29 16:42:26.464297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:47064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.875 [2024-09-29 16:42:26.464328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:29.875 [2024-09-29 16:42:26.464367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:47080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.875 [2024-09-29 16:42:26.464392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:29.875 [2024-09-29 16:42:26.464430] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:47096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.875 [2024-09-29 16:42:26.464460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:29.875 [2024-09-29 16:42:26.464499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:47112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.875 [2024-09-29 16:42:26.464524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:29.875 [2024-09-29 16:42:26.464561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.875 [2024-09-29 16:42:26.464586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:29.875 [2024-09-29 16:42:26.464624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:47144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.875 [2024-09-29 16:42:26.464649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:29.875 [2024-09-29 16:42:26.464697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:47160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.875 [2024-09-29 16:42:26.464725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:29.875 [2024-09-29 16:42:26.464767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:47176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.875 [2024-09-29 16:42:26.464801] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:29.875 [2024-09-29 16:42:26.464841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:47192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.875 [2024-09-29 16:42:26.464866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:29.875 [2024-09-29 16:42:26.464902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:47208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.875 [2024-09-29 16:42:26.464928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:29.875 [2024-09-29 16:42:26.464979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:47224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.875 [2024-09-29 16:42:26.465013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:29.875 [2024-09-29 16:42:26.465049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:47240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.875 [2024-09-29 16:42:26.465082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:29.875 [2024-09-29 16:42:26.465118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:47256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.875 [2024-09-29 16:42:26.465142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:29.875 [2024-09-29 16:42:26.465177] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:47272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.875 [2024-09-29 16:42:26.465201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:29.875 [2024-09-29 16:42:26.465236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:47288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.875 [2024-09-29 16:42:26.465261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:29.875 [2024-09-29 16:42:26.465298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:47304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.875 [2024-09-29 16:42:26.465322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:29.875 [2024-09-29 16:42:26.465358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:47320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.875 [2024-09-29 16:42:26.465382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:29.875 [2024-09-29 16:42:26.465419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:47336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.875 [2024-09-29 16:42:26.465443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:29.875 [2024-09-29 16:42:26.465478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:47352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.875 [2024-09-29 16:42:26.465503] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:29.875 [2024-09-29 16:42:26.465539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:47368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.875 [2024-09-29 16:42:26.465564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:29.875 [2024-09-29 16:42:26.465607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:47384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.875 [2024-09-29 16:42:26.465633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:29.875 [2024-09-29 16:42:26.465695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:47400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.875 [2024-09-29 16:42:26.465723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:29.875 [2024-09-29 16:42:26.465768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:47416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.875 [2024-09-29 16:42:26.465793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:29.875 [2024-09-29 16:42:26.465830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:47432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.875 [2024-09-29 16:42:26.465855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:29.875 [2024-09-29 16:42:26.465892] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:47448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.875 [2024-09-29 16:42:26.465916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:29.875 [2024-09-29 16:42:26.465953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:47464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.875 [2024-09-29 16:42:26.465993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:29.875 [2024-09-29 16:42:26.466037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:46648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.875 [2024-09-29 16:42:26.466062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:29.875 [2024-09-29 16:42:26.466098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:46680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.875 [2024-09-29 16:42:26.466122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:29.875 [2024-09-29 16:42:26.466158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:46712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.876 [2024-09-29 16:42:26.466182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:29.876 [2024-09-29 16:42:26.466218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:46736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.876 [2024-09-29 16:42:26.466243] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:29.876 [2024-09-29 16:42:26.466278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:46768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.876 [2024-09-29 16:42:26.466303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:29.876 [2024-09-29 16:42:26.466339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:46808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.876 [2024-09-29 16:42:26.466363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:29.876 [2024-09-29 16:42:26.466404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:46840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.876 [2024-09-29 16:42:26.466429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:29.876 [2024-09-29 16:42:26.466466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:47472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.876 [2024-09-29 16:42:26.466491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:29.876 [2024-09-29 16:42:26.466527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:47488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.876 [2024-09-29 16:42:26.466551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:29.876 [2024-09-29 16:42:26.466587] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:47504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.876 [2024-09-29 16:42:26.466612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:29.876 [2024-09-29 16:42:26.466649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:47520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.876 [2024-09-29 16:42:26.466681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:29.876 [2024-09-29 16:42:26.466749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:46872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.876 [2024-09-29 16:42:26.466774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:29.876 [2024-09-29 16:42:26.466811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:46904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.876 [2024-09-29 16:42:26.466836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:29.876 [2024-09-29 16:42:26.466873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:47536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.876 [2024-09-29 16:42:26.466898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:29.876 [2024-09-29 16:42:26.466934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:47552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.876 [2024-09-29 16:42:26.466960] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:29.876 [2024-09-29 16:42:26.467019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:46656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.876 [2024-09-29 16:42:26.467044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:29.876 [2024-09-29 16:42:26.467080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:46688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.876 [2024-09-29 16:42:26.467104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:29.876 [2024-09-29 16:42:26.467140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:46720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.876 [2024-09-29 16:42:26.467164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:29.876 [2024-09-29 16:42:26.470506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:46760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.876 [2024-09-29 16:42:26.470548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:29.876 [2024-09-29 16:42:26.470596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:46784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.876 [2024-09-29 16:42:26.470622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:29.876 [2024-09-29 16:42:26.470660] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:46816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.876 [2024-09-29 16:42:26.470713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:29.876 [2024-09-29 16:42:26.470760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:46848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.876 [2024-09-29 16:42:26.470786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:29.876 [2024-09-29 16:42:26.470823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:47560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.876 [2024-09-29 16:42:26.470848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:29.876 [2024-09-29 16:42:26.470885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:47576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.876 [2024-09-29 16:42:26.470910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:29.876 [2024-09-29 16:42:26.470951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:47592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.876 [2024-09-29 16:42:26.470978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:29.876 [2024-09-29 16:42:26.471016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:47608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.876 [2024-09-29 16:42:26.471041] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:29.876 [2024-09-29 16:42:26.471088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:46880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.876 [2024-09-29 16:42:26.471115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:29.876 [2024-09-29 16:42:26.471151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:46912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.876 [2024-09-29 16:42:26.471176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:29.876 [2024-09-29 16:42:26.471212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:47624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.876 [2024-09-29 16:42:26.471252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:29.876 [2024-09-29 16:42:26.471289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:47640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.876 [2024-09-29 16:42:26.471313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:29.876 [2024-09-29 16:42:26.471348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:46952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.876 [2024-09-29 16:42:26.471378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:29.876 [2024-09-29 16:42:26.471413] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:46984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.876 [2024-09-29 16:42:26.471437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:29.876 [2024-09-29 16:42:26.471472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:47016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.877 [2024-09-29 16:42:26.471510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:29.877 [2024-09-29 16:42:26.471549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:47048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.877 [2024-09-29 16:42:26.471573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:29.877 [2024-09-29 16:42:26.471607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:47080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.877 [2024-09-29 16:42:26.471631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:29.877 [2024-09-29 16:42:26.471665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:47112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.877 [2024-09-29 16:42:26.471715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:29.877 [2024-09-29 16:42:26.471756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:47144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.877 [2024-09-29 16:42:26.471782] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:29.877 [2024-09-29 16:42:26.471817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:47176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.877 [2024-09-29 16:42:26.471841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:29.877 [2024-09-29 16:42:26.471878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:47208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.877 [2024-09-29 16:42:26.471903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:29.877 [2024-09-29 16:42:26.473795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:47240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.877 [2024-09-29 16:42:26.473828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:29.877 [2024-09-29 16:42:26.473871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:47272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.877 [2024-09-29 16:42:26.473898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:29.877 [2024-09-29 16:42:26.473945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:47304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.877 [2024-09-29 16:42:26.473970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:29.877 [2024-09-29 16:42:26.474015] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:47336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.877 [2024-09-29 16:42:26.474045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:29.877 [2024-09-29 16:42:26.474099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:47368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.877 [2024-09-29 16:42:26.474127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:29.877 [2024-09-29 16:42:26.474165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:47400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.877 [2024-09-29 16:42:26.474190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:29.877 [2024-09-29 16:42:26.474226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:47432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.877 [2024-09-29 16:42:26.474251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:29.877 [2024-09-29 16:42:26.474288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:47464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.877 [2024-09-29 16:42:26.474314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:29.877 [2024-09-29 16:42:26.474350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:46680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.877 [2024-09-29 16:42:26.474375] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:29.877 [2024-09-29 16:42:26.474412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:46736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.877 [2024-09-29 16:42:26.474452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:29.877 [2024-09-29 16:42:26.474489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:46808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.877 [2024-09-29 16:42:26.474513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:29.877 [2024-09-29 16:42:26.474547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:47472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.877 [2024-09-29 16:42:26.474571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:29.877 [2024-09-29 16:42:26.474606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:47504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.877 [2024-09-29 16:42:26.474631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:29.877 [2024-09-29 16:42:26.474694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:46872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.877 [2024-09-29 16:42:26.474722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:29.877 [2024-09-29 16:42:26.474765] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:47536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.877 [2024-09-29 16:42:26.474790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:29.877 [2024-09-29 16:42:26.474827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:46656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.877 [2024-09-29 16:42:26.474853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:29.877 [2024-09-29 16:42:26.474894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:46720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.877 [2024-09-29 16:42:26.474920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:29.877 [2024-09-29 16:42:26.474966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:46960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.877 [2024-09-29 16:42:26.474994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:29.877 [2024-09-29 16:42:26.475030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:46992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.878 [2024-09-29 16:42:26.475056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:29.878 [2024-09-29 16:42:26.475091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:47024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.878 [2024-09-29 16:42:26.475116] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:29.878 [2024-09-29 16:42:26.475152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:47056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.878 [2024-09-29 16:42:26.475179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:29.878 [2024-09-29 16:42:26.475215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:47088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.878 [2024-09-29 16:42:26.475241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:29.878 [2024-09-29 16:42:26.475278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:47120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.878 [2024-09-29 16:42:26.475303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:29.878 [2024-09-29 16:42:26.475341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:47152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.878 [2024-09-29 16:42:26.475367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:29.878 [2024-09-29 16:42:26.475403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:47184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.878 [2024-09-29 16:42:26.475429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:29.878 [2024-09-29 16:42:26.475465] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:47216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.878 [2024-09-29 16:42:26.475491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:29.878 [2024-09-29 16:42:26.475543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:47248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.878 [2024-09-29 16:42:26.475568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:29.878 [2024-09-29 16:42:26.475603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:47280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.878 [2024-09-29 16:42:26.475627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:29.878 [2024-09-29 16:42:26.475688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:47312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.878 [2024-09-29 16:42:26.475732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:29.878 [2024-09-29 16:42:26.475771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:47344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.878 [2024-09-29 16:42:26.475796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:29.878 [2024-09-29 16:42:26.475832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.878 [2024-09-29 16:42:26.475857] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:29.878 [2024-09-29 16:42:26.475893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:47408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.878 [2024-09-29 16:42:26.475919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:29.878 [2024-09-29 16:42:26.475966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:47440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.878 [2024-09-29 16:42:26.475992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:29.878 [2024-09-29 16:42:26.476815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:47480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.878 [2024-09-29 16:42:26.476848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:29.878 [2024-09-29 16:42:26.476891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:47512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.878 [2024-09-29 16:42:26.476918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:29.878 [2024-09-29 16:42:26.476956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:47544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.878 [2024-09-29 16:42:26.476990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:29.878 [2024-09-29 16:42:26.477027] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:47664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.878 [2024-09-29 16:42:26.477069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:29.878 [2024-09-29 16:42:26.477108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:47680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.878 [2024-09-29 16:42:26.477133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:29.878 [2024-09-29 16:42:26.477170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:47696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.878 [2024-09-29 16:42:26.477210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:29.878 [2024-09-29 16:42:26.477259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:47712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.878 [2024-09-29 16:42:26.477285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:29.878 [2024-09-29 16:42:26.477321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:47728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.878 [2024-09-29 16:42:26.477352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:29.878 [2024-09-29 16:42:26.477392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:47744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.878 [2024-09-29 16:42:26.477418] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:29.878 [2024-09-29 16:42:26.477454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:46784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.878 [2024-09-29 16:42:26.477479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:29.878 [2024-09-29 16:42:26.477516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:46848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.878 [2024-09-29 16:42:26.477541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:29.878 [2024-09-29 16:42:26.477589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:47576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.878 [2024-09-29 16:42:26.477615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:29.878 [2024-09-29 16:42:26.477651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:47608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.878 [2024-09-29 16:42:26.477694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:29.878 [2024-09-29 16:42:26.477733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:46912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.878 [2024-09-29 16:42:26.477759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:29.878 [2024-09-29 16:42:26.477796] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:47640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.878 [2024-09-29 16:42:26.477822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:29.879 [2024-09-29 16:42:26.477858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:46984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.879 [2024-09-29 16:42:26.477883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:29.879 [2024-09-29 16:42:26.477921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:47048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.879 [2024-09-29 16:42:26.477947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:29.879 [2024-09-29 16:42:26.477990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:47112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.879 [2024-09-29 16:42:26.478015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:29.879 [2024-09-29 16:42:26.478052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:47176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.879 [2024-09-29 16:42:26.478078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:29.879 [2024-09-29 16:42:26.478570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:47568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.879 [2024-09-29 16:42:26.478601] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:34:29.879 [2024-09-29 16:42:26.478680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:47600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.879 [2024-09-29 16:42:26.478710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:34:29.879 [2024-09-29 16:42:26.478751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:47632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.879 [2024-09-29 16:42:26.478777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:34:29.879 [2024-09-29 16:42:26.478813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:46936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.879 [2024-09-29 16:42:26.478837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:34:29.879 [2024-09-29 16:42:26.478873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:47000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.879 [2024-09-29 16:42:26.478899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:34:29.879 [2024-09-29 16:42:26.478935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:47064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.879 [2024-09-29 16:42:26.478990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:34:29.879 [2024-09-29 16:42:26.479029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:47128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.879 [2024-09-29 16:42:26.479053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:34:29.879 [2024-09-29 16:42:26.479106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:47192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.879 [2024-09-29 16:42:26.479137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:34:29.879 [2024-09-29 16:42:26.479175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:47768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.879 [2024-09-29 16:42:26.479201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:34:29.879 [2024-09-29 16:42:26.479238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:47784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.879 [2024-09-29 16:42:26.479263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:34:29.879 [2024-09-29 16:42:26.479304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:47800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.879 [2024-09-29 16:42:26.479331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:34:29.879 [2024-09-29 16:42:26.479369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:47224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.879 [2024-09-29 16:42:26.479395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:34:29.879 [2024-09-29 16:42:26.479444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:47288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.879 [2024-09-29 16:42:26.479472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:34:29.879 [2024-09-29 16:42:26.479515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:47352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.879 [2024-09-29 16:42:26.479541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:34:29.879 [2024-09-29 16:42:26.479577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:47416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.879 [2024-09-29 16:42:26.479603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:34:29.879 [2024-09-29 16:42:26.479640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:47272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.879 [2024-09-29 16:42:26.479697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:34:29.879 [2024-09-29 16:42:26.479739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:47336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.879 [2024-09-29 16:42:26.479766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:34:29.879 [2024-09-29 16:42:26.479802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.879 [2024-09-29 16:42:26.479828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:34:29.879 [2024-09-29 16:42:26.479864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:47464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.879 [2024-09-29 16:42:26.479890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:34:29.879 [2024-09-29 16:42:26.479927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:46736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.879 [2024-09-29 16:42:26.479952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:34:29.879 [2024-09-29 16:42:26.482145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:47472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.879 [2024-09-29 16:42:26.482180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:29.879 [2024-09-29 16:42:26.482226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:46872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.879 [2024-09-29 16:42:26.482253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:29.879 [2024-09-29 16:42:26.482291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:46656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.879 [2024-09-29 16:42:26.482318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:34:29.879 [2024-09-29 16:42:26.482355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:46960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.879 [2024-09-29 16:42:26.482381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:34:29.879 [2024-09-29 16:42:26.482419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:47024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.879 [2024-09-29 16:42:26.482445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:34:29.879 [2024-09-29 16:42:26.482480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:47088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.879 [2024-09-29 16:42:26.482511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:34:29.880 [2024-09-29 16:42:26.482549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:47152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.880 [2024-09-29 16:42:26.482576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:34:29.880 [2024-09-29 16:42:26.482613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:47216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.880 [2024-09-29 16:42:26.482640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:34:29.880 [2024-09-29 16:42:26.482695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:47280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.880 [2024-09-29 16:42:26.482722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:34:29.880 [2024-09-29 16:42:26.482759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:47344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.880 [2024-09-29 16:42:26.482785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:34:29.880 [2024-09-29 16:42:26.482821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:47408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.880 [2024-09-29 16:42:26.482845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:34:29.880 [2024-09-29 16:42:26.482883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:47488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.880 [2024-09-29 16:42:26.482909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:34:29.880 [2024-09-29 16:42:26.482946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:47552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.880 [2024-09-29 16:42:26.482981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:34:29.880 [2024-09-29 16:42:26.483029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:47512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.880 [2024-09-29 16:42:26.483055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:34:29.880 [2024-09-29 16:42:26.483091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:47664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.880 [2024-09-29 16:42:26.483116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:34:29.880 [2024-09-29 16:42:26.483152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:47696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.880 [2024-09-29 16:42:26.483187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:34:29.880 [2024-09-29 16:42:26.483223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:47728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.880 [2024-09-29 16:42:26.483249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:34:29.880 [2024-09-29 16:42:26.483284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:46784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.880 [2024-09-29 16:42:26.483317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:34:29.880 [2024-09-29 16:42:26.483355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:47576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.880 [2024-09-29 16:42:26.483381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:34:29.880 [2024-09-29 16:42:26.483417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:46912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.880 [2024-09-29 16:42:26.483443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:34:29.880 [2024-09-29 16:42:26.483480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:46984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.880 [2024-09-29 16:42:26.483505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:34:29.880 [2024-09-29 16:42:26.483543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:47112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.880 [2024-09-29 16:42:26.483568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:34:29.880 [2024-09-29 16:42:26.483604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:47600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.880 [2024-09-29 16:42:26.483629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:34:29.880 [2024-09-29 16:42:26.483682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:46936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.880 [2024-09-29 16:42:26.483709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:34:29.880 [2024-09-29 16:42:26.483745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.880 [2024-09-29 16:42:26.483773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:34:29.880 [2024-09-29 16:42:26.483809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:47192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.880 [2024-09-29 16:42:26.483834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:34:29.880 [2024-09-29 16:42:26.483870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:47784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.880 [2024-09-29 16:42:26.483895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:34:29.880 [2024-09-29 16:42:26.483932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:47224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.880 [2024-09-29 16:42:26.483967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:34:29.880 [2024-09-29 16:42:26.484003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:47352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.880 [2024-09-29 16:42:26.484029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:34:29.880 [2024-09-29 16:42:26.484077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:47272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.880 [2024-09-29 16:42:26.484102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:34:29.880 [2024-09-29 16:42:26.484154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:47400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.880 [2024-09-29 16:42:26.484180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:34:29.880 [2024-09-29 16:42:26.484217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:46736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.880 [2024-09-29 16:42:26.484243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:29.880 [2024-09-29 16:42:26.489703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:47824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.880 [2024-09-29 16:42:26.489740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:29.880 [2024-09-29 16:42:26.489787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:47840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.881 [2024-09-29 16:42:26.489820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:29.881 [2024-09-29 16:42:26.489862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:47856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.881 [2024-09-29 16:42:26.489888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:34:29.881 [2024-09-29 16:42:26.489934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:47872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.881 [2024-09-29 16:42:26.489959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:34:29.881 [2024-09-29 16:42:26.490005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:47888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.881 [2024-09-29 16:42:26.490030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:34:29.881 [2024-09-29 16:42:26.490067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:47904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.881 [2024-09-29 16:42:26.490093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:34:29.881 [2024-09-29 16:42:26.490130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:47920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.881 [2024-09-29 16:42:26.490155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:34:29.881 [2024-09-29 16:42:26.490192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:47936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.881 [2024-09-29 16:42:26.490217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:34:29.881 [2024-09-29 16:42:26.490253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:47952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.881 [2024-09-29 16:42:26.490278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:34:29.881 [2024-09-29 16:42:26.490314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:47968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.881 [2024-09-29 16:42:26.490345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:34:29.881 [2024-09-29 16:42:26.490391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:47984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.881 [2024-09-29 16:42:26.490418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:34:29.881 [2024-09-29 16:42:26.490456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:47656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.881 [2024-09-29 16:42:26.490481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:34:29.881 [2024-09-29 16:42:26.490517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:47688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.881 [2024-09-29 16:42:26.490542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:34:29.881 [2024-09-29 16:42:26.490578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:47720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.881 [2024-09-29 16:42:26.490603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:34:29.881 [2024-09-29 16:42:26.490640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:47752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.881 [2024-09-29 16:42:26.490664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:34:29.881 [2024-09-29 16:42:26.490722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:47592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.881 [2024-09-29 16:42:26.490747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:34:29.881 [2024-09-29 16:42:26.490784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:46872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.881 [2024-09-29 16:42:26.490809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:34:29.881 [2024-09-29 16:42:26.490845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:46960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.881 [2024-09-29 16:42:26.490892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:34:29.881 [2024-09-29 16:42:26.490932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:47088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.881 [2024-09-29 16:42:26.490958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:34:29.881 [2024-09-29 16:42:26.490999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:47216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.881 [2024-09-29 16:42:26.491024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:34:29.881 [2024-09-29 16:42:26.491061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:47344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.881 [2024-09-29 16:42:26.491087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:34:29.881 [2024-09-29 16:42:26.491123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:47488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.881 [2024-09-29 16:42:26.491149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:34:29.881 [2024-09-29 16:42:26.491186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:47512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.881 [2024-09-29 16:42:26.491216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:34:29.881 [2024-09-29 16:42:26.491253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.881 [2024-09-29 16:42:26.491279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:34:29.881 [2024-09-29 16:42:26.491316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:46784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.881 [2024-09-29 16:42:26.491341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:34:29.881 [2024-09-29 16:42:26.491377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:46912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.881 [2024-09-29 16:42:26.491402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:34:29.881 [2024-09-29 16:42:26.491437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:47112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.881 [2024-09-29 16:42:26.491463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:34:29.881 [2024-09-29 16:42:26.491515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:46936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.881 [2024-09-29 16:42:26.491540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:34:29.881 [2024-09-29 16:42:26.491575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:47192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.881 [2024-09-29 16:42:26.491599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:34:29.881 [2024-09-29 16:42:26.491652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:47224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.881 [2024-09-29 16:42:26.491693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:34:29.881 [2024-09-29 16:42:26.491732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:47272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.881 [2024-09-29 16:42:26.491758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:34:29.881 [2024-09-29 16:42:26.491794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:46736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.881 [2024-09-29 16:42:26.491818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:34:29.881 [2024-09-29 16:42:26.491862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:46952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.882 [2024-09-29 16:42:26.491887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:29.882 [2024-09-29 16:42:26.491923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:47080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.882 [2024-09-29 16:42:26.491948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:29.882 [2024-09-29 16:42:26.491994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:47208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.882 [2024-09-29 16:42:26.492024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:34:29.882 [2024-09-29 16:42:26.492061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:48008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.882 [2024-09-29 16:42:26.492087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:34:29.882 [2024-09-29 16:42:26.492123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:48024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.882 [2024-09-29 16:42:26.492148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:34:29.882 [2024-09-29 16:42:26.492193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:47760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.882 [2024-09-29 16:42:26.492218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:34:29.882 [2024-09-29 16:42:26.492253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:47792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.882 [2024-09-29 16:42:26.492279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:34:29.882 [2024-09-29 16:42:26.492314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:47240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.882 [2024-09-29 16:42:26.492340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:34:29.882 [2024-09-29 16:42:26.492376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:47368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.882 [2024-09-29 16:42:26.492401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:34:29.882 [2024-09-29 16:42:26.492436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:47504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.882 [2024-09-29 16:42:26.492462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:34:29.882 [2024-09-29 16:42:26.492514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:48040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.882 [2024-09-29 16:42:26.492540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:34:29.882 [2024-09-29 16:42:26.492577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:48056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.882 [2024-09-29 16:42:26.492618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:34:29.882 [2024-09-29 16:42:26.492656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:48072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.882 [2024-09-29 16:42:26.492693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:34:29.882 [2024-09-29 16:42:26.492738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:48088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.882 [2024-09-29 16:42:26.492763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:34:29.882 [2024-09-29 16:42:26.492800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:48104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.882 [2024-09-29 16:42:26.492827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:34:29.882 [2024-09-29 16:42:26.493978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:48120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.882 [2024-09-29 16:42:26.494011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:34:29.882 [2024-09-29 16:42:26.494054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:47712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.882 [2024-09-29 16:42:26.494096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:34:29.882 [2024-09-29 16:42:26.494131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:47608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.882 [2024-09-29 16:42:26.494172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:34:29.882 [2024-09-29 16:42:26.494224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:47048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.882 [2024-09-29 16:42:26.494249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:34:29.882 [2024-09-29 16:42:26.494286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:48128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.882 [2024-09-29 16:42:26.494311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:34:29.882 [2024-09-29 16:42:26.494347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:48144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.882 [2024-09-29 16:42:26.494373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:34:29.882 [2024-09-29 16:42:26.494409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:48160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.882 [2024-09-29 16:42:26.494435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:34:29.882 [2024-09-29 16:42:26.494470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:48176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.882 [2024-09-29 16:42:26.494496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:34:29.882 [2024-09-29 16:42:26.494533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:48192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.882 [2024-09-29 16:42:26.494558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:34:29.882 [2024-09-29 16:42:26.494594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:48208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.882 [2024-09-29 16:42:26.494619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:34:29.882 [2024-09-29 16:42:26.494656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:48224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.882 [2024-09-29 16:42:26.494690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:34:29.882 [2024-09-29 16:42:26.494734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:48240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.882 [2024-09-29 16:42:26.494759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:34:29.882 [2024-09-29 16:42:26.494802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:47800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.882 [2024-09-29 16:42:26.494828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:34:29.882 [2024-09-29 16:42:26.496842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:47464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:29.882 [2024-09-29 16:42:26.496877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:34:29.882 [2024-09-29 16:42:26.496922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:48256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.882 [2024-09-29 16:42:26.496956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:34:29.882 [2024-09-29 16:42:26.496993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:48272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.883 [2024-09-29 16:42:26.497020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:34:29.883 [2024-09-29 16:42:26.497056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:48288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:29.883 [2024-09-29 16:42:26.497081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:34:29.883 [2024-09-29 16:42:26.497118] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:47832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.883 [2024-09-29 16:42:26.497152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:29.883 [2024-09-29 16:42:26.497188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:47864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.883 [2024-09-29 16:42:26.497213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:29.883 [2024-09-29 16:42:26.497250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:47896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.883 [2024-09-29 16:42:26.497275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:29.883 [2024-09-29 16:42:26.497311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:47928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.883 [2024-09-29 16:42:26.497341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:29.883 [2024-09-29 16:42:26.497383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:47960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.883 [2024-09-29 16:42:26.497408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:29.883 [2024-09-29 16:42:26.497444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:47992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.883 [2024-09-29 16:42:26.497474] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:29.883 [2024-09-29 16:42:26.497514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:47840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.883 [2024-09-29 16:42:26.497539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:29.883 [2024-09-29 16:42:26.497582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:47872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.883 [2024-09-29 16:42:26.497609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:29.883 [2024-09-29 16:42:26.497646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:47904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.883 [2024-09-29 16:42:26.497680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:29.883 [2024-09-29 16:42:26.497731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:47936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.883 [2024-09-29 16:42:26.497757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:29.883 [2024-09-29 16:42:26.497793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:47968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.883 [2024-09-29 16:42:26.497819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:29.883 [2024-09-29 16:42:26.497856] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:47656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.883 [2024-09-29 16:42:26.497881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:29.883 [2024-09-29 16:42:26.497917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:47720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.883 [2024-09-29 16:42:26.497942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:29.883 [2024-09-29 16:42:26.498003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:47592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.883 [2024-09-29 16:42:26.498042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:29.883 [2024-09-29 16:42:26.498087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:46960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.883 [2024-09-29 16:42:26.498112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:29.883 [2024-09-29 16:42:26.498157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:47216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.883 [2024-09-29 16:42:26.498183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:29.883 [2024-09-29 16:42:26.498218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:47488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.883 [2024-09-29 16:42:26.498243] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:29.883 [2024-09-29 16:42:26.498279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:47696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.883 [2024-09-29 16:42:26.498317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:29.883 [2024-09-29 16:42:26.498365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:46912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.883 [2024-09-29 16:42:26.498390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:29.883 [2024-09-29 16:42:26.498425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:46936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.883 [2024-09-29 16:42:26.498455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:29.883 [2024-09-29 16:42:26.498492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:47224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.883 [2024-09-29 16:42:26.498517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:29.883 [2024-09-29 16:42:26.498553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:46736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.883 [2024-09-29 16:42:26.498578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:29.883 [2024-09-29 16:42:26.498613] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:47080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.883 [2024-09-29 16:42:26.498639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:29.883 [2024-09-29 16:42:26.498683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:48008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.883 [2024-09-29 16:42:26.498720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:29.884 [2024-09-29 16:42:26.498755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:47760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.884 [2024-09-29 16:42:26.498780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:29.884 [2024-09-29 16:42:26.498816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:47240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.884 [2024-09-29 16:42:26.498842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:29.884 [2024-09-29 16:42:26.498877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:47504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.884 [2024-09-29 16:42:26.498902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:29.884 [2024-09-29 16:42:26.498938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:48056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.884 [2024-09-29 16:42:26.498980] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:29.884 [2024-09-29 16:42:26.499018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:48088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.884 [2024-09-29 16:42:26.499057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:29.884 [2024-09-29 16:42:26.500495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.884 [2024-09-29 16:42:26.500528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:29.884 [2024-09-29 16:42:26.500571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:47576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.884 [2024-09-29 16:42:26.500597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:29.884 [2024-09-29 16:42:26.500635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:47784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.884 [2024-09-29 16:42:26.500666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:29.884 [2024-09-29 16:42:26.500728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:47712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.884 [2024-09-29 16:42:26.500772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:29.884 [2024-09-29 16:42:26.500817] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:47048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.884 [2024-09-29 16:42:26.500843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:29.884 [2024-09-29 16:42:26.500880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:48144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.884 [2024-09-29 16:42:26.500905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:29.884 [2024-09-29 16:42:26.500941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:48176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.884 [2024-09-29 16:42:26.500991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:29.884 [2024-09-29 16:42:26.501036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:48208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.884 [2024-09-29 16:42:26.501061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:29.884 [2024-09-29 16:42:26.501116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:48240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.884 [2024-09-29 16:42:26.501158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:29.884 [2024-09-29 16:42:26.501776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:48000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.884 [2024-09-29 16:42:26.501809] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:29.884 [2024-09-29 16:42:26.501852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:48032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.884 [2024-09-29 16:42:26.501879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:29.884 [2024-09-29 16:42:26.501916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:48304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.884 [2024-09-29 16:42:26.501940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:29.884 [2024-09-29 16:42:26.501981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:48320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.884 [2024-09-29 16:42:26.502005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:29.884 [2024-09-29 16:42:26.502041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:48336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.884 [2024-09-29 16:42:26.502067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:29.884 [2024-09-29 16:42:26.502103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:48352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.884 [2024-09-29 16:42:26.502128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:29.884 [2024-09-29 16:42:26.502174] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:48368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.884 [2024-09-29 16:42:26.502203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:29.884 [2024-09-29 16:42:26.502239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:48384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.884 [2024-09-29 16:42:26.502265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:29.884 [2024-09-29 16:42:26.502305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:48400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.884 [2024-09-29 16:42:26.502333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:29.884 [2024-09-29 16:42:26.503418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:48416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.884 [2024-09-29 16:42:26.503450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:29.884 [2024-09-29 16:42:26.503494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:48432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.884 [2024-09-29 16:42:26.503521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:29.884 [2024-09-29 16:42:26.503558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:48448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.884 [2024-09-29 16:42:26.503583] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:29.884 [2024-09-29 16:42:26.503619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:48048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.884 [2024-09-29 16:42:26.503644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:29.884 [2024-09-29 16:42:26.503689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:48080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.884 [2024-09-29 16:42:26.503717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:29.884 [2024-09-29 16:42:26.503753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:48112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.884 [2024-09-29 16:42:26.503779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:29.884 [2024-09-29 16:42:26.503814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:48256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.884 [2024-09-29 16:42:26.503841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:29.884 [2024-09-29 16:42:26.503877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:48288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.884 [2024-09-29 16:42:26.503902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:29.885 [2024-09-29 16:42:26.503938] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:47864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.885 [2024-09-29 16:42:26.503963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:29.885 [2024-09-29 16:42:26.504020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:47928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.885 [2024-09-29 16:42:26.504049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:29.885 [2024-09-29 16:42:26.504085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:47992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.885 [2024-09-29 16:42:26.504110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:29.885 [2024-09-29 16:42:26.504149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:47872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.885 [2024-09-29 16:42:26.504176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:29.885 [2024-09-29 16:42:26.504213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:47936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.885 [2024-09-29 16:42:26.504239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:29.885 [2024-09-29 16:42:26.504274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:47656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.885 [2024-09-29 16:42:26.504299] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:29.885 [2024-09-29 16:42:26.504335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:47592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.885 [2024-09-29 16:42:26.504360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:29.885 [2024-09-29 16:42:26.504395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:47216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.885 [2024-09-29 16:42:26.504420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:29.885 [2024-09-29 16:42:26.504455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:47696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.885 [2024-09-29 16:42:26.504495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:29.885 [2024-09-29 16:42:26.504531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:46936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.885 [2024-09-29 16:42:26.504573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:29.885 [2024-09-29 16:42:26.504610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:46736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.885 [2024-09-29 16:42:26.504635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:29.885 [2024-09-29 16:42:26.506749] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:48008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.885 [2024-09-29 16:42:26.506784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:29.885 [2024-09-29 16:42:26.506845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:47240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.885 [2024-09-29 16:42:26.506874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:29.885 [2024-09-29 16:42:26.506913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:48056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.885 [2024-09-29 16:42:26.506943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:29.885 [2024-09-29 16:42:26.506999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:48136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.885 [2024-09-29 16:42:26.507039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:29.885 [2024-09-29 16:42:26.507075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:48168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.885 [2024-09-29 16:42:26.507114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:29.885 [2024-09-29 16:42:26.507163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:48200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.885 [2024-09-29 16:42:26.507196] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:29.885 [2024-09-29 16:42:26.507234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:48232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.885 [2024-09-29 16:42:26.507259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:29.885 [2024-09-29 16:42:26.507295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:48472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.885 [2024-09-29 16:42:26.507321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:29.885 [2024-09-29 16:42:26.507357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:48488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.885 [2024-09-29 16:42:26.507381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:29.885 [2024-09-29 16:42:26.507417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:48248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.885 [2024-09-29 16:42:26.507442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:29.885 [2024-09-29 16:42:26.507487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:48280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.885 [2024-09-29 16:42:26.507513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:29.885 [2024-09-29 16:42:26.507549] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:47576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.885 [2024-09-29 16:42:26.507574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:29.885 [2024-09-29 16:42:26.507609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:47712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.885 [2024-09-29 16:42:26.507634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:29.885 [2024-09-29 16:42:26.507670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:48144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.885 [2024-09-29 16:42:26.507704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:29.885 [2024-09-29 16:42:26.507741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:48208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.885 [2024-09-29 16:42:26.507770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:29.885 [2024-09-29 16:42:26.507808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:47824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.885 [2024-09-29 16:42:26.507846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:29.885 [2024-09-29 16:42:26.507885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:47888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.885 [2024-09-29 16:42:26.507911] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:29.885 [2024-09-29 16:42:26.507947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:47952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.885 [2024-09-29 16:42:26.507972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:29.885 [2024-09-29 16:42:26.508008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:48032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.885 [2024-09-29 16:42:26.508049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:29.885 [2024-09-29 16:42:26.508086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:48320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.886 [2024-09-29 16:42:26.508110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:29.886 [2024-09-29 16:42:26.508146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:48352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.886 [2024-09-29 16:42:26.508170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:29.886 [2024-09-29 16:42:26.508205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:48384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.886 [2024-09-29 16:42:26.508246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:29.886 [2024-09-29 16:42:26.508281] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:47112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.886 [2024-09-29 16:42:26.508305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:29.886 [2024-09-29 16:42:26.508354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:48024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.886 [2024-09-29 16:42:26.508379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:29.886 [2024-09-29 16:42:26.508439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:48072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.886 [2024-09-29 16:42:26.508465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:29.886 [2024-09-29 16:42:26.508511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:48504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.886 [2024-09-29 16:42:26.508536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:29.886 [2024-09-29 16:42:26.508573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:48520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.886 [2024-09-29 16:42:26.508598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:29.886 [2024-09-29 16:42:26.508639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:48536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.886 [2024-09-29 16:42:26.508664] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:29.886 [2024-09-29 16:42:26.508722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.886 [2024-09-29 16:42:26.508747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:29.886 [2024-09-29 16:42:26.508784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:48432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.886 [2024-09-29 16:42:26.508809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:29.886 [2024-09-29 16:42:26.508846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:48048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.886 [2024-09-29 16:42:26.508870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:29.886 [2024-09-29 16:42:26.508906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:48112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.886 [2024-09-29 16:42:26.508932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:29.886 [2024-09-29 16:42:26.508983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:48288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.886 [2024-09-29 16:42:26.509008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:29.886 [2024-09-29 16:42:26.509042] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:47928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.886 [2024-09-29 16:42:26.509083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:29.886 [2024-09-29 16:42:26.509120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:47872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.886 [2024-09-29 16:42:26.509145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:29.886 [2024-09-29 16:42:26.509181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:47656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.886 [2024-09-29 16:42:26.509206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:29.886 [2024-09-29 16:42:26.509242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:47216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.886 [2024-09-29 16:42:26.509267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:29.886 [2024-09-29 16:42:26.509304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:46936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.886 [2024-09-29 16:42:26.509330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:29.886 [2024-09-29 16:42:26.513961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:48120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.886 [2024-09-29 16:42:26.514001] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:29.886 [2024-09-29 16:42:26.514055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:48160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.886 [2024-09-29 16:42:26.514083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:29.886 [2024-09-29 16:42:26.514120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:48224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.886 [2024-09-29 16:42:26.514166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:29.886 [2024-09-29 16:42:26.514217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:48576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.886 [2024-09-29 16:42:26.514241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:29.886 [2024-09-29 16:42:26.514292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:48592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.886 [2024-09-29 16:42:26.514317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:29.886 [2024-09-29 16:42:26.514353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:48608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.886 [2024-09-29 16:42:26.514382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:29.886 [2024-09-29 16:42:26.514429] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:48624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.886 [2024-09-29 16:42:26.514456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:29.886 [2024-09-29 16:42:26.514493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:48640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.886 [2024-09-29 16:42:26.514518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:29.886 [2024-09-29 16:42:26.514555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:48656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.886 [2024-09-29 16:42:26.514579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:29.886 [2024-09-29 16:42:26.514615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:48672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.886 [2024-09-29 16:42:26.514639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:29.886 [2024-09-29 16:42:26.514684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:48688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.886 [2024-09-29 16:42:26.514726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:29.886 [2024-09-29 16:42:26.514778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:48296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.886 [2024-09-29 16:42:26.514803] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:29.886 [2024-09-29 16:42:26.514840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:48328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.887 [2024-09-29 16:42:26.514865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:29.887 [2024-09-29 16:42:26.514902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:48360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.887 [2024-09-29 16:42:26.514931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:29.887 [2024-09-29 16:42:26.514969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:48392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.887 [2024-09-29 16:42:26.514994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:29.887 [2024-09-29 16:42:26.515029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:48424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.887 [2024-09-29 16:42:26.515054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:29.887 [2024-09-29 16:42:26.515090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:48456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.887 [2024-09-29 16:42:26.515114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:29.887 [2024-09-29 16:42:26.515150] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:47240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.887 [2024-09-29 16:42:26.515174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:29.887 [2024-09-29 16:42:26.515209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:48136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.887 [2024-09-29 16:42:26.515234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:29.887 [2024-09-29 16:42:26.515270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:48200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.887 [2024-09-29 16:42:26.515312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:29.887 [2024-09-29 16:42:26.515348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:48472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.887 [2024-09-29 16:42:26.515387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:29.887 [2024-09-29 16:42:26.515422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:48248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.887 [2024-09-29 16:42:26.515445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:29.887 [2024-09-29 16:42:26.515478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:47576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.887 [2024-09-29 16:42:26.515501] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:29.887 [2024-09-29 16:42:26.515535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:48144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.887 [2024-09-29 16:42:26.515558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:29.887 [2024-09-29 16:42:26.515592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:47824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.887 [2024-09-29 16:42:26.515615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:29.887 [2024-09-29 16:42:26.515648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:47952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.887 [2024-09-29 16:42:26.515701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:29.887 [2024-09-29 16:42:26.515741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.887 [2024-09-29 16:42:26.515766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:29.887 [2024-09-29 16:42:26.515802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:48384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.887 [2024-09-29 16:42:26.515827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:29.887 [2024-09-29 16:42:26.515862] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:48024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.887 [2024-09-29 16:42:26.515886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:29.887 [2024-09-29 16:42:26.515923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:48504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.887 [2024-09-29 16:42:26.515948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:29.887 [2024-09-29 16:42:26.515999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:48536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.887 [2024-09-29 16:42:26.516038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:29.887 [2024-09-29 16:42:26.516074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:48432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.887 [2024-09-29 16:42:26.516097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:29.887 [2024-09-29 16:42:26.516131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:48112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.887 [2024-09-29 16:42:26.516154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:29.887 [2024-09-29 16:42:26.516186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:47928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.887 [2024-09-29 16:42:26.516210] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:29.887 [2024-09-29 16:42:26.516243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:47656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.887 [2024-09-29 16:42:26.516265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:29.887 [2024-09-29 16:42:26.516298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:46936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.887 [2024-09-29 16:42:26.516321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:29.887 [2024-09-29 16:42:26.516355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:47904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.887 [2024-09-29 16:42:26.516378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:29.887 [2024-09-29 16:42:26.516411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:48704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.887 [2024-09-29 16:42:26.516434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:29.887 [2024-09-29 16:42:26.516472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:48464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.887 [2024-09-29 16:42:26.516495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:29.887 [2024-09-29 16:42:26.516528] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:48496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.887 [2024-09-29 16:42:26.516552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:29.887 [2024-09-29 16:42:26.516584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:48240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.887 [2024-09-29 16:42:26.516607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:29.887 [2024-09-29 16:42:26.516640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:48720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.887 [2024-09-29 16:42:26.516700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:29.887 [2024-09-29 16:42:26.516755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:48736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.887 [2024-09-29 16:42:26.516780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:29.888 [2024-09-29 16:42:26.516815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:48752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.888 [2024-09-29 16:42:26.516840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:29.888 [2024-09-29 16:42:26.516876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:48768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.888 [2024-09-29 16:42:26.516900] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:29.888 [2024-09-29 16:42:26.516936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:48784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.888 [2024-09-29 16:42:26.516975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:29.888 [2024-09-29 16:42:26.517010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:48800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.888 [2024-09-29 16:42:26.517048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:29.888 [2024-09-29 16:42:26.517082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:48816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.888 [2024-09-29 16:42:26.517104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:29.888 [2024-09-29 16:42:26.517137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:48336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.888 [2024-09-29 16:42:26.517160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:29.888 [2024-09-29 16:42:26.517193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:48400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.888 [2024-09-29 16:42:26.517216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:29.888 [2024-09-29 16:42:26.518193] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:48832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.888 [2024-09-29 16:42:26.518226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:29.888 [2024-09-29 16:42:26.518283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:48848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.888 [2024-09-29 16:42:26.518308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:29.888 [2024-09-29 16:42:26.518359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:48864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.888 [2024-09-29 16:42:26.518384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:29.888 [2024-09-29 16:42:26.518418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:48880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.888 [2024-09-29 16:42:26.518441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:29.888 [2024-09-29 16:42:26.518476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:48896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.888 [2024-09-29 16:42:26.518499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:29.888 [2024-09-29 16:42:26.521721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:48512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.888 [2024-09-29 16:42:26.521760] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:29.888 [2024-09-29 16:42:26.521844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:48544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.888 [2024-09-29 16:42:26.521874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:29.888 [2024-09-29 16:42:26.521914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:48416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.888 [2024-09-29 16:42:26.521941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:29.888 [2024-09-29 16:42:26.521993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:48256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.888 [2024-09-29 16:42:26.522018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:29.888 [2024-09-29 16:42:26.522053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:47696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.888 [2024-09-29 16:42:26.522076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:29.888 [2024-09-29 16:42:26.522111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:48920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.888 [2024-09-29 16:42:26.522134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:29.888 [2024-09-29 16:42:26.522170] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:48936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.888 [2024-09-29 16:42:26.522208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:29.888 [2024-09-29 16:42:26.522250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:48952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.888 [2024-09-29 16:42:26.522275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:29.888 [2024-09-29 16:42:26.522311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:48968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.888 [2024-09-29 16:42:26.522334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:29.888 [2024-09-29 16:42:26.522368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:48584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.888 [2024-09-29 16:42:26.522392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:29.888 [2024-09-29 16:42:26.522428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:48616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.888 [2024-09-29 16:42:26.522452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:29.888 [2024-09-29 16:42:26.522503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:48648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.888 [2024-09-29 16:42:26.522527] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:29.888 [2024-09-29 16:42:26.522561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:48680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.888 [2024-09-29 16:42:26.522584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:29.888 [2024-09-29 16:42:26.522618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:48160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.888 [2024-09-29 16:42:26.522641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:29.888 [2024-09-29 16:42:26.522698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:48576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.888 [2024-09-29 16:42:26.522724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:29.888 [2024-09-29 16:42:26.522760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:48608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.888 [2024-09-29 16:42:26.522785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:29.888 [2024-09-29 16:42:26.522820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:48640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.888 [2024-09-29 16:42:26.522844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:29.888 [2024-09-29 16:42:26.522879] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:48672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.888 [2024-09-29 16:42:26.522902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:29.888 [2024-09-29 16:42:26.522936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:48296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.889 [2024-09-29 16:42:26.522961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:29.889 [2024-09-29 16:42:26.523011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:48360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.889 [2024-09-29 16:42:26.523040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:29.889 [2024-09-29 16:42:26.523091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:48424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.889 [2024-09-29 16:42:26.523117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:29.889 [2024-09-29 16:42:26.523155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:47240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.889 [2024-09-29 16:42:26.523180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:29.889 [2024-09-29 16:42:26.523215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:48200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.889 [2024-09-29 16:42:26.523240] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:29.889 [2024-09-29 16:42:26.523276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:48248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.889 [2024-09-29 16:42:26.523301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:29.889 [2024-09-29 16:42:26.523338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:48144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.889 [2024-09-29 16:42:26.523363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:29.889 [2024-09-29 16:42:26.523410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:47952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.889 [2024-09-29 16:42:26.523437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:29.889 [2024-09-29 16:42:26.523476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:48384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.889 [2024-09-29 16:42:26.523501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:29.889 [2024-09-29 16:42:26.523537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:48504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.889 [2024-09-29 16:42:26.523571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:29.889 [2024-09-29 16:42:26.523609] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:48432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.889 [2024-09-29 16:42:26.523634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:29.889 [2024-09-29 16:42:26.523670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:47928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.889 [2024-09-29 16:42:26.523703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:29.889 [2024-09-29 16:42:26.523740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:46936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.889 [2024-09-29 16:42:26.523765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:29.889 [2024-09-29 16:42:26.523802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:48704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.889 [2024-09-29 16:42:26.523831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:29.889 [2024-09-29 16:42:26.523883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:48496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.889 [2024-09-29 16:42:26.523908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:29.889 [2024-09-29 16:42:26.523943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:48720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.889 [2024-09-29 16:42:26.523982] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:29.889 [2024-09-29 16:42:26.524019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:48752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.889 [2024-09-29 16:42:26.524042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:29.889 [2024-09-29 16:42:26.524076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:48784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.889 [2024-09-29 16:42:26.524099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:29.889 [2024-09-29 16:42:26.524132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.889 [2024-09-29 16:42:26.524156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:29.889 [2024-09-29 16:42:26.524188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:48400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.889 [2024-09-29 16:42:26.524212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:29.889 [2024-09-29 16:42:26.524245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:48056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.889 [2024-09-29 16:42:26.524268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:29.889 [2024-09-29 16:42:26.524302] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:48848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.889 [2024-09-29 16:42:26.524326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:29.889 [2024-09-29 16:42:26.524360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:48880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.889 [2024-09-29 16:42:26.524383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:29.889 [2024-09-29 16:42:26.524418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:48208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.889 [2024-09-29 16:42:26.524440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:29.889 [2024-09-29 16:42:26.524473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:48520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.889 [2024-09-29 16:42:26.524496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:29.889 [2024-09-29 16:42:26.524531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:48288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.890 [2024-09-29 16:42:26.524554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:29.890 [2024-09-29 16:42:26.525600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:48976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.890 [2024-09-29 16:42:26.525632] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:29.890 [2024-09-29 16:42:26.525696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:48992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.890 [2024-09-29 16:42:26.525722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:29.890 [2024-09-29 16:42:26.525773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:49008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.890 [2024-09-29 16:42:26.525799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:29.890 [2024-09-29 16:42:26.525834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:49024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.890 [2024-09-29 16:42:26.525858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:29.890 [2024-09-29 16:42:26.525893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:49040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.890 [2024-09-29 16:42:26.525917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:29.890 [2024-09-29 16:42:26.525952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:49056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.890 [2024-09-29 16:42:26.525975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:29.890 [2024-09-29 16:42:26.526010] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:49072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.890 [2024-09-29 16:42:26.526063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:29.890 [2024-09-29 16:42:26.526102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:49088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.890 [2024-09-29 16:42:26.526125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:29.890 [2024-09-29 16:42:26.526159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:49104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.890 [2024-09-29 16:42:26.526182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:29.890 [2024-09-29 16:42:26.526215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:49120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.890 [2024-09-29 16:42:26.526238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:29.890 [2024-09-29 16:42:26.526271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:49136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.890 [2024-09-29 16:42:26.526294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:29.890 [2024-09-29 16:42:26.526328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:48712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.890 [2024-09-29 16:42:26.526350] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:29.890 [2024-09-29 16:42:26.526389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:48744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.890 [2024-09-29 16:42:26.526413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:29.890 [2024-09-29 16:42:26.526447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:48776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.890 [2024-09-29 16:42:26.526470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:29.890 [2024-09-29 16:42:26.526503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:48808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.890 [2024-09-29 16:42:26.526526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:29.890 [2024-09-29 16:42:26.526560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:48840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.890 [2024-09-29 16:42:26.526583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:29.890 [2024-09-29 16:42:26.526633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:48872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.890 [2024-09-29 16:42:26.526658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:29.890 [2024-09-29 16:42:26.526721] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:48904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:29.890 [2024-09-29 16:42:26.526750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:29.890 [2024-09-29 16:42:26.526796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:49160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:29.890 [2024-09-29 16:42:26.526824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:29.890 5863.09 IOPS, 22.90 MiB/s 5878.06 IOPS, 22.96 MiB/s 5886.88 IOPS, 23.00 MiB/s Received shutdown signal, test time was about 34.472073 seconds 00:34:29.890 00:34:29.890 Latency(us) 00:34:29.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:29.890 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:34:29.890 Verification LBA range: start 0x0 length 0x4000 00:34:29.890 Nvme0n1 : 34.47 5892.03 23.02 0.00 0.00 21688.02 236.66 4101097.24 00:34:29.890 =================================================================================================================== 00:34:29.890 Total : 5892.03 23.02 0.00 0.00 21688.02 236.66 4101097.24 00:34:29.890 [2024-09-29 16:42:29.207090] app.c:1032:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times 00:34:29.890 16:42:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:30.153 16:42:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:34:30.153 16:42:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:30.153 16:42:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:34:30.153 16:42:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # nvmfcleanup 00:34:30.153 16:42:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:34:30.153 16:42:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:30.153 16:42:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:34:30.153 16:42:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:30.153 16:42:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:30.153 rmmod nvme_tcp 00:34:30.153 rmmod nvme_fabrics 00:34:30.153 rmmod nvme_keyring 00:34:30.153 16:42:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:30.153 16:42:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:34:30.153 16:42:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:34:30.153 16:42:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@513 -- # '[' -n 3291281 ']' 00:34:30.153 16:42:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # killprocess 3291281 00:34:30.153 16:42:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 3291281 ']' 00:34:30.153 16:42:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 3291281 00:34:30.153 16:42:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:34:30.153 16:42:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:30.153 16:42:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3291281 00:34:30.153 16:42:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:30.153 16:42:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:30.153 16:42:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3291281' 00:34:30.153 killing process with pid 3291281 00:34:30.153 16:42:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 3291281 00:34:30.153 16:42:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 3291281 00:34:31.595 16:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:34:31.595 16:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:34:31.595 16:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:34:31.595 16:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:34:31.595 16:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-save 00:34:31.595 16:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:34:31.595 16:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-restore 00:34:31.595 16:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:31.595 16:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:31.595 16:42:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:31.595 16:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:31.595 16:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:34.125 00:34:34.125 real 0m46.914s 00:34:34.125 user 2m19.834s 00:34:34.125 sys 0m10.832s 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:34.125 ************************************ 00:34:34.125 END TEST nvmf_host_multipath_status 00:34:34.125 ************************************ 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.125 ************************************ 00:34:34.125 START TEST nvmf_discovery_remove_ifc 00:34:34.125 ************************************ 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:34:34.125 * Looking for test storage... 
00:34:34.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lcov --version 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # 
export 'LCOV_OPTS= 00:34:34.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.125 --rc genhtml_branch_coverage=1 00:34:34.125 --rc genhtml_function_coverage=1 00:34:34.125 --rc genhtml_legend=1 00:34:34.125 --rc geninfo_all_blocks=1 00:34:34.125 --rc geninfo_unexecuted_blocks=1 00:34:34.125 00:34:34.125 ' 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:34.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.125 --rc genhtml_branch_coverage=1 00:34:34.125 --rc genhtml_function_coverage=1 00:34:34.125 --rc genhtml_legend=1 00:34:34.125 --rc geninfo_all_blocks=1 00:34:34.125 --rc geninfo_unexecuted_blocks=1 00:34:34.125 00:34:34.125 ' 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:34.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.125 --rc genhtml_branch_coverage=1 00:34:34.125 --rc genhtml_function_coverage=1 00:34:34.125 --rc genhtml_legend=1 00:34:34.125 --rc geninfo_all_blocks=1 00:34:34.125 --rc geninfo_unexecuted_blocks=1 00:34:34.125 00:34:34.125 ' 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:34.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.125 --rc genhtml_branch_coverage=1 00:34:34.125 --rc genhtml_function_coverage=1 00:34:34.125 --rc genhtml_legend=1 00:34:34.125 --rc geninfo_all_blocks=1 00:34:34.125 --rc geninfo_unexecuted_blocks=1 00:34:34.125 00:34:34.125 ' 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:34.125 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:34.126 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.126 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.126 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.126 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:34:34.126 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.126 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:34:34.126 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:34.126 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:34.126 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:34.126 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:34:34.126 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:34.126 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:34.126 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:34.126 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:34.126 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:34.126 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:34.126 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:34:34.126 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:34:34.126 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:34:34.126 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:34:34.126 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:34:34.126 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:34:34.126 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:34:34.126 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:34:34.126 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:34.126 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # prepare_net_devs 00:34:34.126 
16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@434 -- # local -g is_hw=no 00:34:34.126 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # remove_spdk_ns 00:34:34.126 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:34.126 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:34.126 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:34.126 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:34:34.126 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:34:34.126 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:34:34.126 16:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:36.028 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:36.028 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:34:36.028 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:36.028 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:36.028 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:36.028 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:36.028 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:36.028 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:34:36.028 16:42:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:36.028 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:36.029 16:42:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:36.029 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:34:36.029 16:42:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:36.029 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:36.029 16:42:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:36.029 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:36.029 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # is_hw=yes 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:34:36.029 16:42:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:36.029 16:42:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:36.029 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:36.029 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:34:36.029 00:34:36.029 --- 10.0.0.2 ping statistics --- 00:34:36.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:36.029 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:36.029 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:36.029 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:34:36.029 00:34:36.029 --- 10.0.0.1 ping statistics --- 00:34:36.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:36.029 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # return 0 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:34:36.029 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:34:36.030 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:34:36.030 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:36.030 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:36.030 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # nvmfpid=3298941 00:34:36.030 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:36.030 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # waitforlisten 3298941 00:34:36.030 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 3298941 ']' 00:34:36.030 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:36.030 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:36.030 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:36.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:36.030 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:36.030 16:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:36.030 [2024-09-29 16:42:36.568912] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:34:36.030 [2024-09-29 16:42:36.569118] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:36.288 [2024-09-29 16:42:36.707528] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:36.546 [2024-09-29 16:42:36.958547] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:36.546 [2024-09-29 16:42:36.958659] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:36.546 [2024-09-29 16:42:36.958709] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:36.546 [2024-09-29 16:42:36.958756] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:36.546 [2024-09-29 16:42:36.958789] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:36.546 [2024-09-29 16:42:36.958866] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:34:37.112 16:42:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:37.112 16:42:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:34:37.112 16:42:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:34:37.112 16:42:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:37.112 16:42:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:37.112 16:42:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:37.112 16:42:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:34:37.113 16:42:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.113 16:42:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:37.113 [2024-09-29 16:42:37.575913] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:37.113 [2024-09-29 16:42:37.584162] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:37.113 null0 00:34:37.113 [2024-09-29 16:42:37.616066] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:34:37.113 16:42:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.113 16:42:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3299124 00:34:37.113 16:42:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:34:37.113 16:42:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3299124 /tmp/host.sock 00:34:37.113 16:42:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 3299124 ']' 00:34:37.113 16:42:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:34:37.113 16:42:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:37.113 16:42:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:37.113 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:37.113 16:42:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:37.113 16:42:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:37.371 [2024-09-29 16:42:37.729435] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:34:37.371 [2024-09-29 16:42:37.729568] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3299124 ] 00:34:37.371 [2024-09-29 16:42:37.859765] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:37.629 [2024-09-29 16:42:38.103072] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:34:38.194 16:42:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:38.194 16:42:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:34:38.195 16:42:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:38.195 16:42:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:34:38.195 16:42:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.195 16:42:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:38.195 16:42:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.195 16:42:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:34:38.195 16:42:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.195 16:42:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:38.760 16:42:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.760 16:42:39 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:34:38.760 16:42:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.760 16:42:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:39.694 [2024-09-29 16:42:40.168578] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:39.694 [2024-09-29 16:42:40.168651] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:39.695 [2024-09-29 16:42:40.168726] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:39.953 [2024-09-29 16:42:40.296241] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:39.953 [2024-09-29 16:42:40.400487] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:39.953 [2024-09-29 16:42:40.400581] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:39.953 [2024-09-29 16:42:40.400690] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:39.953 [2024-09-29 16:42:40.400728] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:39.953 [2024-09-29 16:42:40.400779] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:39.953 16:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.953 16:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:34:39.953 16:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:39.953 16:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:39.953 16:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:39.953 16:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.953 16:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:39.953 16:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:39.953 16:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:39.953 [2024-09-29 16:42:40.416248] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x6150001f2c80 was disconnected and freed. delete nvme_qpair. 
00:34:39.953 16:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.953 16:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:34:39.953 16:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:34:39.953 16:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:34:39.953 16:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:34:39.953 16:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:39.953 16:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:39.953 16:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:39.953 16:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.953 16:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:39.953 16:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:39.953 16:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:40.211 16:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.211 16:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:40.211 16:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:41.148 16:42:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:41.148 16:42:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:41.148 16:42:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:41.148 16:42:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.148 16:42:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:41.148 16:42:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:41.148 16:42:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:41.148 16:42:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.148 16:42:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:41.148 16:42:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:42.080 16:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:42.080 16:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:42.080 16:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:42.080 16:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.080 16:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:42.080 16:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:42.080 16:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:34:42.080 16:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.080 16:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:42.080 16:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:43.470 16:42:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:43.470 16:42:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:43.470 16:42:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:43.470 16:42:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.470 16:42:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:43.470 16:42:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:43.470 16:42:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:43.470 16:42:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.470 16:42:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:43.470 16:42:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:44.403 16:42:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:44.403 16:42:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:44.403 16:42:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:44.403 16:42:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.403 16:42:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:44.403 16:42:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:44.403 16:42:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:44.403 16:42:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.403 16:42:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:44.403 16:42:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:45.338 16:42:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:45.338 16:42:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:45.338 16:42:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:45.338 16:42:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:45.338 16:42:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:45.338 16:42:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:45.338 16:42:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:45.338 16:42:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:45.338 16:42:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:45.338 16:42:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:34:45.338 [2024-09-29 16:42:45.841797] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:34:45.338 [2024-09-29 16:42:45.841882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:45.338 [2024-09-29 16:42:45.841912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:45.338 [2024-09-29 16:42:45.841940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:45.338 [2024-09-29 16:42:45.841961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:45.338 [2024-09-29 16:42:45.841981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:45.338 [2024-09-29 16:42:45.842000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:45.338 [2024-09-29 16:42:45.842039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:45.338 [2024-09-29 16:42:45.842062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:45.338 [2024-09-29 16:42:45.842086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:45.338 [2024-09-29 16:42:45.842115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:45.338 [2024-09-29 16:42:45.842138] 
nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2780 is same with the state(6) to be set 00:34:45.338 [2024-09-29 16:42:45.851809] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor 00:34:45.338 [2024-09-29 16:42:45.861861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:46.273 16:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:46.273 16:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:46.273 16:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:46.273 16:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:46.273 16:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:46.273 16:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:46.273 16:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:46.531 [2024-09-29 16:42:46.917716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:34:46.531 [2024-09-29 16:42:46.917797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:34:46.531 [2024-09-29 16:42:46.917833] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2780 is same with the state(6) to be set 00:34:46.531 [2024-09-29 16:42:46.917894] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor 00:34:46.531 [2024-09-29 16:42:46.918633] 
bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:34:46.531 [2024-09-29 16:42:46.918744] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:46.531 [2024-09-29 16:42:46.918783] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:46.531 [2024-09-29 16:42:46.918808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:46.531 [2024-09-29 16:42:46.918860] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:46.531 [2024-09-29 16:42:46.918888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:46.531 16:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:46.531 16:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:46.531 16:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:47.466 [2024-09-29 16:42:47.921406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:47.466 [2024-09-29 16:42:47.921448] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:47.466 [2024-09-29 16:42:47.921472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:47.466 [2024-09-29 16:42:47.921492] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:34:47.466 [2024-09-29 16:42:47.921528] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:47.466 [2024-09-29 16:42:47.921595] bdev_nvme.c:6913:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:34:47.466 [2024-09-29 16:42:47.921657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:47.466 [2024-09-29 16:42:47.921727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.466 [2024-09-29 16:42:47.921760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:47.466 [2024-09-29 16:42:47.921780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.466 [2024-09-29 16:42:47.921799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:47.466 [2024-09-29 16:42:47.921818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.466 [2024-09-29 16:42:47.921837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:47.466 [2024-09-29 16:42:47.921856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.466 [2024-09-29 16:42:47.921875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:47.466 [2024-09-29 16:42:47.921894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:47.466 [2024-09-29 16:42:47.921911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:34:47.466 [2024-09-29 16:42:47.921990] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:34:47.466 [2024-09-29 16:42:47.922993] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:34:47.466 [2024-09-29 16:42:47.923036] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:34:47.466 16:42:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:47.466 16:42:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:47.466 16:42:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:47.466 16:42:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:47.466 16:42:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:47.466 16:42:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:47.466 16:42:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:47.466 16:42:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:47.466 16:42:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:34:47.466 16:42:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:47.466 16:42:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:47.466 16:42:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:34:47.466 16:42:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:47.466 16:42:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:47.466 16:42:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:47.466 16:42:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:47.466 16:42:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:47.466 16:42:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:47.466 16:42:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:47.725 16:42:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:47.725 16:42:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:47.725 16:42:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:48.660 16:42:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:48.660 16:42:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:48.660 16:42:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:48.660 16:42:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.660 16:42:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:48.660 16:42:49 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:48.660 16:42:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:48.660 16:42:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.660 16:42:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:48.660 16:42:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:49.594 [2024-09-29 16:42:49.977873] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:49.594 [2024-09-29 16:42:49.977919] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:49.594 [2024-09-29 16:42:49.977980] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:49.594 [2024-09-29 16:42:50.064289] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:34:49.594 16:42:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:49.594 16:42:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:49.594 16:42:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:49.594 16:42:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.594 16:42:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:49.594 16:42:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:49.594 16:42:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:34:49.594 16:42:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.594 [2024-09-29 16:42:50.128302] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:49.594 [2024-09-29 16:42:50.128379] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:49.594 [2024-09-29 16:42:50.128461] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:49.594 [2024-09-29 16:42:50.128499] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:34:49.594 [2024-09-29 16:42:50.128524] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:49.594 [2024-09-29 16:42:50.134825] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x6150001f3900 was disconnected and freed. delete nvme_qpair. 00:34:49.594 16:42:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:49.594 16:42:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:50.969 16:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:50.969 16:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:50.969 16:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:50.969 16:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.969 16:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:50.969 16:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:50.969 16:42:51 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:50.969 16:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.969 16:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:34:50.969 16:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:34:50.969 16:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3299124 00:34:50.969 16:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 3299124 ']' 00:34:50.969 16:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 3299124 00:34:50.969 16:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:34:50.969 16:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:50.969 16:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3299124 00:34:50.969 16:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:50.969 16:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:50.969 16:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3299124' 00:34:50.969 killing process with pid 3299124 00:34:50.969 16:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 3299124 00:34:50.969 16:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 3299124 00:34:51.903 16:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:34:51.903 16:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # nvmfcleanup 00:34:51.903 16:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:34:51.903 16:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:51.903 16:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:34:51.903 16:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:51.903 16:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:51.903 rmmod nvme_tcp 00:34:51.903 rmmod nvme_fabrics 00:34:51.903 rmmod nvme_keyring 00:34:51.903 16:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:51.903 16:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:34:51.903 16:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:34:51.903 16:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@513 -- # '[' -n 3298941 ']' 00:34:51.903 16:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # killprocess 3298941 00:34:51.903 16:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 3298941 ']' 00:34:51.903 16:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 3298941 00:34:51.903 16:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:34:51.903 16:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:51.903 16:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3298941 00:34:51.903 16:42:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:51.903 16:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:51.903 16:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3298941' 00:34:51.903 killing process with pid 3298941 00:34:51.903 16:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 3298941 00:34:51.903 16:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 3298941 00:34:53.275 16:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:34:53.275 16:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:34:53.275 16:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:34:53.275 16:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:34:53.275 16:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-save 00:34:53.275 16:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:34:53.275 16:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-restore 00:34:53.275 16:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:53.275 16:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:53.275 16:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:53.275 16:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:53.275 16:42:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:55.810 16:42:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:55.810 00:34:55.810 real 0m21.558s 00:34:55.810 user 0m31.658s 00:34:55.810 sys 0m3.349s 00:34:55.810 16:42:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:55.810 16:42:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:55.810 ************************************ 00:34:55.810 END TEST nvmf_discovery_remove_ifc 00:34:55.810 ************************************ 00:34:55.810 16:42:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:34:55.810 16:42:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:55.810 16:42:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:55.810 16:42:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.810 ************************************ 00:34:55.810 START TEST nvmf_identify_kernel_target 00:34:55.810 ************************************ 00:34:55.810 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:34:55.810 * Looking for test storage... 
00:34:55.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:55.810 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:55.810 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lcov --version 00:34:55.810 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:55.810 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:55.810 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:55.810 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:55.810 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:55.810 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:55.810 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:55.810 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:55.810 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:55.810 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:55.810 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:55.810 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:55.810 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:55.810 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:34:55.810 16:42:55 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:34:55.810 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:55.810 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:55.810 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:34:55.810 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:34:55.810 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:55.810 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:34:55.810 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:55.810 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:34:55.810 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:34:55.810 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:55.810 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:34:55.810 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:55.810 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:55.810 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:55.810 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:34:55.810 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:55.810 16:42:55 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:55.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:55.810 --rc genhtml_branch_coverage=1 00:34:55.810 --rc genhtml_function_coverage=1 00:34:55.810 --rc genhtml_legend=1 00:34:55.810 --rc geninfo_all_blocks=1 00:34:55.810 --rc geninfo_unexecuted_blocks=1 00:34:55.811 00:34:55.811 ' 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:55.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:55.811 --rc genhtml_branch_coverage=1 00:34:55.811 --rc genhtml_function_coverage=1 00:34:55.811 --rc genhtml_legend=1 00:34:55.811 --rc geninfo_all_blocks=1 00:34:55.811 --rc geninfo_unexecuted_blocks=1 00:34:55.811 00:34:55.811 ' 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:55.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:55.811 --rc genhtml_branch_coverage=1 00:34:55.811 --rc genhtml_function_coverage=1 00:34:55.811 --rc genhtml_legend=1 00:34:55.811 --rc geninfo_all_blocks=1 00:34:55.811 --rc geninfo_unexecuted_blocks=1 00:34:55.811 00:34:55.811 ' 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:55.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:55.811 --rc genhtml_branch_coverage=1 00:34:55.811 --rc genhtml_function_coverage=1 00:34:55.811 --rc genhtml_legend=1 00:34:55.811 --rc geninfo_all_blocks=1 00:34:55.811 --rc geninfo_unexecuted_blocks=1 00:34:55.811 00:34:55.811 ' 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:55.811 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:34:55.811 16:42:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:57.713 16:42:57 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:57.713 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:57.713 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:57.713 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:57.713 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:34:57.713 16:42:57 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # is_hw=yes 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:57.713 16:42:57 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:57.713 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:57.714 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:57.714 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:57.714 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:57.714 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:57.714 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:57.714 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:57.714 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:57.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:57.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:34:57.714 00:34:57.714 --- 10.0.0.2 ping statistics --- 00:34:57.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:57.714 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:34:57.714 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:57.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:57.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:34:57.714 00:34:57.714 --- 10.0.0.1 ping statistics --- 00:34:57.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:57.714 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:34:57.714 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:57.714 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # return 0 00:34:57.714 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:34:57.714 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:57.714 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:34:57.714 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:34:57.714 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:57.714 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:34:57.714 16:42:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:34:57.714 16:42:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:34:57.714 
16:42:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:34:57.714 16:42:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@765 -- # local ip 00:34:57.714 16:42:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # ip_candidates=() 00:34:57.714 16:42:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # local -A ip_candidates 00:34:57.714 16:42:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.714 16:42:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.714 16:42:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:34:57.714 16:42:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:57.714 16:42:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:34:57.714 16:42:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:34:57.714 16:42:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:34:57.714 16:42:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:34:57.714 16:42:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:57.714 16:42:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:57.714 16:42:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:34:57.714 16:42:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:57.714 16:42:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:57.714 16:42:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:57.714 16:42:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # local block nvme 00:34:57.714 16:42:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # [[ ! -e /sys/module/nvmet ]] 00:34:57.714 16:42:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@666 -- # modprobe nvmet 00:34:57.714 16:42:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:57.714 16:42:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:58.651 Waiting for block devices as requested 00:34:58.651 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:34:58.914 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:58.914 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:59.171 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:59.171 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:59.171 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:59.171 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:59.429 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:59.429 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:59.429 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:59.429 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:59.688 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:59.688 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:59.688 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:59.688 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 
00:34:59.947 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:59.947 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:59.947 16:43:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:34:59.947 16:43:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:59.947 16:43:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:34:59.947 16:43:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:34:59.947 16:43:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:59.947 16:43:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:34:59.947 16:43:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:34:59.947 16:43:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:59.947 16:43:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:00.206 No valid GPT data, bailing 00:35:00.206 16:43:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:00.206 16:43:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:35:00.206 16:43:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:35:00.206 16:43:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:35:00.206 16:43:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # [[ -b /dev/nvme0n1 ]] 00:35:00.206 16:43:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:00.206 16:43:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:00.206 16:43:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:00.206 16:43:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:00.206 16:43:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo 1 00:35:00.206 16:43:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@692 -- # echo /dev/nvme0n1 00:35:00.206 16:43:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:35:00.206 16:43:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:35:00.206 16:43:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo tcp 00:35:00.206 16:43:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 4420 00:35:00.206 16:43:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo ipv4 00:35:00.206 16:43:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:00.206 16:43:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:35:00.206 00:35:00.206 Discovery Log Number of Records 2, Generation counter 2 00:35:00.206 =====Discovery Log Entry 0====== 00:35:00.206 trtype: tcp 00:35:00.206 adrfam: ipv4 00:35:00.206 subtype: current discovery subsystem 
00:35:00.206 treq: not specified, sq flow control disable supported 00:35:00.206 portid: 1 00:35:00.206 trsvcid: 4420 00:35:00.206 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:00.206 traddr: 10.0.0.1 00:35:00.206 eflags: none 00:35:00.206 sectype: none 00:35:00.206 =====Discovery Log Entry 1====== 00:35:00.206 trtype: tcp 00:35:00.206 adrfam: ipv4 00:35:00.206 subtype: nvme subsystem 00:35:00.206 treq: not specified, sq flow control disable supported 00:35:00.206 portid: 1 00:35:00.206 trsvcid: 4420 00:35:00.206 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:00.206 traddr: 10.0.0.1 00:35:00.206 eflags: none 00:35:00.206 sectype: none 00:35:00.206 16:43:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:35:00.206 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:35:00.466 ===================================================== 00:35:00.466 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:35:00.466 ===================================================== 00:35:00.466 Controller Capabilities/Features 00:35:00.466 ================================ 00:35:00.466 Vendor ID: 0000 00:35:00.466 Subsystem Vendor ID: 0000 00:35:00.466 Serial Number: 9f02beb34d216fec365a 00:35:00.466 Model Number: Linux 00:35:00.466 Firmware Version: 6.8.9-20 00:35:00.466 Recommended Arb Burst: 0 00:35:00.466 IEEE OUI Identifier: 00 00 00 00:35:00.466 Multi-path I/O 00:35:00.466 May have multiple subsystem ports: No 00:35:00.466 May have multiple controllers: No 00:35:00.466 Associated with SR-IOV VF: No 00:35:00.466 Max Data Transfer Size: Unlimited 00:35:00.466 Max Number of Namespaces: 0 00:35:00.466 Max Number of I/O Queues: 1024 00:35:00.466 NVMe Specification Version (VS): 1.3 00:35:00.466 NVMe Specification Version (Identify): 1.3 00:35:00.466 Maximum Queue Entries: 1024 
00:35:00.466 Contiguous Queues Required: No 00:35:00.466 Arbitration Mechanisms Supported 00:35:00.466 Weighted Round Robin: Not Supported 00:35:00.466 Vendor Specific: Not Supported 00:35:00.466 Reset Timeout: 7500 ms 00:35:00.466 Doorbell Stride: 4 bytes 00:35:00.466 NVM Subsystem Reset: Not Supported 00:35:00.466 Command Sets Supported 00:35:00.466 NVM Command Set: Supported 00:35:00.466 Boot Partition: Not Supported 00:35:00.466 Memory Page Size Minimum: 4096 bytes 00:35:00.466 Memory Page Size Maximum: 4096 bytes 00:35:00.466 Persistent Memory Region: Not Supported 00:35:00.466 Optional Asynchronous Events Supported 00:35:00.466 Namespace Attribute Notices: Not Supported 00:35:00.466 Firmware Activation Notices: Not Supported 00:35:00.466 ANA Change Notices: Not Supported 00:35:00.466 PLE Aggregate Log Change Notices: Not Supported 00:35:00.466 LBA Status Info Alert Notices: Not Supported 00:35:00.466 EGE Aggregate Log Change Notices: Not Supported 00:35:00.466 Normal NVM Subsystem Shutdown event: Not Supported 00:35:00.466 Zone Descriptor Change Notices: Not Supported 00:35:00.466 Discovery Log Change Notices: Supported 00:35:00.466 Controller Attributes 00:35:00.466 128-bit Host Identifier: Not Supported 00:35:00.466 Non-Operational Permissive Mode: Not Supported 00:35:00.466 NVM Sets: Not Supported 00:35:00.466 Read Recovery Levels: Not Supported 00:35:00.466 Endurance Groups: Not Supported 00:35:00.466 Predictable Latency Mode: Not Supported 00:35:00.466 Traffic Based Keep ALive: Not Supported 00:35:00.466 Namespace Granularity: Not Supported 00:35:00.466 SQ Associations: Not Supported 00:35:00.466 UUID List: Not Supported 00:35:00.466 Multi-Domain Subsystem: Not Supported 00:35:00.466 Fixed Capacity Management: Not Supported 00:35:00.466 Variable Capacity Management: Not Supported 00:35:00.466 Delete Endurance Group: Not Supported 00:35:00.466 Delete NVM Set: Not Supported 00:35:00.466 Extended LBA Formats Supported: Not Supported 00:35:00.466 Flexible 
Data Placement Supported: Not Supported 00:35:00.466 00:35:00.466 Controller Memory Buffer Support 00:35:00.466 ================================ 00:35:00.466 Supported: No 00:35:00.466 00:35:00.466 Persistent Memory Region Support 00:35:00.466 ================================ 00:35:00.466 Supported: No 00:35:00.466 00:35:00.466 Admin Command Set Attributes 00:35:00.466 ============================ 00:35:00.466 Security Send/Receive: Not Supported 00:35:00.466 Format NVM: Not Supported 00:35:00.466 Firmware Activate/Download: Not Supported 00:35:00.466 Namespace Management: Not Supported 00:35:00.466 Device Self-Test: Not Supported 00:35:00.466 Directives: Not Supported 00:35:00.466 NVMe-MI: Not Supported 00:35:00.466 Virtualization Management: Not Supported 00:35:00.466 Doorbell Buffer Config: Not Supported 00:35:00.467 Get LBA Status Capability: Not Supported 00:35:00.467 Command & Feature Lockdown Capability: Not Supported 00:35:00.467 Abort Command Limit: 1 00:35:00.467 Async Event Request Limit: 1 00:35:00.467 Number of Firmware Slots: N/A 00:35:00.467 Firmware Slot 1 Read-Only: N/A 00:35:00.467 Firmware Activation Without Reset: N/A 00:35:00.467 Multiple Update Detection Support: N/A 00:35:00.467 Firmware Update Granularity: No Information Provided 00:35:00.467 Per-Namespace SMART Log: No 00:35:00.467 Asymmetric Namespace Access Log Page: Not Supported 00:35:00.467 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:35:00.467 Command Effects Log Page: Not Supported 00:35:00.467 Get Log Page Extended Data: Supported 00:35:00.467 Telemetry Log Pages: Not Supported 00:35:00.467 Persistent Event Log Pages: Not Supported 00:35:00.467 Supported Log Pages Log Page: May Support 00:35:00.467 Commands Supported & Effects Log Page: Not Supported 00:35:00.467 Feature Identifiers & Effects Log Page:May Support 00:35:00.467 NVMe-MI Commands & Effects Log Page: May Support 00:35:00.467 Data Area 4 for Telemetry Log: Not Supported 00:35:00.467 Error Log Page Entries 
Supported: 1 00:35:00.467 Keep Alive: Not Supported 00:35:00.467 00:35:00.467 NVM Command Set Attributes 00:35:00.467 ========================== 00:35:00.467 Submission Queue Entry Size 00:35:00.467 Max: 1 00:35:00.467 Min: 1 00:35:00.467 Completion Queue Entry Size 00:35:00.467 Max: 1 00:35:00.467 Min: 1 00:35:00.467 Number of Namespaces: 0 00:35:00.467 Compare Command: Not Supported 00:35:00.467 Write Uncorrectable Command: Not Supported 00:35:00.467 Dataset Management Command: Not Supported 00:35:00.467 Write Zeroes Command: Not Supported 00:35:00.467 Set Features Save Field: Not Supported 00:35:00.467 Reservations: Not Supported 00:35:00.467 Timestamp: Not Supported 00:35:00.467 Copy: Not Supported 00:35:00.467 Volatile Write Cache: Not Present 00:35:00.467 Atomic Write Unit (Normal): 1 00:35:00.467 Atomic Write Unit (PFail): 1 00:35:00.467 Atomic Compare & Write Unit: 1 00:35:00.467 Fused Compare & Write: Not Supported 00:35:00.467 Scatter-Gather List 00:35:00.467 SGL Command Set: Supported 00:35:00.467 SGL Keyed: Not Supported 00:35:00.467 SGL Bit Bucket Descriptor: Not Supported 00:35:00.467 SGL Metadata Pointer: Not Supported 00:35:00.467 Oversized SGL: Not Supported 00:35:00.467 SGL Metadata Address: Not Supported 00:35:00.467 SGL Offset: Supported 00:35:00.467 Transport SGL Data Block: Not Supported 00:35:00.467 Replay Protected Memory Block: Not Supported 00:35:00.467 00:35:00.467 Firmware Slot Information 00:35:00.467 ========================= 00:35:00.467 Active slot: 0 00:35:00.467 00:35:00.467 00:35:00.467 Error Log 00:35:00.467 ========= 00:35:00.467 00:35:00.467 Active Namespaces 00:35:00.467 ================= 00:35:00.467 Discovery Log Page 00:35:00.467 ================== 00:35:00.467 Generation Counter: 2 00:35:00.467 Number of Records: 2 00:35:00.467 Record Format: 0 00:35:00.467 00:35:00.467 Discovery Log Entry 0 00:35:00.467 ---------------------- 00:35:00.467 Transport Type: 3 (TCP) 00:35:00.467 Address Family: 1 (IPv4) 00:35:00.467 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:35:00.467 Entry Flags: 00:35:00.467 Duplicate Returned Information: 0 00:35:00.467 Explicit Persistent Connection Support for Discovery: 0 00:35:00.467 Transport Requirements: 00:35:00.467 Secure Channel: Not Specified 00:35:00.467 Port ID: 1 (0x0001) 00:35:00.467 Controller ID: 65535 (0xffff) 00:35:00.467 Admin Max SQ Size: 32 00:35:00.467 Transport Service Identifier: 4420 00:35:00.467 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:35:00.467 Transport Address: 10.0.0.1 00:35:00.467 Discovery Log Entry 1 00:35:00.467 ---------------------- 00:35:00.467 Transport Type: 3 (TCP) 00:35:00.467 Address Family: 1 (IPv4) 00:35:00.467 Subsystem Type: 2 (NVM Subsystem) 00:35:00.467 Entry Flags: 00:35:00.467 Duplicate Returned Information: 0 00:35:00.467 Explicit Persistent Connection Support for Discovery: 0 00:35:00.467 Transport Requirements: 00:35:00.467 Secure Channel: Not Specified 00:35:00.467 Port ID: 1 (0x0001) 00:35:00.467 Controller ID: 65535 (0xffff) 00:35:00.467 Admin Max SQ Size: 32 00:35:00.467 Transport Service Identifier: 4420 00:35:00.467 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:35:00.467 Transport Address: 10.0.0.1 00:35:00.467 16:43:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:00.467 get_feature(0x01) failed 00:35:00.467 get_feature(0x02) failed 00:35:00.467 get_feature(0x04) failed 00:35:00.467 ===================================================== 00:35:00.467 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:00.467 ===================================================== 00:35:00.467 Controller Capabilities/Features 00:35:00.467 ================================ 00:35:00.467 Vendor ID: 0000 00:35:00.467 Subsystem Vendor ID: 
0000 00:35:00.467 Serial Number: b098a759bf279f0e89b6 00:35:00.467 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:35:00.467 Firmware Version: 6.8.9-20 00:35:00.467 Recommended Arb Burst: 6 00:35:00.467 IEEE OUI Identifier: 00 00 00 00:35:00.467 Multi-path I/O 00:35:00.467 May have multiple subsystem ports: Yes 00:35:00.467 May have multiple controllers: Yes 00:35:00.467 Associated with SR-IOV VF: No 00:35:00.467 Max Data Transfer Size: Unlimited 00:35:00.467 Max Number of Namespaces: 1024 00:35:00.467 Max Number of I/O Queues: 128 00:35:00.467 NVMe Specification Version (VS): 1.3 00:35:00.467 NVMe Specification Version (Identify): 1.3 00:35:00.467 Maximum Queue Entries: 1024 00:35:00.467 Contiguous Queues Required: No 00:35:00.467 Arbitration Mechanisms Supported 00:35:00.467 Weighted Round Robin: Not Supported 00:35:00.467 Vendor Specific: Not Supported 00:35:00.467 Reset Timeout: 7500 ms 00:35:00.467 Doorbell Stride: 4 bytes 00:35:00.467 NVM Subsystem Reset: Not Supported 00:35:00.467 Command Sets Supported 00:35:00.467 NVM Command Set: Supported 00:35:00.467 Boot Partition: Not Supported 00:35:00.467 Memory Page Size Minimum: 4096 bytes 00:35:00.467 Memory Page Size Maximum: 4096 bytes 00:35:00.467 Persistent Memory Region: Not Supported 00:35:00.467 Optional Asynchronous Events Supported 00:35:00.467 Namespace Attribute Notices: Supported 00:35:00.467 Firmware Activation Notices: Not Supported 00:35:00.467 ANA Change Notices: Supported 00:35:00.467 PLE Aggregate Log Change Notices: Not Supported 00:35:00.467 LBA Status Info Alert Notices: Not Supported 00:35:00.467 EGE Aggregate Log Change Notices: Not Supported 00:35:00.467 Normal NVM Subsystem Shutdown event: Not Supported 00:35:00.467 Zone Descriptor Change Notices: Not Supported 00:35:00.467 Discovery Log Change Notices: Not Supported 00:35:00.467 Controller Attributes 00:35:00.467 128-bit Host Identifier: Supported 00:35:00.467 Non-Operational Permissive Mode: Not Supported 00:35:00.467 NVM Sets: Not 
Supported 00:35:00.467 Read Recovery Levels: Not Supported 00:35:00.467 Endurance Groups: Not Supported 00:35:00.467 Predictable Latency Mode: Not Supported 00:35:00.467 Traffic Based Keep ALive: Supported 00:35:00.467 Namespace Granularity: Not Supported 00:35:00.467 SQ Associations: Not Supported 00:35:00.467 UUID List: Not Supported 00:35:00.467 Multi-Domain Subsystem: Not Supported 00:35:00.467 Fixed Capacity Management: Not Supported 00:35:00.467 Variable Capacity Management: Not Supported 00:35:00.467 Delete Endurance Group: Not Supported 00:35:00.467 Delete NVM Set: Not Supported 00:35:00.467 Extended LBA Formats Supported: Not Supported 00:35:00.467 Flexible Data Placement Supported: Not Supported 00:35:00.467 00:35:00.467 Controller Memory Buffer Support 00:35:00.467 ================================ 00:35:00.467 Supported: No 00:35:00.467 00:35:00.467 Persistent Memory Region Support 00:35:00.467 ================================ 00:35:00.467 Supported: No 00:35:00.467 00:35:00.467 Admin Command Set Attributes 00:35:00.467 ============================ 00:35:00.467 Security Send/Receive: Not Supported 00:35:00.467 Format NVM: Not Supported 00:35:00.467 Firmware Activate/Download: Not Supported 00:35:00.467 Namespace Management: Not Supported 00:35:00.467 Device Self-Test: Not Supported 00:35:00.467 Directives: Not Supported 00:35:00.467 NVMe-MI: Not Supported 00:35:00.467 Virtualization Management: Not Supported 00:35:00.467 Doorbell Buffer Config: Not Supported 00:35:00.467 Get LBA Status Capability: Not Supported 00:35:00.467 Command & Feature Lockdown Capability: Not Supported 00:35:00.467 Abort Command Limit: 4 00:35:00.467 Async Event Request Limit: 4 00:35:00.467 Number of Firmware Slots: N/A 00:35:00.467 Firmware Slot 1 Read-Only: N/A 00:35:00.467 Firmware Activation Without Reset: N/A 00:35:00.467 Multiple Update Detection Support: N/A 00:35:00.468 Firmware Update Granularity: No Information Provided 00:35:00.468 Per-Namespace SMART Log: Yes 
00:35:00.468 Asymmetric Namespace Access Log Page: Supported 00:35:00.468 ANA Transition Time : 10 sec 00:35:00.468 00:35:00.468 Asymmetric Namespace Access Capabilities 00:35:00.468 ANA Optimized State : Supported 00:35:00.468 ANA Non-Optimized State : Supported 00:35:00.468 ANA Inaccessible State : Supported 00:35:00.468 ANA Persistent Loss State : Supported 00:35:00.468 ANA Change State : Supported 00:35:00.468 ANAGRPID is not changed : No 00:35:00.468 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:35:00.468 00:35:00.468 ANA Group Identifier Maximum : 128 00:35:00.468 Number of ANA Group Identifiers : 128 00:35:00.468 Max Number of Allowed Namespaces : 1024 00:35:00.468 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:35:00.468 Command Effects Log Page: Supported 00:35:00.468 Get Log Page Extended Data: Supported 00:35:00.468 Telemetry Log Pages: Not Supported 00:35:00.468 Persistent Event Log Pages: Not Supported 00:35:00.468 Supported Log Pages Log Page: May Support 00:35:00.468 Commands Supported & Effects Log Page: Not Supported 00:35:00.468 Feature Identifiers & Effects Log Page:May Support 00:35:00.468 NVMe-MI Commands & Effects Log Page: May Support 00:35:00.468 Data Area 4 for Telemetry Log: Not Supported 00:35:00.468 Error Log Page Entries Supported: 128 00:35:00.468 Keep Alive: Supported 00:35:00.468 Keep Alive Granularity: 1000 ms 00:35:00.468 00:35:00.468 NVM Command Set Attributes 00:35:00.468 ========================== 00:35:00.468 Submission Queue Entry Size 00:35:00.468 Max: 64 00:35:00.468 Min: 64 00:35:00.468 Completion Queue Entry Size 00:35:00.468 Max: 16 00:35:00.468 Min: 16 00:35:00.468 Number of Namespaces: 1024 00:35:00.468 Compare Command: Not Supported 00:35:00.468 Write Uncorrectable Command: Not Supported 00:35:00.468 Dataset Management Command: Supported 00:35:00.468 Write Zeroes Command: Supported 00:35:00.468 Set Features Save Field: Not Supported 00:35:00.468 Reservations: Not Supported 00:35:00.468 Timestamp: Not Supported 
00:35:00.468 Copy: Not Supported 00:35:00.468 Volatile Write Cache: Present 00:35:00.468 Atomic Write Unit (Normal): 1 00:35:00.468 Atomic Write Unit (PFail): 1 00:35:00.468 Atomic Compare & Write Unit: 1 00:35:00.468 Fused Compare & Write: Not Supported 00:35:00.468 Scatter-Gather List 00:35:00.468 SGL Command Set: Supported 00:35:00.468 SGL Keyed: Not Supported 00:35:00.468 SGL Bit Bucket Descriptor: Not Supported 00:35:00.468 SGL Metadata Pointer: Not Supported 00:35:00.468 Oversized SGL: Not Supported 00:35:00.468 SGL Metadata Address: Not Supported 00:35:00.468 SGL Offset: Supported 00:35:00.468 Transport SGL Data Block: Not Supported 00:35:00.468 Replay Protected Memory Block: Not Supported 00:35:00.468 00:35:00.468 Firmware Slot Information 00:35:00.468 ========================= 00:35:00.468 Active slot: 0 00:35:00.468 00:35:00.468 Asymmetric Namespace Access 00:35:00.468 =========================== 00:35:00.468 Change Count : 0 00:35:00.468 Number of ANA Group Descriptors : 1 00:35:00.468 ANA Group Descriptor : 0 00:35:00.468 ANA Group ID : 1 00:35:00.468 Number of NSID Values : 1 00:35:00.468 Change Count : 0 00:35:00.468 ANA State : 1 00:35:00.468 Namespace Identifier : 1 00:35:00.468 00:35:00.468 Commands Supported and Effects 00:35:00.468 ============================== 00:35:00.468 Admin Commands 00:35:00.468 -------------- 00:35:00.468 Get Log Page (02h): Supported 00:35:00.468 Identify (06h): Supported 00:35:00.468 Abort (08h): Supported 00:35:00.468 Set Features (09h): Supported 00:35:00.468 Get Features (0Ah): Supported 00:35:00.468 Asynchronous Event Request (0Ch): Supported 00:35:00.468 Keep Alive (18h): Supported 00:35:00.468 I/O Commands 00:35:00.468 ------------ 00:35:00.468 Flush (00h): Supported 00:35:00.468 Write (01h): Supported LBA-Change 00:35:00.468 Read (02h): Supported 00:35:00.468 Write Zeroes (08h): Supported LBA-Change 00:35:00.468 Dataset Management (09h): Supported 00:35:00.468 00:35:00.468 Error Log 00:35:00.468 ========= 
00:35:00.468 Entry: 0 00:35:00.468 Error Count: 0x3 00:35:00.468 Submission Queue Id: 0x0 00:35:00.468 Command Id: 0x5 00:35:00.468 Phase Bit: 0 00:35:00.468 Status Code: 0x2 00:35:00.468 Status Code Type: 0x0 00:35:00.468 Do Not Retry: 1 00:35:00.468 Error Location: 0x28 00:35:00.468 LBA: 0x0 00:35:00.468 Namespace: 0x0 00:35:00.468 Vendor Log Page: 0x0 00:35:00.468 ----------- 00:35:00.468 Entry: 1 00:35:00.468 Error Count: 0x2 00:35:00.468 Submission Queue Id: 0x0 00:35:00.468 Command Id: 0x5 00:35:00.468 Phase Bit: 0 00:35:00.468 Status Code: 0x2 00:35:00.468 Status Code Type: 0x0 00:35:00.468 Do Not Retry: 1 00:35:00.468 Error Location: 0x28 00:35:00.468 LBA: 0x0 00:35:00.468 Namespace: 0x0 00:35:00.468 Vendor Log Page: 0x0 00:35:00.468 ----------- 00:35:00.468 Entry: 2 00:35:00.468 Error Count: 0x1 00:35:00.468 Submission Queue Id: 0x0 00:35:00.468 Command Id: 0x4 00:35:00.468 Phase Bit: 0 00:35:00.468 Status Code: 0x2 00:35:00.468 Status Code Type: 0x0 00:35:00.468 Do Not Retry: 1 00:35:00.468 Error Location: 0x28 00:35:00.468 LBA: 0x0 00:35:00.468 Namespace: 0x0 00:35:00.468 Vendor Log Page: 0x0 00:35:00.468 00:35:00.468 Number of Queues 00:35:00.468 ================ 00:35:00.468 Number of I/O Submission Queues: 128 00:35:00.468 Number of I/O Completion Queues: 128 00:35:00.468 00:35:00.468 ZNS Specific Controller Data 00:35:00.468 ============================ 00:35:00.468 Zone Append Size Limit: 0 00:35:00.468 00:35:00.468 00:35:00.468 Active Namespaces 00:35:00.468 ================= 00:35:00.468 get_feature(0x05) failed 00:35:00.468 Namespace ID:1 00:35:00.468 Command Set Identifier: NVM (00h) 00:35:00.468 Deallocate: Supported 00:35:00.468 Deallocated/Unwritten Error: Not Supported 00:35:00.468 Deallocated Read Value: Unknown 00:35:00.468 Deallocate in Write Zeroes: Not Supported 00:35:00.468 Deallocated Guard Field: 0xFFFF 00:35:00.468 Flush: Supported 00:35:00.468 Reservation: Not Supported 00:35:00.468 Namespace Sharing Capabilities: Multiple 
Controllers 00:35:00.468 Size (in LBAs): 1953525168 (931GiB) 00:35:00.468 Capacity (in LBAs): 1953525168 (931GiB) 00:35:00.468 Utilization (in LBAs): 1953525168 (931GiB) 00:35:00.468 UUID: 24c47205-5637-4ad1-938b-abefbbfa84ae 00:35:00.468 Thin Provisioning: Not Supported 00:35:00.468 Per-NS Atomic Units: Yes 00:35:00.468 Atomic Boundary Size (Normal): 0 00:35:00.468 Atomic Boundary Size (PFail): 0 00:35:00.468 Atomic Boundary Offset: 0 00:35:00.468 NGUID/EUI64 Never Reused: No 00:35:00.468 ANA group ID: 1 00:35:00.468 Namespace Write Protected: No 00:35:00.468 Number of LBA Formats: 1 00:35:00.468 Current LBA Format: LBA Format #00 00:35:00.468 LBA Format #00: Data Size: 512 Metadata Size: 0 00:35:00.468 00:35:00.468 16:43:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:35:00.468 16:43:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:35:00.468 16:43:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:35:00.468 16:43:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:00.468 16:43:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:35:00.468 16:43:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:00.468 16:43:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:00.468 rmmod nvme_tcp 00:35:00.468 rmmod nvme_fabrics 00:35:00.468 16:43:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:00.468 16:43:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:35:00.468 16:43:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:35:00.468 16:43:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@513 -- # '[' -n '' ']' 
00:35:00.468 16:43:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:35:00.468 16:43:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:35:00.468 16:43:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:35:00.468 16:43:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:35:00.468 16:43:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-save 00:35:00.728 16:43:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:35:00.728 16:43:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-restore 00:35:00.728 16:43:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:00.728 16:43:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:00.728 16:43:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:00.728 16:43:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:00.728 16:43:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:02.631 16:43:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:02.631 16:43:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:35:02.631 16:43:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:02.631 16:43:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # echo 0 00:35:02.631 16:43:03 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:02.632 16:43:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:02.632 16:43:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:02.632 16:43:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:02.632 16:43:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:35:02.632 16:43:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:35:02.632 16:43:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@722 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:04.008 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:04.008 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:04.008 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:04.008 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:04.008 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:04.008 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:04.008 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:04.008 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:04.008 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:04.008 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:04.008 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:04.008 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:04.008 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:04.008 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:04.008 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:04.008 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 
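The trace above (nvmf/common.sh@682-701 for setup, @708-719 for teardown) drives the Linux `nvmet` configfs interface directly. A condensed sketch of that sequence follows; note this is a reconstruction, not the script itself: xtrace does not show redirection targets, so the configfs attribute filenames (`attr_model`, `attr_allow_any_host`, `device_path`, `enable`, `addr_*`) are inferred from the echoed values and the standard nvmet configfs layout. It requires root, a loaded `nvmet`/`nvmet_tcp` stack, and a free namespace device, so treat it as a configuration fragment rather than a portable script.

```shell
#!/usr/bin/env bash
# Sketch of the kernel NVMe-oF/TCP target lifecycle seen in the log.
# Assumed: root, nvmet + nvmet_tcp modules loaded, $nvme not in use.
nqn=nqn.2016-06.io.spdk:testnqn
cfg=/sys/kernel/config/nvmet
nvme=/dev/nvme0n1    # device chosen by the log's GPT/zoned checks

# --- setup (mirrors nvmf/common.sh@682-701) ---
mkdir "$cfg/subsystems/$nqn"
mkdir "$cfg/subsystems/$nqn/namespaces/1"
mkdir "$cfg/ports/1"
echo "SPDK-$nqn" > "$cfg/subsystems/$nqn/attr_model"           # inferred target
echo 1           > "$cfg/subsystems/$nqn/attr_allow_any_host"  # inferred target
echo "$nvme"     > "$cfg/subsystems/$nqn/namespaces/1/device_path"
echo 1           > "$cfg/subsystems/$nqn/namespaces/1/enable"
echo 10.0.0.1    > "$cfg/ports/1/addr_traddr"
echo tcp         > "$cfg/ports/1/addr_trtype"
echo 4420        > "$cfg/ports/1/addr_trsvcid"
echo ipv4        > "$cfg/ports/1/addr_adrfam"
ln -s "$cfg/subsystems/$nqn" "$cfg/ports/1/subsystems/"

# --- teardown (mirrors nvmf/common.sh@708-719) ---
echo 0 > "$cfg/subsystems/$nqn/namespaces/1/enable"
rm -f "$cfg/ports/1/subsystems/$nqn"
rmdir "$cfg/subsystems/$nqn/namespaces/1"
rmdir "$cfg/ports/1"
rmdir "$cfg/subsystems/$nqn"
modprobe -r nvmet_tcp nvmet
```

With the port symlinked, the `nvme discover -a 10.0.0.1 -t tcp -s 4420` call in the log returns the two records shown: the discovery subsystem itself and `nqn.2016-06.io.spdk:testnqn`.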
00:35:04.944 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:35:04.944 00:35:04.944 real 0m9.657s 00:35:04.944 user 0m2.076s 00:35:04.944 sys 0m3.600s 00:35:04.944 16:43:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:04.944 16:43:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:04.944 ************************************ 00:35:04.944 END TEST nvmf_identify_kernel_target 00:35:04.944 ************************************ 00:35:04.944 16:43:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:04.944 16:43:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:04.944 16:43:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:04.944 16:43:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.944 ************************************ 00:35:04.944 START TEST nvmf_auth_host 00:35:04.944 ************************************ 00:35:04.944 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:05.204 * Looking for test storage... 
00:35:05.204 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lcov --version 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:05.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:05.205 --rc genhtml_branch_coverage=1 00:35:05.205 --rc genhtml_function_coverage=1 00:35:05.205 --rc genhtml_legend=1 00:35:05.205 --rc geninfo_all_blocks=1 00:35:05.205 --rc geninfo_unexecuted_blocks=1 00:35:05.205 00:35:05.205 ' 00:35:05.205 16:43:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:05.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:05.205 --rc genhtml_branch_coverage=1 00:35:05.205 --rc genhtml_function_coverage=1 00:35:05.205 --rc genhtml_legend=1 00:35:05.205 --rc geninfo_all_blocks=1 00:35:05.205 --rc geninfo_unexecuted_blocks=1 00:35:05.205 00:35:05.205 ' 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:05.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:05.205 --rc genhtml_branch_coverage=1 00:35:05.205 --rc genhtml_function_coverage=1 00:35:05.205 --rc genhtml_legend=1 00:35:05.205 --rc geninfo_all_blocks=1 00:35:05.205 --rc geninfo_unexecuted_blocks=1 00:35:05.205 00:35:05.205 ' 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:05.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:05.205 --rc genhtml_branch_coverage=1 00:35:05.205 --rc genhtml_function_coverage=1 00:35:05.205 --rc genhtml_legend=1 00:35:05.205 --rc geninfo_all_blocks=1 00:35:05.205 --rc geninfo_unexecuted_blocks=1 00:35:05.205 00:35:05.205 ' 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.205 16:43:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:05.205 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:05.205 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:35:05.206 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:35:05.206 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:35:05.206 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:35:05.206 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:05.206 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:35:05.206 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:35:05.206 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:35:05.206 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:05.206 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:05.206 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:05.206 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:35:05.206 16:43:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:35:05.206 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:35:05.206 16:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@365 -- # echo 
'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:07.158 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:07.158 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:07.158 
16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:07.158 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:07.158 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:07.159 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:07.159 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:07.159 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:07.159 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@428 -- # (( 2 == 0 )) 00:35:07.159 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # is_hw=yes 00:35:07.159 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:35:07.159 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:35:07.159 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:35:07.159 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:07.159 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:07.159 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:07.159 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:07.159 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:07.159 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:07.159 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:07.159 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:07.159 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:07.159 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:07.159 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:07.159 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:07.159 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:07.159 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns 
add cvl_0_0_ns_spdk 00:35:07.159 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:07.159 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:07.159 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:07.159 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:07.159 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:07.417 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:07.417 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:07.417 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:07.417 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:07.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:07.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:35:07.417 00:35:07.417 --- 10.0.0.2 ping statistics --- 00:35:07.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:07.418 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:35:07.418 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:07.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:07.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:35:07.418 00:35:07.418 --- 10.0.0.1 ping statistics --- 00:35:07.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:07.418 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:35:07.418 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:07.418 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # return 0 00:35:07.418 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:35:07.418 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:07.418 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:35:07.418 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:35:07.418 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:07.418 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:35:07.418 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:35:07.418 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:35:07.418 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:35:07.418 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:07.418 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.418 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # nvmfpid=3306700 00:35:07.418 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:35:07.418 16:43:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # waitforlisten 3306700 00:35:07.418 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 3306700 ']' 00:35:07.418 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:07.418 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:07.418 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:07.418 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:07.418 16:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.353 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:08.353 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:35:08.353 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:35:08.353 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:08.353 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.353 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:08.353 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:35:08.353 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:35:08.353 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:35:08.353 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:08.353 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:35:08.353 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:35:08.353 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:35:08.353 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:08.353 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=62f1eccf06e19e826446cb934c714517 00:35:08.353 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:35:08.353 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.W9G 00:35:08.353 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 62f1eccf06e19e826446cb934c714517 0 00:35:08.353 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 62f1eccf06e19e826446cb934c714517 0 00:35:08.353 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:35:08.353 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:35:08.353 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=62f1eccf06e19e826446cb934c714517 00:35:08.353 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:35:08.353 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:35:08.353 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.W9G 00:35:08.353 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.W9G 00:35:08.353 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.W9G 00:35:08.353 16:43:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:35:08.353 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:35:08.353 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:08.353 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:35:08.613 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:35:08.614 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:35:08.614 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:08.614 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=e1724025c2a3d7742138be4e9f60ab14bf49477dbe9d807ce6238b1ec6e72b25 00:35:08.614 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:35:08.614 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.H6e 00:35:08.614 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key e1724025c2a3d7742138be4e9f60ab14bf49477dbe9d807ce6238b1ec6e72b25 3 00:35:08.614 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 e1724025c2a3d7742138be4e9f60ab14bf49477dbe9d807ce6238b1ec6e72b25 3 00:35:08.614 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:35:08.614 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:35:08.614 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=e1724025c2a3d7742138be4e9f60ab14bf49477dbe9d807ce6238b1ec6e72b25 00:35:08.614 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:35:08.614 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 
00:35:08.614 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.H6e 00:35:08.614 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.H6e 00:35:08.614 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.H6e 00:35:08.614 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:35:08.614 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:35:08.614 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:08.614 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:35:08.614 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:35:08.614 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:35:08.614 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:08.614 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=25f01efc71cdfa87d972744ce62a3effbb65f269f604c379 00:35:08.614 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:35:08.614 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.qvB 00:35:08.614 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 25f01efc71cdfa87d972744ce62a3effbb65f269f604c379 0 00:35:08.614 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 25f01efc71cdfa87d972744ce62a3effbb65f269f604c379 0 00:35:08.614 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:35:08.614 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:35:08.614 16:43:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=25f01efc71cdfa87d972744ce62a3effbb65f269f604c379 00:35:08.614 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:35:08.614 16:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:35:08.614 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.qvB 00:35:08.614 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.qvB 00:35:08.614 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.qvB 00:35:08.614 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:35:08.614 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:35:08.614 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:08.614 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:35:08.614 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:35:08.614 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:35:08.614 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:08.615 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=c6fc3fec1aba35796bb6b1f8f1ef2947121da92dc14635cb 00:35:08.615 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:35:08.615 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.qsr 00:35:08.615 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key c6fc3fec1aba35796bb6b1f8f1ef2947121da92dc14635cb 2 00:35:08.615 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # 
format_key DHHC-1 c6fc3fec1aba35796bb6b1f8f1ef2947121da92dc14635cb 2 00:35:08.615 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:35:08.615 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:35:08.615 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=c6fc3fec1aba35796bb6b1f8f1ef2947121da92dc14635cb 00:35:08.615 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:35:08.615 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:35:08.615 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.qsr 00:35:08.615 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.qsr 00:35:08.615 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.qsr 00:35:08.615 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:08.615 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:35:08.615 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:08.615 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:35:08.615 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:35:08.615 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:35:08.615 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:08.615 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=3f967db3c5e611d3a17594446a285bfd 00:35:08.615 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:35:08.615 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.3zo 00:35:08.615 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 3f967db3c5e611d3a17594446a285bfd 1 00:35:08.615 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 3f967db3c5e611d3a17594446a285bfd 1 00:35:08.615 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:35:08.615 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:35:08.615 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=3f967db3c5e611d3a17594446a285bfd 00:35:08.615 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:35:08.615 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:35:08.615 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.3zo 00:35:08.615 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.3zo 00:35:08.615 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.3zo 00:35:08.615 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:08.615 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:35:08.615 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:08.615 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:35:08.615 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:35:08.615 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:35:08.615 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:08.615 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@751 -- # key=58234597427bd6978be612aada367da8 00:35:08.615 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:35:08.615 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.rVm 00:35:08.615 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 58234597427bd6978be612aada367da8 1 00:35:08.615 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 58234597427bd6978be612aada367da8 1 00:35:08.615 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:35:08.616 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:35:08.616 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=58234597427bd6978be612aada367da8 00:35:08.616 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:35:08.616 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:35:08.616 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.rVm 00:35:08.616 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.rVm 00:35:08.616 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.rVm 00:35:08.616 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:35:08.616 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:35:08.616 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:08.616 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:35:08.616 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:35:08.616 16:43:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:35:08.616 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:08.616 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=774433067ab54cb2d75b3c683c0a5c0a495e1bc539c0714f 00:35:08.616 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:35:08.616 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.e1F 00:35:08.616 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 774433067ab54cb2d75b3c683c0a5c0a495e1bc539c0714f 2 00:35:08.616 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 774433067ab54cb2d75b3c683c0a5c0a495e1bc539c0714f 2 00:35:08.616 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:35:08.616 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:35:08.616 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=774433067ab54cb2d75b3c683c0a5c0a495e1bc539c0714f 00:35:08.616 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:35:08.616 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.e1F 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.e1F 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.e1F 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=7413402aa003c5881217a14728c3514f 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.hvv 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 7413402aa003c5881217a14728c3514f 0 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 7413402aa003c5881217a14728c3514f 0 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=7413402aa003c5881217a14728c3514f 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.hvv 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.hvv 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.hvv 00:35:08.879 16:43:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=5c652720d3a2e0c7be487e336b0c83667a755ca8b4d844028d0264b44605df22 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.8ub 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 5c652720d3a2e0c7be487e336b0c83667a755ca8b4d844028d0264b44605df22 3 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 5c652720d3a2e0c7be487e336b0c83667a755ca8b4d844028d0264b44605df22 3 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=5c652720d3a2e0c7be487e336b0c83667a755ca8b4d844028d0264b44605df22 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 
00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.8ub 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.8ub 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.8ub 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3306700 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 3306700 ']' 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:08.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
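The `gen_dhchap_key` / `format_dhchap_key` calls traced above turn random bytes from `/dev/urandom` into an NVMe DH-HMAC-CHAP secret of the form `DHHC-1:NN:<base64>:`, where `NN` is the digest index from the `digests` map (`null`=0, `sha256`=1, `sha384`=2, `sha512`=3). A minimal Python sketch of what the inline `python -` step appears to do — the CRC32 suffix and its little-endian byte order are assumptions inferred from the key strings in the trace, not shown verbatim in the log:

```python
import base64
import os
import zlib

def format_dhchap_key(key: str, digest: int, prefix: str = "DHHC-1") -> str:
    # PREFIX:NN:base64(secret-bytes || CRC32(secret)):
    # The trace feeds the ASCII hex string itself as the secret bytes.
    data = key.encode()
    crc = zlib.crc32(data).to_bytes(4, "little")  # assumed little-endian
    return "{}:{:02x}:{}:".format(prefix, digest, base64.b64encode(data + crc).decode())

def gen_dhchap_key(length: int) -> str:
    # mirrors `xxd -p -c0 -l <length/2> /dev/urandom`: `length` hex characters
    return os.urandom(length // 2).hex()

key = gen_dhchap_key(48)          # e.g. the sha384/48 keys in the trace
secret = format_dhchap_key(key, 2)  # digest index 2 == sha384
```

Decoding one of the logged secrets (`DHHC-1:00:MjVmMDFl...ZA==:`) recovers the ASCII hex key `25f01efc...` followed by four trailing checksum bytes, which is what the sketch above reproduces.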
00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:08.879 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.137 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:09.137 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:35:09.137 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:09.137 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.W9G 00:35:09.137 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:09.137 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.137 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:09.137 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.H6e ]] 00:35:09.137 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.H6e 00:35:09.137 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:09.137 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.137 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:09.137 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:09.137 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.qvB 00:35:09.137 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:09.137 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:35:09.137 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:09.137 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.qsr ]] 00:35:09.137 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.qsr 00:35:09.137 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:09.137 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.137 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:09.137 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:09.137 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.3zo 00:35:09.137 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:09.137 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.137 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:09.137 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.rVm ]] 00:35:09.137 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rVm 00:35:09.137 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:09.137 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.137 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:09.137 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:09.137 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.e1F 00:35:09.138 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:09.138 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.138 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:09.138 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.hvv ]] 00:35:09.138 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.hvv 00:35:09.138 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:09.138 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.138 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:09.138 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:09.138 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.8ub 00:35:09.138 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:09.138 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.138 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:09.138 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:35:09.138 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:35:09.138 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:35:09.138 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:09.138 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:09.138 16:43:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:09.138 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.138 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.138 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:09.138 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:09.138 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:09.138 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:09.138 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:09.138 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:35:09.138 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:35:09.138 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:35:09.138 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:09.138 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:09.138 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:09.138 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # local block nvme 00:35:09.138 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # [[ ! 
-e /sys/module/nvmet ]] 00:35:09.138 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@666 -- # modprobe nvmet 00:35:09.138 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:09.138 16:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:10.511 Waiting for block devices as requested 00:35:10.511 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:10.511 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:10.511 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:10.769 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:10.769 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:10.769 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:10.769 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:11.028 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:11.028 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:11.028 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:11.028 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:11.286 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:11.286 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:11.286 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:11.286 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:11.286 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:11.544 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:11.802 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:35:11.802 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:11.802 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:35:11.802 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:35:11.802 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:35:11.802 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:35:11.802 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:35:11.802 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:11.802 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:12.061 No valid GPT data, bailing 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # [[ -b /dev/nvme0n1 ]] 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo 1 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@692 -- # echo /dev/nvme0n1 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 
-- # echo 10.0.0.1 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo tcp 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 4420 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo ipv4 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:35:12.061 00:35:12.061 Discovery Log Number of Records 2, Generation counter 2 00:35:12.061 =====Discovery Log Entry 0====== 00:35:12.061 trtype: tcp 00:35:12.061 adrfam: ipv4 00:35:12.061 subtype: current discovery subsystem 00:35:12.061 treq: not specified, sq flow control disable supported 00:35:12.061 portid: 1 00:35:12.061 trsvcid: 4420 00:35:12.061 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:12.061 traddr: 10.0.0.1 00:35:12.061 eflags: none 00:35:12.061 sectype: none 00:35:12.061 =====Discovery Log Entry 1====== 00:35:12.061 trtype: tcp 00:35:12.061 adrfam: ipv4 00:35:12.061 subtype: nvme subsystem 00:35:12.061 treq: not specified, sq flow control disable supported 00:35:12.061 portid: 1 00:35:12.061 trsvcid: 4420 00:35:12.061 subnqn: nqn.2024-02.io.spdk:cnode0 00:35:12.061 traddr: 10.0.0.1 00:35:12.061 eflags: none 00:35:12.061 sectype: none 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjVmMDFlZmM3MWNkZmE4N2Q5NzI3NDRjZTYyYTNlZmZiYjY1ZjI2OWY2MDRjMzc5d3s3ZA==: 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjVmMDFlZmM3MWNkZmE4N2Q5NzI3NDRjZTYyYTNlZmZiYjY1ZjI2OWY2MDRjMzc5d3s3ZA==: 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: ]] 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.061 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.320 nvme0n1 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjJmMWVjY2YwNmUxOWU4MjY0NDZjYjkzNGM3MTQ1MTcu6tsr: 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTE3MjQwMjVjMmEzZDc3NDIxMzhiZTRlOWY2MGFiMTRiZjQ5NDc3ZGJlOWQ4MDdjZTYyMzhiMWVjNmU3MmIyNUyXBzU=: 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjJmMWVjY2YwNmUxOWU4MjY0NDZjYjkzNGM3MTQ1MTcu6tsr: 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTE3MjQwMjVjMmEzZDc3NDIxMzhiZTRlOWY2MGFiMTRiZjQ5NDc3ZGJlOWQ4MDdjZTYyMzhiMWVjNmU3MmIyNUyXBzU=: ]] 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZTE3MjQwMjVjMmEzZDc3NDIxMzhiZTRlOWY2MGFiMTRiZjQ5NDc3ZGJlOWQ4MDdjZTYyMzhiMWVjNmU3MmIyNUyXBzU=: 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 
00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.320 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.579 nvme0n1 00:35:12.579 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.579 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:12.579 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.579 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.579 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:12.579 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.579 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:12.579 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:12.579 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.579 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.579 16:43:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.579 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:12.579 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:12.579 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:12.579 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:12.579 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:12.579 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:12.579 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjVmMDFlZmM3MWNkZmE4N2Q5NzI3NDRjZTYyYTNlZmZiYjY1ZjI2OWY2MDRjMzc5d3s3ZA==: 00:35:12.579 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: 00:35:12.579 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:12.579 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:12.579 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjVmMDFlZmM3MWNkZmE4N2Q5NzI3NDRjZTYyYTNlZmZiYjY1ZjI2OWY2MDRjMzc5d3s3ZA==: 00:35:12.579 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: ]] 00:35:12.579 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: 00:35:12.579 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:35:12.579 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:12.579 
16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:12.579 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:12.579 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:12.579 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:12.579 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:12.579 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.579 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.579 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.579 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:12.579 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:12.579 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:12.579 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:12.579 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:12.579 16:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:12.579 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:12.579 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:12.579 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:12.579 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:12.579 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:12.579 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:12.579 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.579 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.838 nvme0n1 00:35:12.838 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.838 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:12.838 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.838 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:12.838 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.838 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.838 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:12.838 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:12.838 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.838 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.838 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.838 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:12.838 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:12.838 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:12.838 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:12.838 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:12.838 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:12.838 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2Y5NjdkYjNjNWU2MTFkM2ExNzU5NDQ0NmEyODViZmRPiSXu: 00:35:12.838 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: 00:35:12.838 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:12.838 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:12.838 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2Y5NjdkYjNjNWU2MTFkM2ExNzU5NDQ0NmEyODViZmRPiSXu: 00:35:12.839 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: ]] 00:35:12.839 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: 00:35:12.839 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:35:12.839 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:12.839 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:12.839 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:12.839 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:12.839 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:12.839 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:12.839 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.839 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.839 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.839 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:12.839 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:12.839 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:12.839 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:12.839 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:12.839 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:12.839 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:12.839 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:12.839 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:12.839 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:12.839 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:12.839 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:12.839 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.839 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:35:13.097 nvme0n1 00:35:13.097 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.097 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:13.097 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:13.097 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.097 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.097 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.097 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:13.097 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:13.097 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.097 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.097 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.097 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:13.097 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:35:13.097 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:13.097 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:13.097 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:13.097 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:13.097 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:Nzc0NDMzMDY3YWI1NGNiMmQ3NWIzYzY4M2MwYTVjMGE0OTVlMWJjNTM5YzA3MTRmegWaDg==: 00:35:13.097 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQxMzQwMmFhMDAzYzU4ODEyMTdhMTQ3MjhjMzUxNGYXUSx4: 00:35:13.097 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:13.097 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:13.097 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzc0NDMzMDY3YWI1NGNiMmQ3NWIzYzY4M2MwYTVjMGE0OTVlMWJjNTM5YzA3MTRmegWaDg==: 00:35:13.097 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQxMzQwMmFhMDAzYzU4ODEyMTdhMTQ3MjhjMzUxNGYXUSx4: ]] 00:35:13.097 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQxMzQwMmFhMDAzYzU4ODEyMTdhMTQ3MjhjMzUxNGYXUSx4: 00:35:13.097 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:35:13.097 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:13.097 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:13.097 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:13.097 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:13.097 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:13.097 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:13.097 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.097 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.097 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.097 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:13.097 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:13.097 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:13.097 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:13.097 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:13.097 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:13.097 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:13.098 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:13.098 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:13.098 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:13.098 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:13.098 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:13.098 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.098 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.098 nvme0n1 00:35:13.098 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.098 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:13.098 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 
-- # xtrace_disable 00:35:13.098 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:13.098 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.098 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.356 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:13.356 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:13.356 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.356 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.356 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.356 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:13.356 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:35:13.356 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:13.356 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:13.356 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:13.356 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:13.356 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWM2NTI3MjBkM2EyZTBjN2JlNDg3ZTMzNmIwYzgzNjY3YTc1NWNhOGI0ZDg0NDAyOGQwMjY0YjQ0NjA1ZGYyMlmgPtc=: 00:35:13.356 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:13.356 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:13.356 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:13.356 16:43:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWM2NTI3MjBkM2EyZTBjN2JlNDg3ZTMzNmIwYzgzNjY3YTc1NWNhOGI0ZDg0NDAyOGQwMjY0YjQ0NjA1ZGYyMlmgPtc=: 00:35:13.356 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:13.356 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:35:13.356 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:13.356 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:13.356 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:13.356 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:13.357 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:13.357 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:13.357 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.357 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.357 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.357 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:13.357 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:13.357 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:13.357 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:13.357 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:13.357 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:13.357 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:13.357 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:13.357 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:13.357 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:13.357 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:13.357 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:13.357 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.357 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.357 nvme0n1 00:35:13.357 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.357 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:13.357 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.357 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:13.357 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.357 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.357 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:13.357 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:13.357 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.357 
16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.615 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.615 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:13.615 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:13.615 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:35:13.615 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:13.615 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:13.615 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:13.615 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:13.615 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjJmMWVjY2YwNmUxOWU4MjY0NDZjYjkzNGM3MTQ1MTcu6tsr: 00:35:13.615 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTE3MjQwMjVjMmEzZDc3NDIxMzhiZTRlOWY2MGFiMTRiZjQ5NDc3ZGJlOWQ4MDdjZTYyMzhiMWVjNmU3MmIyNUyXBzU=: 00:35:13.615 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:13.615 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:13.615 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjJmMWVjY2YwNmUxOWU4MjY0NDZjYjkzNGM3MTQ1MTcu6tsr: 00:35:13.615 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTE3MjQwMjVjMmEzZDc3NDIxMzhiZTRlOWY2MGFiMTRiZjQ5NDc3ZGJlOWQ4MDdjZTYyMzhiMWVjNmU3MmIyNUyXBzU=: ]] 00:35:13.615 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTE3MjQwMjVjMmEzZDc3NDIxMzhiZTRlOWY2MGFiMTRiZjQ5NDc3ZGJlOWQ4MDdjZTYyMzhiMWVjNmU3MmIyNUyXBzU=: 00:35:13.615 
16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:35:13.615 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:13.615 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:13.615 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:13.615 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:13.615 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:13.615 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:13.615 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.615 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.615 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.615 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:13.615 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:13.615 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:13.615 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:13.615 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:13.615 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:13.615 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:13.615 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:13.615 16:43:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:13.615 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:13.615 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:13.615 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:13.615 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.615 16:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.615 nvme0n1 00:35:13.615 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.615 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:13.615 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:13.615 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.615 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.615 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.615 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:13.615 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:13.615 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.615 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.874 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.874 16:43:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:13.874 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:35:13.874 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:13.874 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:13.874 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:13.874 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:13.874 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjVmMDFlZmM3MWNkZmE4N2Q5NzI3NDRjZTYyYTNlZmZiYjY1ZjI2OWY2MDRjMzc5d3s3ZA==: 00:35:13.874 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: 00:35:13.874 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:13.874 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:13.874 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjVmMDFlZmM3MWNkZmE4N2Q5NzI3NDRjZTYyYTNlZmZiYjY1ZjI2OWY2MDRjMzc5d3s3ZA==: 00:35:13.874 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: ]] 00:35:13.874 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: 00:35:13.874 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:35:13.874 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:13.874 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:13.874 16:43:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:13.874 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:13.874 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:13.874 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:13.874 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.874 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.874 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.874 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:13.874 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:13.874 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:13.874 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:13.874 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:13.874 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:13.874 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:13.874 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:13.874 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:13.874 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:13.874 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:13.874 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:13.874 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.874 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.874 nvme0n1 00:35:13.874 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.874 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:13.874 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:13.874 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.874 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.874 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:14.133 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:14.133 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:14.133 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:14.133 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.133 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:14.133 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:14.133 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:35:14.133 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:14.133 16:43:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:14.133 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:14.133 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:14.133 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2Y5NjdkYjNjNWU2MTFkM2ExNzU5NDQ0NmEyODViZmRPiSXu: 00:35:14.133 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: 00:35:14.133 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:14.133 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:14.133 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2Y5NjdkYjNjNWU2MTFkM2ExNzU5NDQ0NmEyODViZmRPiSXu: 00:35:14.133 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: ]] 00:35:14.133 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: 00:35:14.133 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:35:14.133 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:14.133 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:14.133 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:14.133 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:14.133 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:14.133 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:35:14.133 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:14.133 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.133 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:14.133 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:14.133 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:14.133 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:14.133 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:14.133 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:14.133 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:14.133 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:14.133 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:14.133 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:14.133 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:14.133 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:14.133 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:14.133 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:14.133 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.133 nvme0n1 00:35:14.133 16:43:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:14.133 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:14.133 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:14.133 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.133 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:14.133 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:14.392 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:14.392 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:14.392 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:14.392 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.392 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:14.392 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:14.392 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:35:14.392 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:14.392 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:14.392 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:14.392 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:14.392 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc0NDMzMDY3YWI1NGNiMmQ3NWIzYzY4M2MwYTVjMGE0OTVlMWJjNTM5YzA3MTRmegWaDg==: 00:35:14.392 16:43:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQxMzQwMmFhMDAzYzU4ODEyMTdhMTQ3MjhjMzUxNGYXUSx4: 00:35:14.392 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:14.392 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:14.392 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzc0NDMzMDY3YWI1NGNiMmQ3NWIzYzY4M2MwYTVjMGE0OTVlMWJjNTM5YzA3MTRmegWaDg==: 00:35:14.392 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQxMzQwMmFhMDAzYzU4ODEyMTdhMTQ3MjhjMzUxNGYXUSx4: ]] 00:35:14.393 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQxMzQwMmFhMDAzYzU4ODEyMTdhMTQ3MjhjMzUxNGYXUSx4: 00:35:14.393 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:35:14.393 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:14.393 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:14.393 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:14.393 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:14.393 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:14.393 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:14.393 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:14.393 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.393 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:14.393 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:35:14.393 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:14.393 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:14.393 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:14.393 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:14.393 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:14.393 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:14.393 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:14.393 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:14.393 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:14.393 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:14.393 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:14.393 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:14.393 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.393 nvme0n1 00:35:14.393 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:14.393 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:14.393 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:14.393 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:35:14.393 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:14.393 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:14.651 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:14.651 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:14.651 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:14.651 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.651 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:14.651 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:14.651 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:35:14.651 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:14.651 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:14.651 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:14.651 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:14.651 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWM2NTI3MjBkM2EyZTBjN2JlNDg3ZTMzNmIwYzgzNjY3YTc1NWNhOGI0ZDg0NDAyOGQwMjY0YjQ0NjA1ZGYyMlmgPtc=: 00:35:14.651 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:14.651 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:14.651 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:14.651 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NWM2NTI3MjBkM2EyZTBjN2JlNDg3ZTMzNmIwYzgzNjY3YTc1NWNhOGI0ZDg0NDAyOGQwMjY0YjQ0NjA1ZGYyMlmgPtc=: 00:35:14.651 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:14.651 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:35:14.651 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:14.651 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:14.651 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:14.651 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:14.651 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:14.651 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:14.651 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:14.651 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.651 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:14.651 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:14.651 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:14.651 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:14.651 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:14.651 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:14.651 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:14.651 16:43:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:14.651 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:14.651 16:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:14.651 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:14.651 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:14.651 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:14.651 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:14.651 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.651 nvme0n1 00:35:14.651 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:14.651 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:14.651 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:14.651 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.651 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:14.651 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:14.911 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:14.911 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:14.911 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:14.911 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:14.911 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:14.911 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:14.911 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:14.911 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:35:14.911 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:14.911 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:14.911 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:14.911 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:14.911 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjJmMWVjY2YwNmUxOWU4MjY0NDZjYjkzNGM3MTQ1MTcu6tsr: 00:35:14.911 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTE3MjQwMjVjMmEzZDc3NDIxMzhiZTRlOWY2MGFiMTRiZjQ5NDc3ZGJlOWQ4MDdjZTYyMzhiMWVjNmU3MmIyNUyXBzU=: 00:35:14.911 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:14.911 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:14.911 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjJmMWVjY2YwNmUxOWU4MjY0NDZjYjkzNGM3MTQ1MTcu6tsr: 00:35:14.911 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTE3MjQwMjVjMmEzZDc3NDIxMzhiZTRlOWY2MGFiMTRiZjQ5NDc3ZGJlOWQ4MDdjZTYyMzhiMWVjNmU3MmIyNUyXBzU=: ]] 00:35:14.911 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTE3MjQwMjVjMmEzZDc3NDIxMzhiZTRlOWY2MGFiMTRiZjQ5NDc3ZGJlOWQ4MDdjZTYyMzhiMWVjNmU3MmIyNUyXBzU=: 00:35:14.911 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:35:14.911 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:14.911 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:14.911 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:14.911 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:14.911 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:14.911 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:14.911 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:14.911 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.911 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:14.911 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:14.911 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:14.911 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:14.911 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:14.911 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:14.911 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:14.911 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:14.911 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:14.911 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip=NVMF_INITIATOR_IP 00:35:14.911 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:14.911 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:14.911 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:14.911 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:14.911 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.168 nvme0n1 00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjVmMDFlZmM3MWNkZmE4N2Q5NzI3NDRjZTYyYTNlZmZiYjY1ZjI2OWY2MDRjMzc5d3s3ZA==: 00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: 00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjVmMDFlZmM3MWNkZmE4N2Q5NzI3NDRjZTYyYTNlZmZiYjY1ZjI2OWY2MDRjMzc5d3s3ZA==: 00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: ]] 00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: 00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:15.168 
16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:15.168 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:15.426 nvme0n1
00:35:15.426 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:15.426 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:15.426 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:15.426 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:15.426 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:15.426 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:15.426 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:15.426 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:15.426 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:15.426 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:15.426 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:15.426 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:15.426 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2
00:35:15.426 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:15.426 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:15.426 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:15.426 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:35:15.426 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2Y5NjdkYjNjNWU2MTFkM2ExNzU5NDQ0NmEyODViZmRPiSXu:
00:35:15.426 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7:
00:35:15.426 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:15.426 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:15.426 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2Y5NjdkYjNjNWU2MTFkM2ExNzU5NDQ0NmEyODViZmRPiSXu:
00:35:15.426 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: ]]
00:35:15.426 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7:
00:35:15.426 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2
00:35:15.426 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:15.426 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:15.426 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:15.426 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:35:15.426 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:15.426 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:35:15.426 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:15.426 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:15.684 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:15.684 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:15.684 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:35:15.684 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:35:15.684 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:35:15.684 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:15.684 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:15.684 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:35:15.684 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:15.684 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:35:15.684 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:35:15.684 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:35:15.684 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:15.684 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:15.684 16:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:15.942 nvme0n1
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc0NDMzMDY3YWI1NGNiMmQ3NWIzYzY4M2MwYTVjMGE0OTVlMWJjNTM5YzA3MTRmegWaDg==:
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQxMzQwMmFhMDAzYzU4ODEyMTdhMTQ3MjhjMzUxNGYXUSx4:
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzc0NDMzMDY3YWI1NGNiMmQ3NWIzYzY4M2MwYTVjMGE0OTVlMWJjNTM5YzA3MTRmegWaDg==:
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQxMzQwMmFhMDAzYzU4ODEyMTdhMTQ3MjhjMzUxNGYXUSx4: ]]
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQxMzQwMmFhMDAzYzU4ODEyMTdhMTQ3MjhjMzUxNGYXUSx4:
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:15.942 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:16.200 nvme0n1
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWM2NTI3MjBkM2EyZTBjN2JlNDg3ZTMzNmIwYzgzNjY3YTc1NWNhOGI0ZDg0NDAyOGQwMjY0YjQ0NjA1ZGYyMlmgPtc=:
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWM2NTI3MjBkM2EyZTBjN2JlNDg3ZTMzNmIwYzgzNjY3YTc1NWNhOGI0ZDg0NDAyOGQwMjY0YjQ0NjA1ZGYyMlmgPtc=:
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:16.200 16:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:16.458 nvme0n1
00:35:16.458 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:16.458 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:16.458 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:16.458 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:16.458 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:16.716 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:16.716 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:16.716 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:16.716 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:16.716 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:16.716 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:16.716 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:35:16.716 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:16.716 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0
00:35:16.716 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:16.716 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:16.716 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:35:16.716 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:35:16.716 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjJmMWVjY2YwNmUxOWU4MjY0NDZjYjkzNGM3MTQ1MTcu6tsr:
00:35:16.716 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTE3MjQwMjVjMmEzZDc3NDIxMzhiZTRlOWY2MGFiMTRiZjQ5NDc3ZGJlOWQ4MDdjZTYyMzhiMWVjNmU3MmIyNUyXBzU=:
00:35:16.716 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:16.716 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:35:16.716 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjJmMWVjY2YwNmUxOWU4MjY0NDZjYjkzNGM3MTQ1MTcu6tsr:
00:35:16.716 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTE3MjQwMjVjMmEzZDc3NDIxMzhiZTRlOWY2MGFiMTRiZjQ5NDc3ZGJlOWQ4MDdjZTYyMzhiMWVjNmU3MmIyNUyXBzU=: ]]
00:35:16.716 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTE3MjQwMjVjMmEzZDc3NDIxMzhiZTRlOWY2MGFiMTRiZjQ5NDc3ZGJlOWQ4MDdjZTYyMzhiMWVjNmU3MmIyNUyXBzU=:
00:35:16.716 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0
00:35:16.716 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:16.716 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:16.716 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:35:16.716 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:35:16.716 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:16.716 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:35:16.716 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:16.716 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:16.716 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:16.716 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:16.716 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:35:16.716 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:35:16.716 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:35:16.716 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:16.716 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:16.716 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:35:16.716 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:16.716 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:35:16.716 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:35:16.716 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:35:16.716 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:35:16.716 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:16.716 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:17.282 nvme0n1
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjVmMDFlZmM3MWNkZmE4N2Q5NzI3NDRjZTYyYTNlZmZiYjY1ZjI2OWY2MDRjMzc5d3s3ZA==:
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==:
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjVmMDFlZmM3MWNkZmE4N2Q5NzI3NDRjZTYyYTNlZmZiYjY1ZjI2OWY2MDRjMzc5d3s3ZA==:
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: ]]
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==:
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:17.282 16:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:17.848 nvme0n1
00:35:17.848 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:17.848 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:17.848 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:17.848 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:17.848 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:17.848 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:17.848 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:17.848 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:17.848 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:17.848 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:17.848 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:17.848 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:17.848 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2
00:35:17.848 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:17.848 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:17.848 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:35:17.848 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:35:17.848 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2Y5NjdkYjNjNWU2MTFkM2ExNzU5NDQ0NmEyODViZmRPiSXu:
00:35:17.848 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7:
00:35:17.848 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:17.848 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:35:17.848 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2Y5NjdkYjNjNWU2MTFkM2ExNzU5NDQ0NmEyODViZmRPiSXu:
00:35:17.848 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: ]]
00:35:17.848 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7:
00:35:17.848 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2
00:35:17.848 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:17.848 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:17.848 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:35:17.848 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:35:17.848 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:17.848 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:35:17.848 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:17.848 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:17.848 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:17.848 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:17.848 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:35:17.849 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:35:17.849 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:35:17.849 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:17.849 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:17.849 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:35:17.849 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:17.849 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:35:17.849 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:35:17.849 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:35:17.849 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:17.849 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:17.849 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:18.415 nvme0n1
00:35:18.415 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:18.415 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:18.415 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:18.415 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:18.415 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:18.415 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:18.415 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:18.415 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:18.415 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:18.415 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:18.415 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:18.415 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:18.415 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3
00:35:18.415 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:18.415 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:18.415 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:35:18.415 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:35:18.415 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc0NDMzMDY3YWI1NGNiMmQ3NWIzYzY4M2MwYTVjMGE0OTVlMWJjNTM5YzA3MTRmegWaDg==:
00:35:18.415 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQxMzQwMmFhMDAzYzU4ODEyMTdhMTQ3MjhjMzUxNGYXUSx4:
00:35:18.415 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:18.415 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:35:18.415 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzc0NDMzMDY3YWI1NGNiMmQ3NWIzYzY4M2MwYTVjMGE0OTVlMWJjNTM5YzA3MTRmegWaDg==:
00:35:18.415 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQxMzQwMmFhMDAzYzU4ODEyMTdhMTQ3MjhjMzUxNGYXUSx4: ]]
00:35:18.415 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQxMzQwMmFhMDAzYzU4ODEyMTdhMTQ3MjhjMzUxNGYXUSx4:
00:35:18.415 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3
00:35:18.415 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:18.415 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:18.415 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:35:18.415 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:35:18.415 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:18.415 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:35:18.415 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:18.415 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:18.415 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:18.415 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:18.415 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:35:18.415 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:35:18.415 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:35:18.415 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:18.415 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:18.415 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:35:18.415 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:18.415 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:35:18.415 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:35:18.416 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:35:18.416 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:35:18.416 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:18.416 16:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:18.979 nvme0n1
00:35:18.979 16:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:18.979 16:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:18.979 16:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:18.979 16:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:18.979 16:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:18.979 16:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:18.979 16:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:18.979 16:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:18.979 16:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:18.979 16:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:18.979 16:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:18.979 16:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:18.979 16:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4
00:35:18.979 16:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:18.979 16:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:18.979 16:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:35:18.979 16:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:35:18.979 16:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWM2NTI3MjBkM2EyZTBjN2JlNDg3ZTMzNmIwYzgzNjY3YTc1NWNhOGI0ZDg0NDAyOGQwMjY0YjQ0NjA1ZGYyMlmgPtc=:
00:35:18.979 16:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:35:18.979 16:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:18.979 16:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:35:18.979 16:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWM2NTI3MjBkM2EyZTBjN2JlNDg3ZTMzNmIwYzgzNjY3YTc1NWNhOGI0ZDg0NDAyOGQwMjY0YjQ0NjA1ZGYyMlmgPtc=:
00:35:18.979 16:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:35:18.979 16:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate
sha256 ffdhe6144 4 00:35:18.979 16:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:18.979 16:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:18.979 16:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:18.979 16:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:18.980 16:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:18.980 16:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:18.980 16:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.980 16:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.980 16:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.237 16:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:19.237 16:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:19.237 16:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:19.237 16:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:19.237 16:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:19.237 16:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:19.237 16:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:19.237 16:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:19.237 16:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:19.237 16:43:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:19.237 16:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:19.237 16:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:19.237 16:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.237 16:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.802 nvme0n1 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjJmMWVjY2YwNmUxOWU4MjY0NDZjYjkzNGM3MTQ1MTcu6tsr: 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTE3MjQwMjVjMmEzZDc3NDIxMzhiZTRlOWY2MGFiMTRiZjQ5NDc3ZGJlOWQ4MDdjZTYyMzhiMWVjNmU3MmIyNUyXBzU=: 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjJmMWVjY2YwNmUxOWU4MjY0NDZjYjkzNGM3MTQ1MTcu6tsr: 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTE3MjQwMjVjMmEzZDc3NDIxMzhiZTRlOWY2MGFiMTRiZjQ5NDc3ZGJlOWQ4MDdjZTYyMzhiMWVjNmU3MmIyNUyXBzU=: ]] 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTE3MjQwMjVjMmEzZDc3NDIxMzhiZTRlOWY2MGFiMTRiZjQ5NDc3ZGJlOWQ4MDdjZTYyMzhiMWVjNmU3MmIyNUyXBzU=: 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:19.802 16:43:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.802 16:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.735 nvme0n1 00:35:20.735 16:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.735 16:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:20.735 16:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:20.735 16:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.735 16:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.735 16:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.735 16:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:20.735 16:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:20.735 16:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.735 16:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.735 16:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.735 16:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:20.735 16:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:35:20.735 16:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:20.735 16:43:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:20.735 16:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:20.735 16:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:20.735 16:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjVmMDFlZmM3MWNkZmE4N2Q5NzI3NDRjZTYyYTNlZmZiYjY1ZjI2OWY2MDRjMzc5d3s3ZA==: 00:35:20.735 16:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: 00:35:20.735 16:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:20.735 16:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:20.735 16:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjVmMDFlZmM3MWNkZmE4N2Q5NzI3NDRjZTYyYTNlZmZiYjY1ZjI2OWY2MDRjMzc5d3s3ZA==: 00:35:20.735 16:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: ]] 00:35:20.735 16:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: 00:35:20.735 16:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:35:20.735 16:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:20.735 16:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:20.735 16:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:20.735 16:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:20.735 16:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:20.735 16:43:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:20.735 16:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.736 16:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.736 16:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.736 16:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:20.736 16:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:20.736 16:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:20.736 16:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:20.736 16:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:20.736 16:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:20.736 16:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:20.736 16:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:20.736 16:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:20.736 16:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:20.736 16:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:20.736 16:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:20.736 16:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.736 16:43:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.669 nvme0n1 00:35:21.669 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.669 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:21.669 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:21.669 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.669 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.669 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.669 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:21.669 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:21.669 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.669 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.927 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.927 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:21.927 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:35:21.927 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:21.927 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:21.927 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:21.927 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:21.927 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:M2Y5NjdkYjNjNWU2MTFkM2ExNzU5NDQ0NmEyODViZmRPiSXu: 00:35:21.927 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: 00:35:21.927 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:21.927 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:21.927 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2Y5NjdkYjNjNWU2MTFkM2ExNzU5NDQ0NmEyODViZmRPiSXu: 00:35:21.927 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: ]] 00:35:21.927 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: 00:35:21.927 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:35:21.927 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:21.927 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:21.927 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:21.927 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:21.927 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:21.927 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:21.927 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.927 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.927 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.927 16:43:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:21.927 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:21.927 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:21.927 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:21.927 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:21.927 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:21.927 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:21.927 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:21.927 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:21.927 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:21.927 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:21.927 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:21.927 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.927 16:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.859 nvme0n1 00:35:22.859 16:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.859 16:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:22.859 16:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.859 16:43:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:22.859 16:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.859 16:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.859 16:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:22.859 16:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:22.859 16:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.859 16:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.859 16:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.859 16:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:22.859 16:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:35:22.859 16:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:22.859 16:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:22.859 16:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:22.859 16:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:22.859 16:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc0NDMzMDY3YWI1NGNiMmQ3NWIzYzY4M2MwYTVjMGE0OTVlMWJjNTM5YzA3MTRmegWaDg==: 00:35:22.859 16:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQxMzQwMmFhMDAzYzU4ODEyMTdhMTQ3MjhjMzUxNGYXUSx4: 00:35:22.859 16:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:22.859 16:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:22.859 16:43:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzc0NDMzMDY3YWI1NGNiMmQ3NWIzYzY4M2MwYTVjMGE0OTVlMWJjNTM5YzA3MTRmegWaDg==: 00:35:22.859 16:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQxMzQwMmFhMDAzYzU4ODEyMTdhMTQ3MjhjMzUxNGYXUSx4: ]] 00:35:22.859 16:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQxMzQwMmFhMDAzYzU4ODEyMTdhMTQ3MjhjMzUxNGYXUSx4: 00:35:22.859 16:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:35:22.859 16:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:22.859 16:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:22.859 16:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:22.859 16:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:22.859 16:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:22.859 16:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:22.859 16:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.859 16:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.859 16:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.859 16:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:22.859 16:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:22.859 16:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:22.859 16:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:22.859 16:43:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:22.859 16:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:22.859 16:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:22.859 16:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:22.859 16:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:22.859 16:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:22.859 16:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:22.859 16:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:22.859 16:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.859 16:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.792 nvme0n1 00:35:23.792 16:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.792 16:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:23.792 16:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.792 16:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.792 16:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:23.792 16:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.792 16:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:23.792 16:43:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:23.792 16:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.792 16:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.792 16:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.792 16:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:23.792 16:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:35:23.792 16:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:23.792 16:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:23.792 16:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:23.792 16:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:23.792 16:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWM2NTI3MjBkM2EyZTBjN2JlNDg3ZTMzNmIwYzgzNjY3YTc1NWNhOGI0ZDg0NDAyOGQwMjY0YjQ0NjA1ZGYyMlmgPtc=: 00:35:23.792 16:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:23.792 16:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:23.792 16:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:23.792 16:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWM2NTI3MjBkM2EyZTBjN2JlNDg3ZTMzNmIwYzgzNjY3YTc1NWNhOGI0ZDg0NDAyOGQwMjY0YjQ0NjA1ZGYyMlmgPtc=: 00:35:24.049 16:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:24.049 16:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:35:24.049 16:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local 
digest dhgroup keyid ckey 00:35:24.049 16:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:24.049 16:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:24.049 16:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:24.049 16:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:24.049 16:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:24.050 16:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.050 16:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.050 16:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.050 16:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:24.050 16:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:24.050 16:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:24.050 16:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:24.050 16:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:24.050 16:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:24.050 16:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:24.050 16:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:24.050 16:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:24.050 16:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:24.050 16:43:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:24.050 16:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:24.050 16:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.050 16:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.982 nvme0n1 00:35:24.982 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.982 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:24.982 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:24.982 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.982 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.982 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.982 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:24.982 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:24.982 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.982 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.982 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.982 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:24.982 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:24.982 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:24.982 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:35:24.982 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:24.982 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:24.982 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:24.982 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:24.982 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjJmMWVjY2YwNmUxOWU4MjY0NDZjYjkzNGM3MTQ1MTcu6tsr: 00:35:24.982 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTE3MjQwMjVjMmEzZDc3NDIxMzhiZTRlOWY2MGFiMTRiZjQ5NDc3ZGJlOWQ4MDdjZTYyMzhiMWVjNmU3MmIyNUyXBzU=: 00:35:24.982 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:24.982 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:24.982 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjJmMWVjY2YwNmUxOWU4MjY0NDZjYjkzNGM3MTQ1MTcu6tsr: 00:35:24.982 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTE3MjQwMjVjMmEzZDc3NDIxMzhiZTRlOWY2MGFiMTRiZjQ5NDc3ZGJlOWQ4MDdjZTYyMzhiMWVjNmU3MmIyNUyXBzU=: ]] 00:35:24.982 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTE3MjQwMjVjMmEzZDc3NDIxMzhiZTRlOWY2MGFiMTRiZjQ5NDc3ZGJlOWQ4MDdjZTYyMzhiMWVjNmU3MmIyNUyXBzU=: 00:35:24.983 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:35:24.983 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:24.983 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:24.983 16:43:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:24.983 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:24.983 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:24.983 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:24.983 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.983 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.983 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.983 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:24.983 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:24.983 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:24.983 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:24.983 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:24.983 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:24.983 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:24.983 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:24.983 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:24.983 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:24.983 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:24.983 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:24.983 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.983 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.983 nvme0n1 00:35:24.983 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.983 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:24.983 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.983 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:24.983 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.983 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.983 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:24.983 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:24.983 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.983 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:25.241 16:43:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjVmMDFlZmM3MWNkZmE4N2Q5NzI3NDRjZTYyYTNlZmZiYjY1ZjI2OWY2MDRjMzc5d3s3ZA==: 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjVmMDFlZmM3MWNkZmE4N2Q5NzI3NDRjZTYyYTNlZmZiYjY1ZjI2OWY2MDRjMzc5d3s3ZA==: 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: ]] 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:25.241 16:43:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.241 16:43:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.241 nvme0n1 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:25.241 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:35:25.242 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:25.242 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:25.242 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:25.242 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:25.242 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:M2Y5NjdkYjNjNWU2MTFkM2ExNzU5NDQ0NmEyODViZmRPiSXu: 00:35:25.242 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: 00:35:25.242 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:25.242 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:25.242 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2Y5NjdkYjNjNWU2MTFkM2ExNzU5NDQ0NmEyODViZmRPiSXu: 00:35:25.242 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: ]] 00:35:25.242 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: 00:35:25.242 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:35:25.242 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:25.242 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:25.242 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:25.242 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:25.242 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:25.242 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:25.242 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.242 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.242 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.242 16:43:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:25.242 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:25.242 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:25.242 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:25.242 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:25.242 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:25.242 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:25.242 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:25.242 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:25.242 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:25.242 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:25.242 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:25.242 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.242 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.500 nvme0n1 00:35:25.500 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.500 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:25.500 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.500 16:43:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:25.500 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.500 16:43:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.500 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:25.500 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:25.500 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.500 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.500 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.500 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:25.500 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:35:25.500 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:25.500 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:25.500 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:25.500 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:25.500 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc0NDMzMDY3YWI1NGNiMmQ3NWIzYzY4M2MwYTVjMGE0OTVlMWJjNTM5YzA3MTRmegWaDg==: 00:35:25.500 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQxMzQwMmFhMDAzYzU4ODEyMTdhMTQ3MjhjMzUxNGYXUSx4: 00:35:25.500 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:25.500 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:25.500 16:43:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzc0NDMzMDY3YWI1NGNiMmQ3NWIzYzY4M2MwYTVjMGE0OTVlMWJjNTM5YzA3MTRmegWaDg==: 00:35:25.500 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQxMzQwMmFhMDAzYzU4ODEyMTdhMTQ3MjhjMzUxNGYXUSx4: ]] 00:35:25.500 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQxMzQwMmFhMDAzYzU4ODEyMTdhMTQ3MjhjMzUxNGYXUSx4: 00:35:25.500 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:35:25.500 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:25.500 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:25.500 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:25.500 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:25.500 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:25.500 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:25.500 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.500 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.500 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.500 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:25.500 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:25.500 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:25.500 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:25.500 16:43:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:25.500 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:25.500 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:25.500 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:25.500 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:25.500 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:25.500 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:25.500 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:25.500 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.500 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.758 nvme0n1 00:35:25.758 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.758 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:25.758 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.758 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.758 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:25.758 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.758 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:25.758 16:43:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:25.758 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.758 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.758 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.758 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:25.758 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:35:25.758 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:25.758 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:25.758 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:25.758 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:25.758 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWM2NTI3MjBkM2EyZTBjN2JlNDg3ZTMzNmIwYzgzNjY3YTc1NWNhOGI0ZDg0NDAyOGQwMjY0YjQ0NjA1ZGYyMlmgPtc=: 00:35:25.758 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:25.758 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:25.758 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:25.758 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWM2NTI3MjBkM2EyZTBjN2JlNDg3ZTMzNmIwYzgzNjY3YTc1NWNhOGI0ZDg0NDAyOGQwMjY0YjQ0NjA1ZGYyMlmgPtc=: 00:35:25.758 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:25.758 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:35:25.758 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local 
digest dhgroup keyid ckey 00:35:25.758 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:25.758 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:25.758 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:25.758 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:25.758 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:25.758 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.758 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.758 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.758 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:25.758 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:25.758 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:25.758 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:25.758 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:25.758 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:25.758 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:25.758 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:25.758 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:25.758 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:25.758 16:43:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:25.758 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:25.758 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.758 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.015 nvme0n1 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjJmMWVjY2YwNmUxOWU4MjY0NDZjYjkzNGM3MTQ1MTcu6tsr: 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTE3MjQwMjVjMmEzZDc3NDIxMzhiZTRlOWY2MGFiMTRiZjQ5NDc3ZGJlOWQ4MDdjZTYyMzhiMWVjNmU3MmIyNUyXBzU=: 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjJmMWVjY2YwNmUxOWU4MjY0NDZjYjkzNGM3MTQ1MTcu6tsr: 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTE3MjQwMjVjMmEzZDc3NDIxMzhiZTRlOWY2MGFiMTRiZjQ5NDc3ZGJlOWQ4MDdjZTYyMzhiMWVjNmU3MmIyNUyXBzU=: ]] 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTE3MjQwMjVjMmEzZDc3NDIxMzhiZTRlOWY2MGFiMTRiZjQ5NDc3ZGJlOWQ4MDdjZTYyMzhiMWVjNmU3MmIyNUyXBzU=: 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # keyid=0 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.015 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.273 nvme0n1 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe3072 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjVmMDFlZmM3MWNkZmE4N2Q5NzI3NDRjZTYyYTNlZmZiYjY1ZjI2OWY2MDRjMzc5d3s3ZA==: 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjVmMDFlZmM3MWNkZmE4N2Q5NzI3NDRjZTYyYTNlZmZiYjY1ZjI2OWY2MDRjMzc5d3s3ZA==: 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: ]] 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:26.273 16:43:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.273 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.532 nvme0n1 00:35:26.532 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.532 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:26.532 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.532 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.532 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:26.532 16:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.532 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:26.532 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:26.532 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.532 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.532 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.532 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:26.532 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:35:26.532 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:26.532 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:26.532 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:26.532 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:26.532 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2Y5NjdkYjNjNWU2MTFkM2ExNzU5NDQ0NmEyODViZmRPiSXu: 00:35:26.532 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: 00:35:26.532 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:26.532 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:26.532 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2Y5NjdkYjNjNWU2MTFkM2ExNzU5NDQ0NmEyODViZmRPiSXu: 00:35:26.532 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: ]] 00:35:26.532 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: 00:35:26.532 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:35:26.532 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:26.532 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:26.532 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:26.532 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:26.532 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:26.532 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:26.532 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.532 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.532 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.532 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:26.532 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@765 -- # local ip 00:35:26.532 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:26.532 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:26.532 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:26.532 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:26.532 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:26.532 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:26.532 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:26.532 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:26.532 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:26.532 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:26.532 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.532 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.791 nvme0n1 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc0NDMzMDY3YWI1NGNiMmQ3NWIzYzY4M2MwYTVjMGE0OTVlMWJjNTM5YzA3MTRmegWaDg==: 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQxMzQwMmFhMDAzYzU4ODEyMTdhMTQ3MjhjMzUxNGYXUSx4: 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzc0NDMzMDY3YWI1NGNiMmQ3NWIzYzY4M2MwYTVjMGE0OTVlMWJjNTM5YzA3MTRmegWaDg==: 
00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQxMzQwMmFhMDAzYzU4ODEyMTdhMTQ3MjhjMzUxNGYXUSx4: ]] 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQxMzQwMmFhMDAzYzU4ODEyMTdhMTQ3MjhjMzUxNGYXUSx4: 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.791 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.050 nvme0n1 00:35:27.050 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.050 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:27.050 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:27.050 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.050 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.050 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.050 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:27.051 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:27.051 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.051 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.051 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.051 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:27.051 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:35:27.051 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:27.051 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:27.051 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:27.051 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:27.051 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWM2NTI3MjBkM2EyZTBjN2JlNDg3ZTMzNmIwYzgzNjY3YTc1NWNhOGI0ZDg0NDAyOGQwMjY0YjQ0NjA1ZGYyMlmgPtc=: 00:35:27.051 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:27.051 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:27.051 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:27.051 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWM2NTI3MjBkM2EyZTBjN2JlNDg3ZTMzNmIwYzgzNjY3YTc1NWNhOGI0ZDg0NDAyOGQwMjY0YjQ0NjA1ZGYyMlmgPtc=: 00:35:27.051 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:27.051 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:35:27.051 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:27.051 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:27.051 16:43:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:27.051 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:27.051 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:27.051 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:27.051 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.051 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.051 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.051 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:27.051 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:27.051 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:27.051 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:27.051 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:27.051 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:27.051 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:27.051 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:27.051 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:27.051 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:27.051 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:27.051 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:27.051 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.051 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.309 nvme0n1 00:35:27.309 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.309 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:27.309 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.309 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:27.309 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.309 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.309 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:27.309 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:27.309 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.309 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.309 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.309 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:27.309 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:27.309 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:35:27.309 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:27.309 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:27.309 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:27.309 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:27.309 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjJmMWVjY2YwNmUxOWU4MjY0NDZjYjkzNGM3MTQ1MTcu6tsr: 00:35:27.309 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTE3MjQwMjVjMmEzZDc3NDIxMzhiZTRlOWY2MGFiMTRiZjQ5NDc3ZGJlOWQ4MDdjZTYyMzhiMWVjNmU3MmIyNUyXBzU=: 00:35:27.309 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:27.309 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:27.309 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjJmMWVjY2YwNmUxOWU4MjY0NDZjYjkzNGM3MTQ1MTcu6tsr: 00:35:27.309 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTE3MjQwMjVjMmEzZDc3NDIxMzhiZTRlOWY2MGFiMTRiZjQ5NDc3ZGJlOWQ4MDdjZTYyMzhiMWVjNmU3MmIyNUyXBzU=: ]] 00:35:27.309 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTE3MjQwMjVjMmEzZDc3NDIxMzhiZTRlOWY2MGFiMTRiZjQ5NDc3ZGJlOWQ4MDdjZTYyMzhiMWVjNmU3MmIyNUyXBzU=: 00:35:27.309 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:35:27.309 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:27.309 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:27.309 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:27.310 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:27.310 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:27.310 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:27.310 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.310 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.310 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.310 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:27.310 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:27.310 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:27.310 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:27.310 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:27.310 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:27.310 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:27.310 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:27.310 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:27.310 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:27.310 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:27.310 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:27.310 16:43:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.310 16:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.568 nvme0n1 00:35:27.568 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.568 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:27.568 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.568 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.568 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:27.568 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.826 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:27.826 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:27.826 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.826 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.826 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.826 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:27.826 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:35:27.826 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:27.826 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:27.826 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:27.826 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=1 00:35:27.826 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjVmMDFlZmM3MWNkZmE4N2Q5NzI3NDRjZTYyYTNlZmZiYjY1ZjI2OWY2MDRjMzc5d3s3ZA==: 00:35:27.826 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: 00:35:27.826 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:27.826 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:27.826 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjVmMDFlZmM3MWNkZmE4N2Q5NzI3NDRjZTYyYTNlZmZiYjY1ZjI2OWY2MDRjMzc5d3s3ZA==: 00:35:27.826 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: ]] 00:35:27.826 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: 00:35:27.826 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:35:27.826 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:27.826 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:27.826 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:27.826 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:27.826 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:27.826 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:27.826 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.826 16:43:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.826 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.826 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:27.826 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:27.826 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:27.826 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:27.826 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:27.826 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:27.826 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:27.826 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:27.826 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:27.826 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:27.826 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:27.826 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:27.826 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.826 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.085 nvme0n1 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2Y5NjdkYjNjNWU2MTFkM2ExNzU5NDQ0NmEyODViZmRPiSXu: 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 
-- # echo 'hmac(sha384)' 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2Y5NjdkYjNjNWU2MTFkM2ExNzU5NDQ0NmEyODViZmRPiSXu: 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: ]] 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:28.085 16:43:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.085 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.344 nvme0n1 00:35:28.344 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.344 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:28.344 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.344 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.344 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:28.344 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.344 16:43:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:28.344 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:28.344 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.344 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.344 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.344 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:28.344 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:35:28.344 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:28.344 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:28.344 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:28.344 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:28.344 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc0NDMzMDY3YWI1NGNiMmQ3NWIzYzY4M2MwYTVjMGE0OTVlMWJjNTM5YzA3MTRmegWaDg==: 00:35:28.344 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQxMzQwMmFhMDAzYzU4ODEyMTdhMTQ3MjhjMzUxNGYXUSx4: 00:35:28.344 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:28.344 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:28.344 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzc0NDMzMDY3YWI1NGNiMmQ3NWIzYzY4M2MwYTVjMGE0OTVlMWJjNTM5YzA3MTRmegWaDg==: 00:35:28.344 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQxMzQwMmFhMDAzYzU4ODEyMTdhMTQ3MjhjMzUxNGYXUSx4: ]] 00:35:28.344 16:43:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQxMzQwMmFhMDAzYzU4ODEyMTdhMTQ3MjhjMzUxNGYXUSx4: 00:35:28.344 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:35:28.344 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:28.344 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:28.344 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:28.344 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:28.344 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:28.344 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:28.344 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.344 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.602 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.602 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:28.602 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:28.602 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:28.602 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:28.602 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:28.602 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:28.602 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 
00:35:28.602 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:28.602 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:28.602 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:28.602 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:28.602 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:28.602 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.602 16:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.860 nvme0n1 00:35:28.860 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.860 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:28.860 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.860 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:28.860 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.860 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.860 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:28.860 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:28.860 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.860 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.860 16:43:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.860 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:28.860 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:35:28.860 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:28.860 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:28.860 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:28.860 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:28.860 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWM2NTI3MjBkM2EyZTBjN2JlNDg3ZTMzNmIwYzgzNjY3YTc1NWNhOGI0ZDg0NDAyOGQwMjY0YjQ0NjA1ZGYyMlmgPtc=: 00:35:28.860 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:28.860 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:28.860 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:28.860 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWM2NTI3MjBkM2EyZTBjN2JlNDg3ZTMzNmIwYzgzNjY3YTc1NWNhOGI0ZDg0NDAyOGQwMjY0YjQ0NjA1ZGYyMlmgPtc=: 00:35:28.860 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:28.860 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:35:28.860 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:28.860 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:28.860 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:28.860 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:28.860 16:43:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:28.860 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:28.860 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.860 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.860 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.860 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:28.860 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:28.860 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:28.860 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:28.860 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:28.860 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:28.860 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:28.860 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:28.860 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:28.860 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:28.860 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:28.860 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:28.860 
16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.860 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.119 nvme0n1 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:29.119 16:43:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjJmMWVjY2YwNmUxOWU4MjY0NDZjYjkzNGM3MTQ1MTcu6tsr: 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTE3MjQwMjVjMmEzZDc3NDIxMzhiZTRlOWY2MGFiMTRiZjQ5NDc3ZGJlOWQ4MDdjZTYyMzhiMWVjNmU3MmIyNUyXBzU=: 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjJmMWVjY2YwNmUxOWU4MjY0NDZjYjkzNGM3MTQ1MTcu6tsr: 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTE3MjQwMjVjMmEzZDc3NDIxMzhiZTRlOWY2MGFiMTRiZjQ5NDc3ZGJlOWQ4MDdjZTYyMzhiMWVjNmU3MmIyNUyXBzU=: ]] 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTE3MjQwMjVjMmEzZDc3NDIxMzhiZTRlOWY2MGFiMTRiZjQ5NDc3ZGJlOWQ4MDdjZTYyMzhiMWVjNmU3MmIyNUyXBzU=: 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:29.119 16:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.685 nvme0n1 
00:35:29.685 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:29.685 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:29.685 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:29.685 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:29.685 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.685 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:29.685 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:29.685 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:29.685 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:29.686 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.686 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:29.686 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:29.686 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:35:29.686 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:29.686 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:29.686 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:29.686 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:29.686 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjVmMDFlZmM3MWNkZmE4N2Q5NzI3NDRjZTYyYTNlZmZiYjY1ZjI2OWY2MDRjMzc5d3s3ZA==: 00:35:29.686 16:43:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: 00:35:29.686 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:29.686 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:29.686 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjVmMDFlZmM3MWNkZmE4N2Q5NzI3NDRjZTYyYTNlZmZiYjY1ZjI2OWY2MDRjMzc5d3s3ZA==: 00:35:29.686 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: ]] 00:35:29.686 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: 00:35:29.686 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:35:29.686 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:29.686 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:29.686 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:29.686 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:29.686 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:29.686 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:29.686 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:29.686 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.686 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:29.686 
16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:29.686 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:29.686 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:29.686 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:29.686 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:29.686 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:29.686 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:29.686 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:29.686 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:29.686 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:29.686 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:29.686 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:29.686 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:29.686 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.252 nvme0n1 00:35:30.252 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.252 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:30.252 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.252 16:43:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.252 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:30.252 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.252 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:30.252 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:30.252 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.252 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.252 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.252 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:30.252 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:35:30.252 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:30.252 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:30.252 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:30.252 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:30.252 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2Y5NjdkYjNjNWU2MTFkM2ExNzU5NDQ0NmEyODViZmRPiSXu: 00:35:30.252 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: 00:35:30.252 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:30.252 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:30.252 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:01:M2Y5NjdkYjNjNWU2MTFkM2ExNzU5NDQ0NmEyODViZmRPiSXu: 00:35:30.252 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: ]] 00:35:30.252 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: 00:35:30.252 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:35:30.252 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:30.252 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:30.252 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:30.252 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:30.252 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:30.252 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:30.252 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.252 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.252 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.252 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:30.252 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:30.252 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:30.253 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:30.253 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:30.253 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:30.253 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:30.253 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:30.253 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:30.253 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:30.253 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:30.253 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:30.253 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.253 16:43:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.819 nvme0n1 00:35:30.819 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.820 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:30.820 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.820 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.820 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:30.820 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.820 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:30.820 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:35:30.820 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.820 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.079 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.079 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:31.079 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:35:31.079 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:31.079 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:31.079 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:31.079 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:31.079 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc0NDMzMDY3YWI1NGNiMmQ3NWIzYzY4M2MwYTVjMGE0OTVlMWJjNTM5YzA3MTRmegWaDg==: 00:35:31.079 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQxMzQwMmFhMDAzYzU4ODEyMTdhMTQ3MjhjMzUxNGYXUSx4: 00:35:31.079 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:31.079 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:31.079 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzc0NDMzMDY3YWI1NGNiMmQ3NWIzYzY4M2MwYTVjMGE0OTVlMWJjNTM5YzA3MTRmegWaDg==: 00:35:31.079 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQxMzQwMmFhMDAzYzU4ODEyMTdhMTQ3MjhjMzUxNGYXUSx4: ]] 00:35:31.079 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQxMzQwMmFhMDAzYzU4ODEyMTdhMTQ3MjhjMzUxNGYXUSx4: 00:35:31.079 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:35:31.079 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:31.079 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:31.079 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:31.079 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:31.079 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:31.079 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:31.079 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.079 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.079 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.079 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:31.079 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:31.079 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:31.079 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:31.079 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:31.079 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:31.079 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:31.079 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:31.079 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 
-- # ip=NVMF_INITIATOR_IP 00:35:31.079 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:31.079 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:31.079 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:31.079 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.079 16:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.645 nvme0n1 00:35:31.645 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.645 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:31.645 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:31.645 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.645 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.645 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.645 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:31.645 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:31.645 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.645 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.645 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.645 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:35:31.645 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:35:31.645 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:31.645 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:31.645 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:31.645 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:31.645 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWM2NTI3MjBkM2EyZTBjN2JlNDg3ZTMzNmIwYzgzNjY3YTc1NWNhOGI0ZDg0NDAyOGQwMjY0YjQ0NjA1ZGYyMlmgPtc=: 00:35:31.645 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:31.645 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:31.645 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:31.645 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWM2NTI3MjBkM2EyZTBjN2JlNDg3ZTMzNmIwYzgzNjY3YTc1NWNhOGI0ZDg0NDAyOGQwMjY0YjQ0NjA1ZGYyMlmgPtc=: 00:35:31.645 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:31.645 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:35:31.645 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:31.645 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:31.645 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:31.645 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:31.645 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:31.645 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:31.645 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.645 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.645 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.645 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:31.645 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:31.645 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:31.645 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:31.645 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:31.646 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:31.646 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:31.646 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:31.646 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:31.646 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:31.646 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:31.646 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:31.646 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.646 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:32.211 nvme0n1 00:35:32.211 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.211 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:32.211 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.211 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:32.211 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.211 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.211 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:32.211 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:32.211 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.211 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.211 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.211 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:32.211 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:32.211 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:35:32.211 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:32.211 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:32.211 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:32.211 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:32.211 16:43:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjJmMWVjY2YwNmUxOWU4MjY0NDZjYjkzNGM3MTQ1MTcu6tsr: 00:35:32.211 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTE3MjQwMjVjMmEzZDc3NDIxMzhiZTRlOWY2MGFiMTRiZjQ5NDc3ZGJlOWQ4MDdjZTYyMzhiMWVjNmU3MmIyNUyXBzU=: 00:35:32.211 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:32.211 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:32.211 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjJmMWVjY2YwNmUxOWU4MjY0NDZjYjkzNGM3MTQ1MTcu6tsr: 00:35:32.211 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTE3MjQwMjVjMmEzZDc3NDIxMzhiZTRlOWY2MGFiMTRiZjQ5NDc3ZGJlOWQ4MDdjZTYyMzhiMWVjNmU3MmIyNUyXBzU=: ]] 00:35:32.211 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTE3MjQwMjVjMmEzZDc3NDIxMzhiZTRlOWY2MGFiMTRiZjQ5NDc3ZGJlOWQ4MDdjZTYyMzhiMWVjNmU3MmIyNUyXBzU=: 00:35:32.211 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:35:32.211 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:32.211 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:32.211 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:32.211 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:32.211 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:32.211 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:32.211 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.211 16:43:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.211 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.211 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:32.211 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:32.212 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:32.212 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:32.212 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:32.212 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:32.212 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:32.212 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:32.212 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:32.212 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:32.212 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:32.212 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:32.212 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.212 16:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.146 nvme0n1 00:35:33.146 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.146 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:33.146 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:33.146 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.146 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.146 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.404 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:33.404 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:33.404 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.404 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.404 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.404 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:33.404 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:35:33.404 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:33.404 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:33.404 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:33.404 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:33.404 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjVmMDFlZmM3MWNkZmE4N2Q5NzI3NDRjZTYyYTNlZmZiYjY1ZjI2OWY2MDRjMzc5d3s3ZA==: 00:35:33.404 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: 00:35:33.404 16:43:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:33.404 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:33.404 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjVmMDFlZmM3MWNkZmE4N2Q5NzI3NDRjZTYyYTNlZmZiYjY1ZjI2OWY2MDRjMzc5d3s3ZA==: 00:35:33.404 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: ]] 00:35:33.404 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: 00:35:33.404 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:35:33.404 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:33.404 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:33.404 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:33.404 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:33.404 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:33.404 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:33.404 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.404 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.404 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.404 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:33.404 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 
00:35:33.404 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:33.404 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:33.404 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:33.404 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:33.404 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:33.404 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:33.404 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:33.404 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:33.404 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:33.404 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:33.404 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.405 16:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.375 nvme0n1 00:35:34.375 16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.375 16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:34.375 16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:34.375 16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.375 16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.375 
16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.375 16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:34.375 16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:34.375 16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.375 16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.375 16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.375 16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:34.375 16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:35:34.375 16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:34.375 16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:34.375 16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:34.375 16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:34.375 16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2Y5NjdkYjNjNWU2MTFkM2ExNzU5NDQ0NmEyODViZmRPiSXu: 00:35:34.375 16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: 00:35:34.375 16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:34.375 16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:34.375 16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2Y5NjdkYjNjNWU2MTFkM2ExNzU5NDQ0NmEyODViZmRPiSXu: 00:35:34.375 16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: ]] 00:35:34.375 16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: 00:35:34.376 16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:35:34.376 16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:34.376 16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:34.376 16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:34.376 16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:34.376 16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:34.376 16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:34.376 16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.376 16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.376 16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.376 16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:34.376 16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:34.376 16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:34.376 16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:34.376 16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:34.376 16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:34.376 16:43:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:34.376 16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:34.376 16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:34.376 16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:34.376 16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:34.376 16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:34.376 16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.376 16:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.348 nvme0n1 00:35:35.348 16:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.348 16:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:35.348 16:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.348 16:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.348 16:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:35.348 16:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.348 16:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:35.349 16:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:35.349 16:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.349 16:43:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.349 16:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.349 16:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:35.349 16:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:35:35.349 16:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:35.349 16:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:35.349 16:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:35.349 16:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:35.349 16:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc0NDMzMDY3YWI1NGNiMmQ3NWIzYzY4M2MwYTVjMGE0OTVlMWJjNTM5YzA3MTRmegWaDg==: 00:35:35.349 16:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQxMzQwMmFhMDAzYzU4ODEyMTdhMTQ3MjhjMzUxNGYXUSx4: 00:35:35.349 16:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:35.349 16:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:35.349 16:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzc0NDMzMDY3YWI1NGNiMmQ3NWIzYzY4M2MwYTVjMGE0OTVlMWJjNTM5YzA3MTRmegWaDg==: 00:35:35.349 16:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQxMzQwMmFhMDAzYzU4ODEyMTdhMTQ3MjhjMzUxNGYXUSx4: ]] 00:35:35.349 16:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQxMzQwMmFhMDAzYzU4ODEyMTdhMTQ3MjhjMzUxNGYXUSx4: 00:35:35.349 16:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:35:35.349 16:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:35:35.349 16:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:35.349 16:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:35.349 16:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:35.349 16:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:35.349 16:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:35.349 16:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.349 16:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.349 16:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.349 16:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:35.349 16:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:35.349 16:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:35.349 16:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:35.349 16:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:35.349 16:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:35.349 16:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:35.349 16:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:35.349 16:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:35.349 16:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:35.349 16:43:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:35.349 16:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:35.349 16:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.349 16:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.283 nvme0n1 00:35:36.283 16:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.283 16:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:36.283 16:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:36.283 16:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.283 16:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.283 16:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.283 16:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:36.283 16:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:36.283 16:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.283 16:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.283 16:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.283 16:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:36.283 16:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:35:36.283 16:43:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:36.283 16:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:36.283 16:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:36.283 16:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:36.283 16:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWM2NTI3MjBkM2EyZTBjN2JlNDg3ZTMzNmIwYzgzNjY3YTc1NWNhOGI0ZDg0NDAyOGQwMjY0YjQ0NjA1ZGYyMlmgPtc=: 00:35:36.283 16:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:36.283 16:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:36.283 16:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:36.283 16:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWM2NTI3MjBkM2EyZTBjN2JlNDg3ZTMzNmIwYzgzNjY3YTc1NWNhOGI0ZDg0NDAyOGQwMjY0YjQ0NjA1ZGYyMlmgPtc=: 00:35:36.283 16:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:36.283 16:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:35:36.283 16:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:36.283 16:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:36.283 16:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:36.283 16:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:36.283 16:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:36.283 16:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:36.283 16:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.283 16:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.283 16:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.283 16:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:36.283 16:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:36.283 16:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:36.283 16:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:36.283 16:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:36.283 16:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:36.283 16:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:36.283 16:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:36.283 16:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:36.283 16:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:36.283 16:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:36.283 16:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:36.283 16:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.283 16:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.217 nvme0n1 00:35:37.217 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.217 
16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:37.217 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.217 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.217 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:37.217 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.217 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:37.217 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:37.217 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.217 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.217 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.217 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:37.217 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:37.217 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:37.217 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:35:37.217 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:37.217 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:37.217 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:37.217 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:37.217 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjJmMWVjY2YwNmUxOWU4MjY0NDZjYjkzNGM3MTQ1MTcu6tsr: 00:35:37.217 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTE3MjQwMjVjMmEzZDc3NDIxMzhiZTRlOWY2MGFiMTRiZjQ5NDc3ZGJlOWQ4MDdjZTYyMzhiMWVjNmU3MmIyNUyXBzU=: 00:35:37.217 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:37.217 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:37.217 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjJmMWVjY2YwNmUxOWU4MjY0NDZjYjkzNGM3MTQ1MTcu6tsr: 00:35:37.217 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTE3MjQwMjVjMmEzZDc3NDIxMzhiZTRlOWY2MGFiMTRiZjQ5NDc3ZGJlOWQ4MDdjZTYyMzhiMWVjNmU3MmIyNUyXBzU=: ]] 00:35:37.217 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTE3MjQwMjVjMmEzZDc3NDIxMzhiZTRlOWY2MGFiMTRiZjQ5NDc3ZGJlOWQ4MDdjZTYyMzhiMWVjNmU3MmIyNUyXBzU=: 00:35:37.217 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:35:37.217 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:37.217 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:37.217 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:37.217 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:37.217 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:37.217 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:37.217 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.217 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:35:37.217 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.217 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:37.218 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:37.218 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:37.218 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:37.218 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:37.218 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:37.218 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:37.218 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:37.218 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:37.218 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:37.218 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:37.218 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:37.218 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.218 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.476 nvme0n1 00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:37.476 16:43:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjVmMDFlZmM3MWNkZmE4N2Q5NzI3NDRjZTYyYTNlZmZiYjY1ZjI2OWY2MDRjMzc5d3s3ZA==: 00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: 00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjVmMDFlZmM3MWNkZmE4N2Q5NzI3NDRjZTYyYTNlZmZiYjY1ZjI2OWY2MDRjMzc5d3s3ZA==: 00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: ]] 00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: 00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 
-- # ip_candidates=() 00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.476 16:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.735 nvme0n1 00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2Y5NjdkYjNjNWU2MTFkM2ExNzU5NDQ0NmEyODViZmRPiSXu: 00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: 00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2Y5NjdkYjNjNWU2MTFkM2ExNzU5NDQ0NmEyODViZmRPiSXu: 00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: ]] 00:35:37.735 16:43:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: 00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 
00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.735 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.993 nvme0n1 00:35:37.993 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.993 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:37.993 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.993 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:37.993 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.993 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.993 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:37.993 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:37.993 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.993 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.993 16:43:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.993 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:37.993 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:35:37.993 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:37.993 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:37.993 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:37.993 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:37.993 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc0NDMzMDY3YWI1NGNiMmQ3NWIzYzY4M2MwYTVjMGE0OTVlMWJjNTM5YzA3MTRmegWaDg==: 00:35:37.993 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQxMzQwMmFhMDAzYzU4ODEyMTdhMTQ3MjhjMzUxNGYXUSx4: 00:35:37.993 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:37.993 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:37.993 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzc0NDMzMDY3YWI1NGNiMmQ3NWIzYzY4M2MwYTVjMGE0OTVlMWJjNTM5YzA3MTRmegWaDg==: 00:35:37.993 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQxMzQwMmFhMDAzYzU4ODEyMTdhMTQ3MjhjMzUxNGYXUSx4: ]] 00:35:37.993 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQxMzQwMmFhMDAzYzU4ODEyMTdhMTQ3MjhjMzUxNGYXUSx4: 00:35:37.993 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:35:37.993 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:37.993 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha512 00:35:37.993 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:37.993 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:37.993 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:37.993 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:37.993 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.993 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.993 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.993 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:37.993 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:37.994 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:37.994 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:37.994 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:37.994 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:37.994 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:37.994 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:37.994 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:37.994 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:37.994 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:37.994 16:43:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:37.994 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.994 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.994 nvme0n1 00:35:37.994 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.994 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:37.994 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.994 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:37.994 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.252 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.252 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:38.252 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:38.252 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.252 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.253 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.253 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:38.253 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:35:38.253 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:35:38.253 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:38.253 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:38.253 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:38.253 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWM2NTI3MjBkM2EyZTBjN2JlNDg3ZTMzNmIwYzgzNjY3YTc1NWNhOGI0ZDg0NDAyOGQwMjY0YjQ0NjA1ZGYyMlmgPtc=: 00:35:38.253 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:38.253 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:38.253 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:38.253 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWM2NTI3MjBkM2EyZTBjN2JlNDg3ZTMzNmIwYzgzNjY3YTc1NWNhOGI0ZDg0NDAyOGQwMjY0YjQ0NjA1ZGYyMlmgPtc=: 00:35:38.253 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:38.253 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:35:38.253 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:38.253 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:38.253 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:38.253 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:38.253 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:38.253 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:38.253 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.253 16:43:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.253 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.253 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:38.253 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:38.253 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:38.253 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:38.253 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:38.253 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:38.253 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:38.253 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:38.253 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:38.253 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:38.253 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:38.253 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:38.253 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.253 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.253 nvme0n1 00:35:38.253 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.253 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:35:38.253 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:38.253 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.253 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.253 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.253 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:38.253 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:38.253 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.253 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.511 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.511 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:38.511 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:38.511 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:35:38.511 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:38.511 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:38.511 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:38.511 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:38.511 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjJmMWVjY2YwNmUxOWU4MjY0NDZjYjkzNGM3MTQ1MTcu6tsr: 00:35:38.511 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZTE3MjQwMjVjMmEzZDc3NDIxMzhiZTRlOWY2MGFiMTRiZjQ5NDc3ZGJlOWQ4MDdjZTYyMzhiMWVjNmU3MmIyNUyXBzU=: 00:35:38.511 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:38.511 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:38.511 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjJmMWVjY2YwNmUxOWU4MjY0NDZjYjkzNGM3MTQ1MTcu6tsr: 00:35:38.511 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTE3MjQwMjVjMmEzZDc3NDIxMzhiZTRlOWY2MGFiMTRiZjQ5NDc3ZGJlOWQ4MDdjZTYyMzhiMWVjNmU3MmIyNUyXBzU=: ]] 00:35:38.511 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTE3MjQwMjVjMmEzZDc3NDIxMzhiZTRlOWY2MGFiMTRiZjQ5NDc3ZGJlOWQ4MDdjZTYyMzhiMWVjNmU3MmIyNUyXBzU=: 00:35:38.511 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:35:38.512 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:38.512 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:38.512 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:38.512 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:38.512 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:38.512 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:38.512 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.512 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.512 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.512 16:43:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:38.512 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:38.512 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:38.512 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:38.512 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:38.512 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:38.512 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:38.512 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:38.512 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:38.512 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:38.512 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:38.512 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:38.512 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.512 16:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.512 nvme0n1 00:35:38.512 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.512 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:38.512 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.512 16:43:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:38.512 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.512 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.512 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:38.512 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:38.512 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.512 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjVmMDFlZmM3MWNkZmE4N2Q5NzI3NDRjZTYyYTNlZmZiYjY1ZjI2OWY2MDRjMzc5d3s3ZA==: 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:38.769 
16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjVmMDFlZmM3MWNkZmE4N2Q5NzI3NDRjZTYyYTNlZmZiYjY1ZjI2OWY2MDRjMzc5d3s3ZA==: 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: ]] 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # 
local -A ip_candidates 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.769 nvme0n1 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2Y5NjdkYjNjNWU2MTFkM2ExNzU5NDQ0NmEyODViZmRPiSXu: 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2Y5NjdkYjNjNWU2MTFkM2ExNzU5NDQ0NmEyODViZmRPiSXu: 00:35:38.769 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: ]] 00:35:38.770 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: 
00:35:38.770 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:35:38.770 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:38.770 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:38.770 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:38.770 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:38.770 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:38.770 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:38.770 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.770 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.770 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.770 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:38.770 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:38.770 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:38.770 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:38.770 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:38.770 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:38.770 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:38.770 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:38.770 16:43:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:38.770 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:39.027 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:39.028 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:39.028 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.028 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.028 nvme0n1 00:35:39.028 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.028 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:39.028 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.028 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.028 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:39.028 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.028 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:39.028 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:39.028 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.028 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.028 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.028 16:43:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:39.028 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:35:39.028 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:39.028 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:39.028 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:39.028 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:39.028 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc0NDMzMDY3YWI1NGNiMmQ3NWIzYzY4M2MwYTVjMGE0OTVlMWJjNTM5YzA3MTRmegWaDg==: 00:35:39.028 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQxMzQwMmFhMDAzYzU4ODEyMTdhMTQ3MjhjMzUxNGYXUSx4: 00:35:39.028 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:39.028 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:39.028 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzc0NDMzMDY3YWI1NGNiMmQ3NWIzYzY4M2MwYTVjMGE0OTVlMWJjNTM5YzA3MTRmegWaDg==: 00:35:39.028 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQxMzQwMmFhMDAzYzU4ODEyMTdhMTQ3MjhjMzUxNGYXUSx4: ]] 00:35:39.028 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQxMzQwMmFhMDAzYzU4ODEyMTdhMTQ3MjhjMzUxNGYXUSx4: 00:35:39.028 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:35:39.028 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:39.028 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:39.028 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 
00:35:39.028 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:39.028 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:39.028 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:39.028 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.028 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.287 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.287 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:39.287 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:39.287 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:39.287 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:39.287 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:39.287 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:39.287 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:39.287 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:39.287 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:39.287 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:39.287 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:39.287 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:39.287 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.287 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.287 nvme0n1 00:35:39.287 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.287 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:39.287 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:39.287 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.287 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.287 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.287 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:39.287 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:39.287 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.287 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.545 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.545 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:39.545 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:35:39.545 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:39.545 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:39.545 16:43:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:39.545 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:39.545 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWM2NTI3MjBkM2EyZTBjN2JlNDg3ZTMzNmIwYzgzNjY3YTc1NWNhOGI0ZDg0NDAyOGQwMjY0YjQ0NjA1ZGYyMlmgPtc=: 00:35:39.545 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:39.545 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:39.545 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:39.545 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWM2NTI3MjBkM2EyZTBjN2JlNDg3ZTMzNmIwYzgzNjY3YTc1NWNhOGI0ZDg0NDAyOGQwMjY0YjQ0NjA1ZGYyMlmgPtc=: 00:35:39.545 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:39.545 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:35:39.545 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:39.545 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:39.545 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:39.545 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:39.545 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:39.545 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:39.545 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.545 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.545 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.546 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:39.546 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:39.546 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:39.546 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:39.546 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:39.546 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:39.546 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:39.546 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:39.546 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:39.546 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:39.546 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:39.546 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:39.546 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.546 16:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.546 nvme0n1 00:35:39.546 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.546 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:39.546 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:39.546 
16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.546 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.546 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.804 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:39.804 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:39.804 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.804 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.804 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.804 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:39.804 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:39.804 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:35:39.804 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:39.804 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:39.804 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:39.804 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:39.804 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjJmMWVjY2YwNmUxOWU4MjY0NDZjYjkzNGM3MTQ1MTcu6tsr: 00:35:39.804 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTE3MjQwMjVjMmEzZDc3NDIxMzhiZTRlOWY2MGFiMTRiZjQ5NDc3ZGJlOWQ4MDdjZTYyMzhiMWVjNmU3MmIyNUyXBzU=: 00:35:39.804 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:35:39.804 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:39.804 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjJmMWVjY2YwNmUxOWU4MjY0NDZjYjkzNGM3MTQ1MTcu6tsr: 00:35:39.804 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTE3MjQwMjVjMmEzZDc3NDIxMzhiZTRlOWY2MGFiMTRiZjQ5NDc3ZGJlOWQ4MDdjZTYyMzhiMWVjNmU3MmIyNUyXBzU=: ]] 00:35:39.804 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTE3MjQwMjVjMmEzZDc3NDIxMzhiZTRlOWY2MGFiMTRiZjQ5NDc3ZGJlOWQ4MDdjZTYyMzhiMWVjNmU3MmIyNUyXBzU=: 00:35:39.804 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:35:39.804 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:39.804 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:39.804 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:39.804 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:39.804 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:39.804 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:39.804 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.804 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.804 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.804 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:39.804 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:39.804 16:43:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:39.804 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:39.804 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:39.804 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:39.805 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:39.805 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:39.805 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:39.805 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:39.805 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:39.805 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:39.805 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.805 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.063 nvme0n1 00:35:40.063 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.063 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:40.063 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:40.063 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.063 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.063 16:43:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.063 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:40.063 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:40.063 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.063 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.063 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.063 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:40.063 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:35:40.063 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:40.063 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:40.063 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:40.063 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:40.063 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjVmMDFlZmM3MWNkZmE4N2Q5NzI3NDRjZTYyYTNlZmZiYjY1ZjI2OWY2MDRjMzc5d3s3ZA==: 00:35:40.063 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: 00:35:40.063 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:40.063 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:40.064 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjVmMDFlZmM3MWNkZmE4N2Q5NzI3NDRjZTYyYTNlZmZiYjY1ZjI2OWY2MDRjMzc5d3s3ZA==: 00:35:40.064 16:43:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: ]] 00:35:40.064 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: 00:35:40.064 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:35:40.064 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:40.064 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:40.064 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:40.064 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:40.064 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:40.064 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:40.064 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.064 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.064 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.064 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:40.064 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:40.064 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:40.064 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:40.064 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:40.064 16:43:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:40.064 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:40.064 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:40.064 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:40.064 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:40.064 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:40.064 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:40.064 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.064 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.322 nvme0n1 00:35:40.322 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.322 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:40.322 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:40.322 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.322 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.322 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.322 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:40.322 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:40.322 16:43:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.322 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.322 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.322 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:40.322 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:35:40.322 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:40.322 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:40.322 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:40.322 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:40.322 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2Y5NjdkYjNjNWU2MTFkM2ExNzU5NDQ0NmEyODViZmRPiSXu: 00:35:40.322 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: 00:35:40.322 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:40.322 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:40.322 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2Y5NjdkYjNjNWU2MTFkM2ExNzU5NDQ0NmEyODViZmRPiSXu: 00:35:40.322 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: ]] 00:35:40.322 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: 00:35:40.322 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:35:40.322 16:43:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:40.322 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:40.322 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:40.322 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:40.322 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:40.322 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:40.322 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.322 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.322 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.322 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:40.322 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:40.322 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:40.322 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:40.322 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:40.322 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:40.322 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:40.322 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:40.322 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:40.322 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:40.322 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:40.323 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:40.323 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.323 16:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.888 nvme0n1 00:35:40.888 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.888 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:40.888 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:40.888 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.888 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.888 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.888 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:40.888 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:40.888 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.888 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.888 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.888 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:40.888 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe4096 3 00:35:40.888 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:40.888 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:40.888 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:40.888 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:40.888 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc0NDMzMDY3YWI1NGNiMmQ3NWIzYzY4M2MwYTVjMGE0OTVlMWJjNTM5YzA3MTRmegWaDg==: 00:35:40.888 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQxMzQwMmFhMDAzYzU4ODEyMTdhMTQ3MjhjMzUxNGYXUSx4: 00:35:40.888 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:40.888 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:40.888 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzc0NDMzMDY3YWI1NGNiMmQ3NWIzYzY4M2MwYTVjMGE0OTVlMWJjNTM5YzA3MTRmegWaDg==: 00:35:40.888 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQxMzQwMmFhMDAzYzU4ODEyMTdhMTQ3MjhjMzUxNGYXUSx4: ]] 00:35:40.889 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQxMzQwMmFhMDAzYzU4ODEyMTdhMTQ3MjhjMzUxNGYXUSx4: 00:35:40.889 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:35:40.889 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:40.889 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:40.889 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:40.889 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:40.889 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:40.889 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:40.889 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.889 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.889 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.889 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:40.889 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:40.889 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:40.889 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:40.889 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:40.889 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:40.889 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:40.889 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:40.889 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:40.889 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:40.889 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:40.889 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:40.889 16:43:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.889 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.146 nvme0n1 00:35:41.146 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.146 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:41.146 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.146 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:41.146 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.146 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.146 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:41.146 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:41.146 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.146 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.146 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.146 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:41.146 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:35:41.146 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:41.146 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:41.146 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:41.146 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:35:41.146 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWM2NTI3MjBkM2EyZTBjN2JlNDg3ZTMzNmIwYzgzNjY3YTc1NWNhOGI0ZDg0NDAyOGQwMjY0YjQ0NjA1ZGYyMlmgPtc=: 00:35:41.146 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:41.147 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:41.147 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:41.147 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWM2NTI3MjBkM2EyZTBjN2JlNDg3ZTMzNmIwYzgzNjY3YTc1NWNhOGI0ZDg0NDAyOGQwMjY0YjQ0NjA1ZGYyMlmgPtc=: 00:35:41.147 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:41.147 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:35:41.147 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:41.147 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:41.147 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:41.147 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:41.147 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:41.147 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:41.147 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.147 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.147 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.147 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:41.147 
16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:41.147 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:41.147 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:41.147 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:41.147 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:41.147 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:41.147 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:41.147 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:41.147 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:41.147 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:41.147 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:41.147 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.147 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.404 nvme0n1 00:35:41.404 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.404 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:41.404 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:41.404 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.405 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:41.405 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.405 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:41.405 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:41.405 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.405 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.663 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.663 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:41.663 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:41.663 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:35:41.663 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:41.663 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:41.663 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:41.663 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:41.663 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjJmMWVjY2YwNmUxOWU4MjY0NDZjYjkzNGM3MTQ1MTcu6tsr: 00:35:41.663 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTE3MjQwMjVjMmEzZDc3NDIxMzhiZTRlOWY2MGFiMTRiZjQ5NDc3ZGJlOWQ4MDdjZTYyMzhiMWVjNmU3MmIyNUyXBzU=: 00:35:41.663 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:41.663 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:41.663 16:43:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjJmMWVjY2YwNmUxOWU4MjY0NDZjYjkzNGM3MTQ1MTcu6tsr: 00:35:41.663 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTE3MjQwMjVjMmEzZDc3NDIxMzhiZTRlOWY2MGFiMTRiZjQ5NDc3ZGJlOWQ4MDdjZTYyMzhiMWVjNmU3MmIyNUyXBzU=: ]] 00:35:41.663 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTE3MjQwMjVjMmEzZDc3NDIxMzhiZTRlOWY2MGFiMTRiZjQ5NDc3ZGJlOWQ4MDdjZTYyMzhiMWVjNmU3MmIyNUyXBzU=: 00:35:41.663 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:35:41.663 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:41.663 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:41.663 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:41.663 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:41.663 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:41.663 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:41.663 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.663 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.663 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.663 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:41.663 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:41.663 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:41.663 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 
-- # local -A ip_candidates 00:35:41.663 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:41.663 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:41.663 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:41.663 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:41.663 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:41.663 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:41.663 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:41.663 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:41.663 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.663 16:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.227 nvme0n1 00:35:42.227 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.227 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:42.227 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.227 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.227 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:42.227 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.227 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:35:42.227 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:42.227 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.227 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.227 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.227 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:42.227 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:35:42.227 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:42.227 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:42.227 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:42.227 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:42.227 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjVmMDFlZmM3MWNkZmE4N2Q5NzI3NDRjZTYyYTNlZmZiYjY1ZjI2OWY2MDRjMzc5d3s3ZA==: 00:35:42.227 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: 00:35:42.227 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:42.227 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:42.227 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjVmMDFlZmM3MWNkZmE4N2Q5NzI3NDRjZTYyYTNlZmZiYjY1ZjI2OWY2MDRjMzc5d3s3ZA==: 00:35:42.227 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: ]] 00:35:42.227 16:43:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: 00:35:42.227 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:35:42.227 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:42.227 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:42.227 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:42.227 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:42.227 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:42.227 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:42.228 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.228 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.228 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.228 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:42.228 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:42.228 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:42.228 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:42.228 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:42.228 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:42.228 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # 
[[ -z tcp ]] 00:35:42.228 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:42.228 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:42.228 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:42.228 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:42.228 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:42.228 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.228 16:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.793 nvme0n1 00:35:42.793 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.793 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:42.793 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:42.793 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.793 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.793 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.793 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:42.793 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:42.793 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.793 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:35:42.793 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.793 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:42.793 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:35:42.793 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:42.793 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:42.793 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:42.793 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:42.793 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2Y5NjdkYjNjNWU2MTFkM2ExNzU5NDQ0NmEyODViZmRPiSXu: 00:35:42.793 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: 00:35:42.793 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:42.793 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:42.793 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2Y5NjdkYjNjNWU2MTFkM2ExNzU5NDQ0NmEyODViZmRPiSXu: 00:35:42.793 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: ]] 00:35:42.793 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: 00:35:42.793 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:35:42.793 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:42.793 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:42.793 
16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:42.793 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:42.793 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:42.793 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:42.793 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.793 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.793 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.793 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:42.793 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:42.793 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:42.793 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:42.793 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:42.793 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:42.793 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:42.793 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:42.793 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:42.793 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:42.793 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:42.793 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:42.794 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.794 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.359 nvme0n1 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:43.359 16:43:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc0NDMzMDY3YWI1NGNiMmQ3NWIzYzY4M2MwYTVjMGE0OTVlMWJjNTM5YzA3MTRmegWaDg==: 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQxMzQwMmFhMDAzYzU4ODEyMTdhMTQ3MjhjMzUxNGYXUSx4: 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzc0NDMzMDY3YWI1NGNiMmQ3NWIzYzY4M2MwYTVjMGE0OTVlMWJjNTM5YzA3MTRmegWaDg==: 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQxMzQwMmFhMDAzYzU4ODEyMTdhMTQ3MjhjMzUxNGYXUSx4: ]] 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQxMzQwMmFhMDAzYzU4ODEyMTdhMTQ3MjhjMzUxNGYXUSx4: 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.359 16:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:35:43.925 nvme0n1 00:35:43.925 16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.925 16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:43.925 16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.925 16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.925 16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:43.925 16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.925 16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:43.925 16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:43.925 16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.925 16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.925 16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.925 16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:43.925 16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:35:43.925 16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:43.925 16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:43.925 16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:43.925 16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:43.925 16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NWM2NTI3MjBkM2EyZTBjN2JlNDg3ZTMzNmIwYzgzNjY3YTc1NWNhOGI0ZDg0NDAyOGQwMjY0YjQ0NjA1ZGYyMlmgPtc=: 00:35:43.925 16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:43.925 16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:43.925 16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:43.925 16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWM2NTI3MjBkM2EyZTBjN2JlNDg3ZTMzNmIwYzgzNjY3YTc1NWNhOGI0ZDg0NDAyOGQwMjY0YjQ0NjA1ZGYyMlmgPtc=: 00:35:43.925 16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:43.925 16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:35:43.925 16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:43.925 16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:43.925 16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:43.925 16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:43.925 16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:43.925 16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:43.925 16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.925 16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.925 16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.925 16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:43.925 16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:43.925 
16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:43.925 16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:43.925 16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:43.925 16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:43.925 16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:43.925 16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:43.925 16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:43.925 16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:43.925 16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:43.925 16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:43.925 16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.925 16:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.490 nvme0n1 00:35:44.490 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.490 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:44.490 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.490 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.490 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:44.490 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.748 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:44.748 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:44.748 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.748 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.748 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.748 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:44.748 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:44.748 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:35:44.748 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:44.748 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:44.748 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:44.748 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:44.748 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjJmMWVjY2YwNmUxOWU4MjY0NDZjYjkzNGM3MTQ1MTcu6tsr: 00:35:44.748 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTE3MjQwMjVjMmEzZDc3NDIxMzhiZTRlOWY2MGFiMTRiZjQ5NDc3ZGJlOWQ4MDdjZTYyMzhiMWVjNmU3MmIyNUyXBzU=: 00:35:44.748 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:44.748 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:44.748 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjJmMWVjY2YwNmUxOWU4MjY0NDZjYjkzNGM3MTQ1MTcu6tsr: 00:35:44.748 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTE3MjQwMjVjMmEzZDc3NDIxMzhiZTRlOWY2MGFiMTRiZjQ5NDc3ZGJlOWQ4MDdjZTYyMzhiMWVjNmU3MmIyNUyXBzU=: ]] 00:35:44.748 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTE3MjQwMjVjMmEzZDc3NDIxMzhiZTRlOWY2MGFiMTRiZjQ5NDc3ZGJlOWQ4MDdjZTYyMzhiMWVjNmU3MmIyNUyXBzU=: 00:35:44.748 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:35:44.748 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:44.748 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:44.748 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:44.748 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:44.748 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:44.748 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:44.748 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.748 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.748 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.748 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:44.748 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:44.748 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:44.748 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:44.748 16:43:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:44.748 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:44.748 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:44.748 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:44.748 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:44.748 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:44.748 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:44.748 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:44.748 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.748 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.681 nvme0n1 00:35:45.681 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.681 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:45.681 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:45.681 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.681 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.681 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.681 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:45.681 16:43:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:45.681 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.681 16:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.681 16:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.681 16:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:45.681 16:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:35:45.681 16:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:45.681 16:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:45.681 16:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:45.681 16:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:45.681 16:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjVmMDFlZmM3MWNkZmE4N2Q5NzI3NDRjZTYyYTNlZmZiYjY1ZjI2OWY2MDRjMzc5d3s3ZA==: 00:35:45.681 16:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: 00:35:45.681 16:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:45.681 16:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:45.681 16:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjVmMDFlZmM3MWNkZmE4N2Q5NzI3NDRjZTYyYTNlZmZiYjY1ZjI2OWY2MDRjMzc5d3s3ZA==: 00:35:45.681 16:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: ]] 00:35:45.681 16:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: 00:35:45.681 16:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:35:45.681 16:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:45.681 16:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:45.681 16:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:45.681 16:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:45.681 16:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:45.681 16:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:45.681 16:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.681 16:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.681 16:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.681 16:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:45.681 16:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:45.681 16:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:45.681 16:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:45.681 16:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:45.681 16:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:45.681 16:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:45.681 16:43:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:45.681 16:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:45.681 16:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:45.681 16:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:45.681 16:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:45.681 16:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.681 16:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.613 nvme0n1 00:35:46.613 16:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.613 16:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:46.614 16:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.614 16:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.614 16:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:46.614 16:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.614 16:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:46.614 16:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:46.614 16:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.614 16:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.614 16:43:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.614 16:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:46.614 16:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:35:46.614 16:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:46.614 16:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:46.614 16:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:46.614 16:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:46.614 16:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2Y5NjdkYjNjNWU2MTFkM2ExNzU5NDQ0NmEyODViZmRPiSXu: 00:35:46.614 16:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: 00:35:46.614 16:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:46.614 16:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:46.614 16:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2Y5NjdkYjNjNWU2MTFkM2ExNzU5NDQ0NmEyODViZmRPiSXu: 00:35:46.614 16:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: ]] 00:35:46.614 16:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: 00:35:46.614 16:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:35:46.614 16:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:46.614 16:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:46.614 16:43:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:46.614 16:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:46.614 16:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:46.614 16:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:46.614 16:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.614 16:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.614 16:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.614 16:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:46.614 16:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:46.614 16:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:46.614 16:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:46.614 16:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:46.614 16:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:46.614 16:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:46.614 16:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:46.614 16:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:46.614 16:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:46.614 16:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:46.614 16:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:46.614 16:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.614 16:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.548 nvme0n1 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:47.548 16:43:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc0NDMzMDY3YWI1NGNiMmQ3NWIzYzY4M2MwYTVjMGE0OTVlMWJjNTM5YzA3MTRmegWaDg==: 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQxMzQwMmFhMDAzYzU4ODEyMTdhMTQ3MjhjMzUxNGYXUSx4: 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzc0NDMzMDY3YWI1NGNiMmQ3NWIzYzY4M2MwYTVjMGE0OTVlMWJjNTM5YzA3MTRmegWaDg==: 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQxMzQwMmFhMDAzYzU4ODEyMTdhMTQ3MjhjMzUxNGYXUSx4: ]] 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQxMzQwMmFhMDAzYzU4ODEyMTdhMTQ3MjhjMzUxNGYXUSx4: 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.548 16:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
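The `DHHC-1:...:` strings echoed above are the configured secrets in the in-band authentication key format used by nvme-cli and SPDK: a hash-id field followed by a base64 payload that, by that convention, carries the secret bytes plus a 4-byte CRC32 trailer. A minimal sketch that checks the shape of one of the keys from this trace (the helper name `dhchap_key_len` is illustrative, not part of the test scripts):

```shell
# Hedged sketch: compute the secret length of a DHHC-1 key string.
# Assumes the nvme-cli convention that the base64 payload ends in a
# 4-byte CRC32 trailer; dhchap_key_len is a hypothetical helper.
dhchap_key_len() {
    local key=$1
    # Strip the "DHHC-1:<hash-id>:" prefix and the trailing ":"
    local b64=${key#DHHC-1:??:}
    b64=${b64%:}
    local total
    total=$(printf '%s' "$b64" | base64 -d | wc -c)
    # Secret length excludes the 4-byte CRC trailer
    echo $(( total - 4 ))
}

# Key 3 from the trace above decodes to a 48-byte secret
dhchap_key_len 'DHHC-1:02:Nzc0NDMzMDY3YWI1NGNiMmQ3NWIzYzY4M2MwYTVjMGE0OTVlMWJjNTM5YzA3MTRmegWaDg==:'  # prints 48
```

The hash-id field (`00`/`01`/`02`/`03`) selects the optional secret transformation; the `02` keys in this trace pair with the longer 48-byte secrets.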
00:35:48.922 nvme0n1 00:35:48.922 16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.922 16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:48.922 16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.922 16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:48.922 16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.922 16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.922 16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:48.922 16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:48.922 16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.922 16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.922 16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.922 16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:48.922 16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:35:48.922 16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:48.922 16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:48.922 16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:48.922 16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:48.922 16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NWM2NTI3MjBkM2EyZTBjN2JlNDg3ZTMzNmIwYzgzNjY3YTc1NWNhOGI0ZDg0NDAyOGQwMjY0YjQ0NjA1ZGYyMlmgPtc=: 00:35:48.922 16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:48.922 16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:48.922 16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:48.922 16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWM2NTI3MjBkM2EyZTBjN2JlNDg3ZTMzNmIwYzgzNjY3YTc1NWNhOGI0ZDg0NDAyOGQwMjY0YjQ0NjA1ZGYyMlmgPtc=: 00:35:48.922 16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:48.922 16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:35:48.922 16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:48.922 16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:48.922 16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:48.922 16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:48.922 16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:48.922 16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:48.922 16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.922 16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.922 16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.922 16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:48.922 16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:48.922 
16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:48.922 16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:48.922 16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:48.922 16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:48.922 16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:48.922 16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:48.922 16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:48.922 16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:48.922 16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:48.922 16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:48.922 16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.922 16:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.856 nvme0n1 00:35:49.856 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.856 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:49.856 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.856 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:49.856 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.856 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.856 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:49.856 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:49.856 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.856 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.856 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.856 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:49.856 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:49.856 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:49.856 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:49.856 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:49.856 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjVmMDFlZmM3MWNkZmE4N2Q5NzI3NDRjZTYyYTNlZmZiYjY1ZjI2OWY2MDRjMzc5d3s3ZA==: 00:35:49.856 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: 00:35:49.856 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:49.856 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:49.856 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjVmMDFlZmM3MWNkZmE4N2Q5NzI3NDRjZTYyYTNlZmZiYjY1ZjI2OWY2MDRjMzc5d3s3ZA==: 00:35:49.856 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: ]] 00:35:49.856 
16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: 00:35:49.856 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:49.856 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.856 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.856 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.856 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:35:49.856 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:49.856 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:49.856 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:49.856 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:49.856 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:49.856 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:49.856 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:49.856 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:49.857 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:49.857 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:49.857 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:35:49.857 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:35:49.857 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:49.857 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:35:49.857 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:49.857 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:35:49.857 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:49.857 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:49.857 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.857 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.857 request: 00:35:49.857 { 00:35:49.857 "name": "nvme0", 00:35:49.857 "trtype": "tcp", 00:35:49.857 "traddr": "10.0.0.1", 00:35:49.857 "adrfam": "ipv4", 00:35:49.857 "trsvcid": "4420", 00:35:49.857 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:49.857 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:49.857 "prchk_reftag": false, 00:35:49.857 "prchk_guard": false, 00:35:49.857 "hdgst": false, 00:35:49.857 "ddgst": false, 00:35:49.857 "allow_unrecognized_csi": false, 00:35:49.857 "method": "bdev_nvme_attach_controller", 00:35:49.857 "req_id": 1 00:35:49.857 } 00:35:49.857 Got JSON-RPC error response 00:35:49.857 response: 00:35:49.857 { 00:35:49.857 "code": -5, 00:35:49.857 "message": "Input/output 
error" 00:35:49.857 } 00:35:49.857 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:35:49.857 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:35:49.857 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:49.857 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:49.857 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:49.857 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:35:49.857 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.857 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:35:49.857 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.857 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.857 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:35:49.857 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:35:49.857 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:49.857 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:49.857 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:49.857 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:49.857 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:49.857 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:49.857 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:35:49.857 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:49.857 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:49.857 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:49.857 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:49.857 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:35:49.857 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:49.857 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:35:49.857 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:49.857 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:35:49.857 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:49.857 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:49.857 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.857 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.115 request: 00:35:50.115 { 00:35:50.115 "name": "nvme0", 00:35:50.115 "trtype": "tcp", 00:35:50.115 "traddr": "10.0.0.1", 
00:35:50.115 "adrfam": "ipv4", 00:35:50.115 "trsvcid": "4420", 00:35:50.115 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:50.115 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:50.115 "prchk_reftag": false, 00:35:50.115 "prchk_guard": false, 00:35:50.115 "hdgst": false, 00:35:50.115 "ddgst": false, 00:35:50.115 "dhchap_key": "key2", 00:35:50.115 "allow_unrecognized_csi": false, 00:35:50.115 "method": "bdev_nvme_attach_controller", 00:35:50.115 "req_id": 1 00:35:50.115 } 00:35:50.115 Got JSON-RPC error response 00:35:50.115 response: 00:35:50.115 { 00:35:50.115 "code": -5, 00:35:50.115 "message": "Input/output error" 00:35:50.115 } 00:35:50.115 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:35:50.115 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:35:50.115 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:50.115 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:50.115 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:50.115 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:35:50.115 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.115 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.115 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:35:50.116 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.116 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:35:50.116 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:35:50.116 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:50.116 16:43:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:50.116 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:50.116 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:50.116 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:50.116 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:50.116 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:50.116 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:50.116 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:50.116 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:50.116 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:50.116 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:35:50.116 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:50.116 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:35:50.116 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:50.116 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:35:50.116 16:43:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:50.116 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:50.116 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.116 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.116 request: 00:35:50.116 { 00:35:50.116 "name": "nvme0", 00:35:50.116 "trtype": "tcp", 00:35:50.116 "traddr": "10.0.0.1", 00:35:50.116 "adrfam": "ipv4", 00:35:50.116 "trsvcid": "4420", 00:35:50.116 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:50.116 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:50.116 "prchk_reftag": false, 00:35:50.116 "prchk_guard": false, 00:35:50.116 "hdgst": false, 00:35:50.116 "ddgst": false, 00:35:50.116 "dhchap_key": "key1", 00:35:50.116 "dhchap_ctrlr_key": "ckey2", 00:35:50.116 "allow_unrecognized_csi": false, 00:35:50.116 "method": "bdev_nvme_attach_controller", 00:35:50.116 "req_id": 1 00:35:50.116 } 00:35:50.116 Got JSON-RPC error response 00:35:50.116 response: 00:35:50.116 { 00:35:50.116 "code": -5, 00:35:50.116 "message": "Input/output error" 00:35:50.116 } 00:35:50.116 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:35:50.116 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:35:50.116 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:50.116 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:50.116 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:50.116 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:35:50.116 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:50.116 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:50.116 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:50.116 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:50.116 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:50.116 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:50.116 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:50.116 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:50.116 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:50.116 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:50.116 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:35:50.116 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.116 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.375 nvme0n1 00:35:50.376 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.376 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:50.376 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:50.376 16:43:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:50.376 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:50.376 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:50.376 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2Y5NjdkYjNjNWU2MTFkM2ExNzU5NDQ0NmEyODViZmRPiSXu: 00:35:50.376 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: 00:35:50.376 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:50.376 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:50.376 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2Y5NjdkYjNjNWU2MTFkM2ExNzU5NDQ0NmEyODViZmRPiSXu: 00:35:50.376 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: ]] 00:35:50.376 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: 00:35:50.376 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:50.376 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.376 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.376 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.376 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:35:50.376 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.376 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:35:50.376 16:43:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.376 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.376 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:50.376 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:50.376 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:35:50.376 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:50.376 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:35:50.376 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:50.376 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:35:50.376 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:50.376 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:50.376 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.376 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.376 request: 00:35:50.376 { 00:35:50.376 "name": "nvme0", 00:35:50.376 "dhchap_key": "key1", 00:35:50.376 "dhchap_ctrlr_key": "ckey2", 00:35:50.376 "method": "bdev_nvme_set_keys", 00:35:50.376 "req_id": 1 00:35:50.376 } 00:35:50.376 Got JSON-RPC error response 00:35:50.376 response: 00:35:50.376 { 00:35:50.376 "code": -13, 00:35:50.376 "message": "Permission denied" 00:35:50.376 } 00:35:50.376 
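The failing RPCs above (`code: -5` on a connect without keys, `code: -13` on `bdev_nvme_set_keys` with a mismatched controller key) are intentional: the harness wraps them in a `NOT` helper that succeeds only when the wrapped command fails. A minimal sketch of that negative-test pattern (this is an illustration, not the actual `autotest_common.sh` implementation, which also inspects the exit status more carefully):

```shell
# Hedged sketch of the NOT wrapper pattern seen in the trace: run the
# command, capture its exit status, and invert the result so that an
# expected failure counts as a pass.
NOT() {
    local es=0
    "$@" || es=$?
    # Succeed only if the wrapped command failed
    (( es != 0 ))
}

NOT false && echo "negative test passed"
NOT true || echo "unexpected success correctly rejected"
```

This is why the trace shows `[[ 1 == 0 ]]` followed by `es=1` after each rejected RPC: the nonzero status is recorded, then converted into a passing assertion.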
16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:35:50.376 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:35:50.376 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:50.376 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:50.376 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:50.633 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:35:50.633 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.633 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:35:50.633 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.633 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.633 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:35:50.633 16:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:35:51.567 16:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:35:51.567 16:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:35:51.567 16:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.567 16:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.567 16:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.567 16:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:35:51.567 16:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:35:52.502 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:35:52.502 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:35:52.502 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:52.502 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.502 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:52.502 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:35:52.502 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:52.502 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:52.502 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:52.502 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:52.502 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:52.502 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjVmMDFlZmM3MWNkZmE4N2Q5NzI3NDRjZTYyYTNlZmZiYjY1ZjI2OWY2MDRjMzc5d3s3ZA==: 00:35:52.502 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: 00:35:52.502 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:52.502 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:52.761 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjVmMDFlZmM3MWNkZmE4N2Q5NzI3NDRjZTYyYTNlZmZiYjY1ZjI2OWY2MDRjMzc5d3s3ZA==: 00:35:52.761 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: ]] 00:35:52.761 16:43:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZmYzNmZWMxYWJhMzU3OTZiYjZiMWY4ZjFlZjI5NDcxMjFkYTkyZGMxNDYzNWNiEcgndg==: 00:35:52.761 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:35:52.761 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:52.761 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:52.761 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:52.761 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:52.761 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:52.761 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:52.761 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:52.761 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:52.761 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:52.761 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:52.761 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:35:52.761 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:52.761 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.761 nvme0n1 00:35:52.761 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:52.761 16:43:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:52.761 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:52.761 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:52.761 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:52.761 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:52.761 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2Y5NjdkYjNjNWU2MTFkM2ExNzU5NDQ0NmEyODViZmRPiSXu: 00:35:52.761 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: 00:35:52.761 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:52.761 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:52.761 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2Y5NjdkYjNjNWU2MTFkM2ExNzU5NDQ0NmEyODViZmRPiSXu: 00:35:52.761 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: ]] 00:35:52.762 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTgyMzQ1OTc0MjdiZDY5NzhiZTYxMmFhZGEzNjdkYTjxCEg7: 00:35:52.762 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:52.762 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:35:52.762 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:52.762 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:35:52.762 
16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:52.762 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:35:52.762 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:52.762 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:52.762 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:52.762 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.762 request: 00:35:52.762 { 00:35:52.762 "name": "nvme0", 00:35:53.020 "dhchap_key": "key2", 00:35:53.020 "dhchap_ctrlr_key": "ckey1", 00:35:53.020 "method": "bdev_nvme_set_keys", 00:35:53.020 "req_id": 1 00:35:53.020 } 00:35:53.020 Got JSON-RPC error response 00:35:53.020 response: 00:35:53.020 { 00:35:53.020 "code": -13, 00:35:53.020 "message": "Permission denied" 00:35:53.020 } 00:35:53.020 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:35:53.020 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:35:53.020 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:53.020 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:53.020 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:53.020 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:35:53.020 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:35:53.020 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.020 16:43:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.020 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.020 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:35:53.020 16:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:35:53.955 16:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:35:53.955 16:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:35:53.955 16:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.955 16:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.955 16:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.955 16:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:35:53.955 16:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:35:53.955 16:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:35:53.955 16:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:35:53.955 16:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:35:53.955 16:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:35:53.955 16:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:53.955 16:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:35:53.955 16:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:53.955 16:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:53.955 rmmod nvme_tcp 00:35:53.955 rmmod nvme_fabrics 00:35:53.955 16:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:35:53.955 16:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:35:53.955 16:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:35:53.955 16:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@513 -- # '[' -n 3306700 ']' 00:35:53.955 16:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # killprocess 3306700 00:35:53.955 16:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 3306700 ']' 00:35:53.955 16:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 3306700 00:35:53.955 16:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:35:53.956 16:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:53.956 16:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3306700 00:35:53.956 16:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:53.956 16:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:53.956 16:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3306700' 00:35:53.956 killing process with pid 3306700 00:35:53.956 16:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 3306700 00:35:53.956 16:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 3306700 00:35:55.331 16:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:35:55.331 16:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:35:55.331 16:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:35:55.331 16:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 
00:35:55.331 16:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-save 00:35:55.331 16:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:35:55.331 16:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-restore 00:35:55.331 16:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:55.331 16:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:55.331 16:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:55.331 16:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:55.331 16:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:57.232 16:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:57.232 16:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:35:57.232 16:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:57.232 16:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:35:57.232 16:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:35:57.232 16:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # echo 0 00:35:57.232 16:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:57.232 16:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@713 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:57.232 16:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:57.232 16:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:57.232 16:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:35:57.232 16:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:35:57.232 16:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@722 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:58.615 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:58.615 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:58.615 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:58.615 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:58.615 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:58.615 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:58.615 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:58.615 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:58.615 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:58.615 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:58.615 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:58.615 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:58.615 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:58.615 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:58.615 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:58.615 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:59.559 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:35:59.559 16:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.W9G /tmp/spdk.key-null.qvB /tmp/spdk.key-sha256.3zo /tmp/spdk.key-sha384.e1F /tmp/spdk.key-sha512.8ub 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:35:59.818 16:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:00.753 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:36:00.753 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:36:00.753 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:36:00.753 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:36:00.753 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:36:00.753 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:36:00.753 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:36:00.753 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:36:00.753 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:36:00.753 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:36:00.753 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:36:00.753 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:36:00.753 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:36:00.753 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:36:00.753 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:36:00.753 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:36:00.753 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:36:01.011 00:36:01.011 real 0m55.953s 00:36:01.011 user 0m53.002s 00:36:01.011 sys 0m6.453s 00:36:01.011 16:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:01.011 16:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.011 ************************************ 00:36:01.011 END TEST nvmf_auth_host 00:36:01.011 ************************************ 00:36:01.011 16:44:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 
00:36:01.011 16:44:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:01.011 16:44:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:36:01.011 16:44:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:01.011 16:44:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.011 ************************************ 00:36:01.011 START TEST nvmf_digest 00:36:01.011 ************************************ 00:36:01.011 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:01.011 * Looking for test storage... 00:36:01.011 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:01.011 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:01.011 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lcov --version 00:36:01.011 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:01.270 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:01.270 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:01.270 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:01.270 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:01.270 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:36:01.270 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:36:01.270 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:36:01.270 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@337 -- # read -ra ver2 00:36:01.270 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:36:01.270 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:36:01.270 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:36:01.270 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:01.270 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:36:01.270 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:36:01.270 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:01.270 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:01.270 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:36:01.270 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:36:01.270 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:01.270 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:36:01.270 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:36:01.270 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:36:01.270 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:36:01.270 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:01.270 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:36:01.270 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:36:01.270 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:01.270 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:36:01.270 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:36:01.270 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:01.270 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:01.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:01.271 --rc genhtml_branch_coverage=1 00:36:01.271 --rc genhtml_function_coverage=1 00:36:01.271 --rc genhtml_legend=1 00:36:01.271 --rc geninfo_all_blocks=1 00:36:01.271 --rc geninfo_unexecuted_blocks=1 00:36:01.271 00:36:01.271 ' 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:01.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:01.271 --rc genhtml_branch_coverage=1 00:36:01.271 --rc genhtml_function_coverage=1 00:36:01.271 --rc genhtml_legend=1 00:36:01.271 --rc geninfo_all_blocks=1 00:36:01.271 --rc geninfo_unexecuted_blocks=1 00:36:01.271 00:36:01.271 ' 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:01.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:01.271 --rc genhtml_branch_coverage=1 00:36:01.271 --rc genhtml_function_coverage=1 00:36:01.271 --rc genhtml_legend=1 00:36:01.271 --rc geninfo_all_blocks=1 00:36:01.271 --rc geninfo_unexecuted_blocks=1 00:36:01.271 00:36:01.271 ' 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:01.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:01.271 --rc genhtml_branch_coverage=1 00:36:01.271 --rc genhtml_function_coverage=1 00:36:01.271 --rc genhtml_legend=1 00:36:01.271 --rc geninfo_all_blocks=1 00:36:01.271 --rc geninfo_unexecuted_blocks=1 00:36:01.271 00:36:01.271 ' 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:01.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # prepare_net_devs 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@434 -- # local -g is_hw=no 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # remove_spdk_ns 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:36:01.271 16:44:01 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:36:01.271 16:44:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:03.173 16:44:03 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:03.173 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:03.173 16:44:03 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:03.173 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@413 -- # 
for net_dev in "${!pci_net_devs[@]}" 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:03.173 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:03.173 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # is_hw=yes 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ yes == yes ]] 
00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:03.173 16:44:03 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:03.173 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:03.174 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:03.174 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:03.174 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:36:03.174 00:36:03.174 --- 10.0.0.2 ping statistics --- 00:36:03.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:03.174 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:36:03.174 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:03.174 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:03.174 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:36:03.174 00:36:03.174 --- 10.0.0.1 ping statistics --- 00:36:03.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:03.174 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:36:03.174 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:03.174 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # return 0 00:36:03.174 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:36:03.174 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:03.174 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:36:03.174 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:36:03.174 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:03.174 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:36:03.174 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:36:03.174 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:36:03.174 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:36:03.174 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:36:03.174 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:03.174 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:03.174 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:03.432 ************************************ 00:36:03.432 START TEST nvmf_digest_clean 00:36:03.432 ************************************ 00:36:03.432 
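The `nvmf_tcp_init` steps traced above can be sketched as the following editorial summary (not part of the harness). Interface names cvl_0_0/cvl_0_1, the 10.0.0.x addresses, and port 4420 are taken from this log; the real commands need root and the E810 ports, so DRYRUN (on by default here) just prints each step.

```shell
# Sketch of the two-namespace NVMe/TCP topology built in the log above.
# DRYRUN=1 (the default) echoes commands instead of executing them.
DRYRUN="${DRYRUN:-1}"
run() { if [ "$DRYRUN" = 1 ]; then echo "$*"; else "$@"; fi; }

run ip netns add cvl_0_0_ns_spdk                # target-side network namespace
run ip link set cvl_0_0 netns cvl_0_0_ns_spdk   # move the target port into it
run ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator IP on the host side
run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
run ip link set cvl_0_1 up
run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open NVMe/TCP port
run ping -c 1 10.0.0.2                          # initiator -> target reachability check
```

Run with `DRYRUN=0` (as root, with the NICs present) this reproduces the topology the pings above verify in both directions.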
16:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:36:03.432 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:36:03.432 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:36:03.432 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:36:03.432 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:36:03.432 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:36:03.432 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:36:03.432 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:03.432 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:03.432 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # nvmfpid=3316849 00:36:03.432 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:36:03.432 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # waitforlisten 3316849 00:36:03.432 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3316849 ']' 00:36:03.432 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:03.432 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:03.432 16:44:03 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:03.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:03.432 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:03.432 16:44:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:03.432 [2024-09-29 16:44:03.846136] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:36:03.432 [2024-09-29 16:44:03.846281] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:03.432 [2024-09-29 16:44:03.979791] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:03.691 [2024-09-29 16:44:04.208271] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:03.691 [2024-09-29 16:44:04.208367] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:03.691 [2024-09-29 16:44:04.208389] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:03.691 [2024-09-29 16:44:04.208410] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:03.691 [2024-09-29 16:44:04.208426] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
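The `waitforlisten 3316849` call above follows a retry-until-listening pattern; this is an editorial sketch of it, not the autotest implementation. In the log the probe would be something like `rpc.py -s /var/tmp/spdk.sock spdk_get_version`; here the probe command and retry budget are parameters so the sketch is self-contained.

```shell
# Poll a probe command until it succeeds or the retry budget is exhausted.
# waitfor PROBE [MAX_TRIES]  -> returns 0 once PROBE succeeds, 1 on timeout.
waitfor() {
  probe="$1"; max="${2:-100}"; i=0
  while [ "$i" -lt "$max" ]; do
    if sh -c "$probe" >/dev/null 2>&1; then return 0; fi
    i=$((i + 1)); sleep 0.1
  done
  return 1
}

waitfor true 3 && echo ready
```

With a real target, `waitfor 'scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version'` would block until the freshly started nvmf_tgt answers on its RPC socket (path and RPC name as used elsewhere in this log).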
00:36:03.691 [2024-09-29 16:44:04.208475] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:36:04.623 16:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:04.623 16:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:36:04.623 16:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:36:04.623 16:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:04.623 16:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:04.623 16:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:04.623 16:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:36:04.623 16:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:36:04.623 16:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:36:04.623 16:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.623 16:44:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:04.881 null0 00:36:04.881 [2024-09-29 16:44:05.280463] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:04.881 [2024-09-29 16:44:05.304779] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:04.881 16:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.881 16:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:36:04.881 16:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:04.881 16:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:04.881 16:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:36:04.881 16:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:36:04.881 16:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:36:04.881 16:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:04.881 16:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3317008 00:36:04.881 16:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:36:04.881 16:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3317008 /var/tmp/bperf.sock 00:36:04.881 16:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3317008 ']' 00:36:04.881 16:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:04.881 16:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:04.881 16:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:04.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:36:04.881 16:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:04.881 16:44:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:04.881 [2024-09-29 16:44:05.393360] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:36:04.881 [2024-09-29 16:44:05.393490] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3317008 ] 00:36:05.138 [2024-09-29 16:44:05.516927] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:05.395 [2024-09-29 16:44:05.762647] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:05.961 16:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:05.961 16:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:36:05.961 16:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:05.961 16:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:05.961 16:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:06.555 16:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:06.555 16:44:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:07.141 nvme0n1 00:36:07.141 16:44:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:07.141 16:44:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:07.141 Running I/O for 2 seconds... 00:36:09.444 13470.00 IOPS, 52.62 MiB/s 13975.00 IOPS, 54.59 MiB/s 00:36:09.444 Latency(us) 00:36:09.444 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:09.444 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:09.444 nvme0n1 : 2.01 13982.52 54.62 0.00 0.00 9139.72 4271.98 19612.25 00:36:09.444 =================================================================================================================== 00:36:09.444 Total : 13982.52 54.62 0.00 0.00 9139.72 4271.98 19612.25 00:36:09.444 { 00:36:09.444 "results": [ 00:36:09.444 { 00:36:09.444 "job": "nvme0n1", 00:36:09.444 "core_mask": "0x2", 00:36:09.444 "workload": "randread", 00:36:09.444 "status": "finished", 00:36:09.444 "queue_depth": 128, 00:36:09.444 "io_size": 4096, 00:36:09.444 "runtime": 2.012584, 00:36:09.444 "iops": 13982.521971753726, 00:36:09.444 "mibps": 54.61922645216299, 00:36:09.444 "io_failed": 0, 00:36:09.444 "io_timeout": 0, 00:36:09.444 "avg_latency_us": 9139.717904994293, 00:36:09.444 "min_latency_us": 4271.976296296296, 00:36:09.444 "max_latency_us": 19612.254814814816 00:36:09.444 } 00:36:09.444 ], 00:36:09.444 "core_count": 1 00:36:09.444 } 00:36:09.444 16:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:09.444 16:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:09.444 16:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # 
bperf_rpc accel_get_stats 00:36:09.444 16:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:09.444 | select(.opcode=="crc32c") 00:36:09.444 | "\(.module_name) \(.executed)"' 00:36:09.444 16:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:09.444 16:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:09.444 16:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:09.444 16:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:09.444 16:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:09.444 16:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3317008 00:36:09.444 16:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3317008 ']' 00:36:09.444 16:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3317008 00:36:09.444 16:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:36:09.444 16:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:09.444 16:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3317008 00:36:09.444 16:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:09.444 16:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:09.444 16:44:09 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3317008' 00:36:09.444 killing process with pid 3317008 00:36:09.444 16:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3317008 00:36:09.444 Received shutdown signal, test time was about 2.000000 seconds 00:36:09.444 00:36:09.444 Latency(us) 00:36:09.444 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:09.444 =================================================================================================================== 00:36:09.444 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:09.444 16:44:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3317008 00:36:10.819 16:44:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:36:10.819 16:44:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:10.819 16:44:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:10.819 16:44:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:36:10.819 16:44:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:36:10.819 16:44:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:36:10.819 16:44:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:10.819 16:44:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3317683 00:36:10.819 16:44:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:36:10.819 
16:44:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3317683 /var/tmp/bperf.sock 00:36:10.819 16:44:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3317683 ']' 00:36:10.819 16:44:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:10.819 16:44:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:10.819 16:44:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:10.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:10.819 16:44:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:10.819 16:44:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:10.819 [2024-09-29 16:44:11.130965] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:36:10.819 [2024-09-29 16:44:11.131103] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3317683 ] 00:36:10.819 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:10.819 Zero copy mechanism will not be used. 
00:36:10.819 [2024-09-29 16:44:11.253999] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:11.077 [2024-09-29 16:44:11.495013] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:11.642 16:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:11.642 16:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:36:11.642 16:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:11.642 16:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:11.642 16:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:12.208 16:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:12.208 16:44:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:12.774 nvme0n1 00:36:12.774 16:44:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:12.774 16:44:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:12.774 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:12.774 Zero copy mechanism will not be used. 00:36:12.774 Running I/O for 2 seconds... 
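The bdevperf summary lines in this log report both IOPS and MiB/s; as a sanity check (ours, not the harness's), MiB/s is simply IOPS times the I/O size divided by 2^20. Using the totals from the two randread runs recorded here:

```shell
# Cross-check bdevperf's MiB/s column against its IOPS column.
mibps() { awk -v iops="$1" -v io="$2" 'BEGIN { printf "%.2f\n", iops * io / 1048576 }'; }

mibps 13982.52 4096     # 4 KiB randread total  -> 54.62 (matches the log)
mibps 4723.84  131072   # 128 KiB randread total -> 590.48 (matches the log)
```

Note the larger I/O size also triggers the "greater than zero copy threshold (65536)" notice above, since 131072 exceeds the 64 KiB zero-copy cutoff.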
00:36:15.080 4594.00 IOPS, 574.25 MiB/s 4722.00 IOPS, 590.25 MiB/s 00:36:15.080 Latency(us) 00:36:15.080 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:15.080 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:36:15.080 nvme0n1 : 2.00 4723.84 590.48 0.00 0.00 3380.69 1074.06 7718.68 00:36:15.080 =================================================================================================================== 00:36:15.080 Total : 4723.84 590.48 0.00 0.00 3380.69 1074.06 7718.68 00:36:15.080 { 00:36:15.080 "results": [ 00:36:15.080 { 00:36:15.080 "job": "nvme0n1", 00:36:15.080 "core_mask": "0x2", 00:36:15.080 "workload": "randread", 00:36:15.080 "status": "finished", 00:36:15.080 "queue_depth": 16, 00:36:15.080 "io_size": 131072, 00:36:15.080 "runtime": 2.002608, 00:36:15.080 "iops": 4723.840112493309, 00:36:15.080 "mibps": 590.4800140616636, 00:36:15.080 "io_failed": 0, 00:36:15.080 "io_timeout": 0, 00:36:15.080 "avg_latency_us": 3380.685285099053, 00:36:15.080 "min_latency_us": 1074.0622222222223, 00:36:15.080 "max_latency_us": 7718.684444444444 00:36:15.080 } 00:36:15.080 ], 00:36:15.080 "core_count": 1 00:36:15.080 } 00:36:15.080 16:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:15.080 16:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:15.080 16:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:15.080 16:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:15.080 | select(.opcode=="crc32c") 00:36:15.080 | "\(.module_name) \(.executed)"' 00:36:15.080 16:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:15.080 16:44:15 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:15.080 16:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:15.080 16:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:15.080 16:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:15.080 16:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3317683 00:36:15.080 16:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3317683 ']' 00:36:15.080 16:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3317683 00:36:15.080 16:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:36:15.080 16:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:15.080 16:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3317683 00:36:15.080 16:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:15.080 16:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:15.080 16:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3317683' 00:36:15.080 killing process with pid 3317683 00:36:15.080 16:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3317683 00:36:15.080 Received shutdown signal, test time was about 2.000000 seconds 00:36:15.080 00:36:15.080 Latency(us) 00:36:15.080 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:36:15.080 =================================================================================================================== 00:36:15.080 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:15.080 16:44:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3317683 00:36:16.015 16:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:36:16.015 16:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:16.015 16:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:16.015 16:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:36:16.015 16:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:36:16.015 16:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:36:16.015 16:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:16.015 16:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3318342 00:36:16.015 16:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3318342 /var/tmp/bperf.sock 00:36:16.015 16:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:36:16.015 16:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3318342 ']' 00:36:16.015 16:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:16.015 16:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:36:16.015 16:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:16.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:16.015 16:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:16.015 16:44:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:16.273 [2024-09-29 16:44:16.611968] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:36:16.273 [2024-09-29 16:44:16.612097] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3318342 ] 00:36:16.273 [2024-09-29 16:44:16.741587] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:16.531 [2024-09-29 16:44:16.993638] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:17.096 16:44:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:17.096 16:44:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:36:17.096 16:44:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:17.096 16:44:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:17.096 16:44:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:17.662 16:44:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- 
# bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:17.663 16:44:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:18.228 nvme0n1 00:36:18.228 16:44:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:18.228 16:44:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:18.228 Running I/O for 2 seconds... 00:36:20.533 15529.00 IOPS, 60.66 MiB/s 15952.00 IOPS, 62.31 MiB/s 00:36:20.533 Latency(us) 00:36:20.533 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:20.533 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:20.533 nvme0n1 : 2.01 15970.97 62.39 0.00 0.00 8000.85 4369.07 17185.00 00:36:20.533 =================================================================================================================== 00:36:20.533 Total : 15970.97 62.39 0.00 0.00 8000.85 4369.07 17185.00 00:36:20.533 { 00:36:20.533 "results": [ 00:36:20.533 { 00:36:20.533 "job": "nvme0n1", 00:36:20.533 "core_mask": "0x2", 00:36:20.533 "workload": "randwrite", 00:36:20.533 "status": "finished", 00:36:20.533 "queue_depth": 128, 00:36:20.533 "io_size": 4096, 00:36:20.533 "runtime": 2.005639, 00:36:20.533 "iops": 15970.969850506497, 00:36:20.533 "mibps": 62.386600978541004, 00:36:20.533 "io_failed": 0, 00:36:20.533 "io_timeout": 0, 00:36:20.533 "avg_latency_us": 8000.852954452955, 00:36:20.533 "min_latency_us": 4369.066666666667, 00:36:20.533 "max_latency_us": 17184.995555555557 00:36:20.533 } 00:36:20.533 ], 00:36:20.533 
"core_count": 1 00:36:20.533 } 00:36:20.533 16:44:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:20.533 16:44:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:20.533 16:44:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:20.533 16:44:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:20.533 | select(.opcode=="crc32c") 00:36:20.533 | "\(.module_name) \(.executed)"' 00:36:20.533 16:44:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:20.533 16:44:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:20.533 16:44:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:20.533 16:44:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:20.533 16:44:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:20.533 16:44:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3318342 00:36:20.533 16:44:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3318342 ']' 00:36:20.533 16:44:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3318342 00:36:20.533 16:44:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:36:20.533 16:44:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:20.533 16:44:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- 
# ps --no-headers -o comm= 3318342 00:36:20.533 16:44:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:20.533 16:44:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:20.533 16:44:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3318342' 00:36:20.533 killing process with pid 3318342 00:36:20.533 16:44:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3318342 00:36:20.533 Received shutdown signal, test time was about 2.000000 seconds 00:36:20.533 00:36:20.533 Latency(us) 00:36:20.533 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:20.533 =================================================================================================================== 00:36:20.533 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:20.533 16:44:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3318342 00:36:21.907 16:44:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:36:21.907 16:44:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:21.907 16:44:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:21.907 16:44:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:36:21.907 16:44:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:36:21.907 16:44:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:36:21.907 16:44:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:21.907 16:44:22 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3319005 00:36:21.907 16:44:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:36:21.907 16:44:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3319005 /var/tmp/bperf.sock 00:36:21.907 16:44:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3319005 ']' 00:36:21.907 16:44:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:21.907 16:44:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:21.907 16:44:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:21.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:21.907 16:44:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:21.907 16:44:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:21.907 [2024-09-29 16:44:22.215140] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:36:21.907 [2024-09-29 16:44:22.215279] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3319005 ] 00:36:21.907 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:21.907 Zero copy mechanism will not be used. 
00:36:21.907 [2024-09-29 16:44:22.349832] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:22.164 [2024-09-29 16:44:22.600667] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:22.729 16:44:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:22.729 16:44:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:36:22.729 16:44:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:22.729 16:44:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:22.729 16:44:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:23.296 16:44:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:23.296 16:44:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:23.553 nvme0n1 00:36:23.811 16:44:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:23.811 16:44:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:23.811 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:23.811 Zero copy mechanism will not be used. 00:36:23.811 Running I/O for 2 seconds... 
00:36:26.116 4247.00 IOPS, 530.88 MiB/s 4339.00 IOPS, 542.38 MiB/s 00:36:26.116 Latency(us) 00:36:26.116 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:26.116 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:36:26.116 nvme0n1 : 2.01 4336.54 542.07 0.00 0.00 3678.80 2754.94 13786.83 00:36:26.116 =================================================================================================================== 00:36:26.116 Total : 4336.54 542.07 0.00 0.00 3678.80 2754.94 13786.83 00:36:26.116 { 00:36:26.116 "results": [ 00:36:26.116 { 00:36:26.116 "job": "nvme0n1", 00:36:26.116 "core_mask": "0x2", 00:36:26.116 "workload": "randwrite", 00:36:26.116 "status": "finished", 00:36:26.116 "queue_depth": 16, 00:36:26.116 "io_size": 131072, 00:36:26.116 "runtime": 2.005516, 00:36:26.116 "iops": 4336.539823167704, 00:36:26.116 "mibps": 542.067477895963, 00:36:26.116 "io_failed": 0, 00:36:26.116 "io_timeout": 0, 00:36:26.116 "avg_latency_us": 3678.804872178145, 00:36:26.116 "min_latency_us": 2754.9392592592594, 00:36:26.116 "max_latency_us": 13786.832592592593 00:36:26.116 } 00:36:26.116 ], 00:36:26.116 "core_count": 1 00:36:26.116 } 00:36:26.116 16:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:26.116 16:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:26.116 16:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:26.116 16:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:26.116 | select(.opcode=="crc32c") 00:36:26.116 | "\(.module_name) \(.executed)"' 00:36:26.116 16:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:26.116 16:44:26 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:26.116 16:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:26.116 16:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:26.116 16:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:26.116 16:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3319005 00:36:26.116 16:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3319005 ']' 00:36:26.116 16:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3319005 00:36:26.116 16:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:36:26.116 16:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:26.116 16:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3319005 00:36:26.116 16:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:26.116 16:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:26.116 16:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3319005' 00:36:26.116 killing process with pid 3319005 00:36:26.116 16:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3319005 00:36:26.116 Received shutdown signal, test time was about 2.000000 seconds 00:36:26.116 00:36:26.116 Latency(us) 00:36:26.116 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:36:26.116 =================================================================================================================== 00:36:26.116 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:26.116 16:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3319005 00:36:27.050 16:44:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3316849 00:36:27.050 16:44:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3316849 ']' 00:36:27.050 16:44:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3316849 00:36:27.050 16:44:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:36:27.050 16:44:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:27.050 16:44:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3316849 00:36:27.307 16:44:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:27.307 16:44:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:27.307 16:44:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3316849' 00:36:27.307 killing process with pid 3316849 00:36:27.307 16:44:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3316849 00:36:27.307 16:44:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3316849 00:36:28.678 00:36:28.678 real 0m25.192s 00:36:28.678 user 0m49.350s 00:36:28.678 sys 0m4.660s 00:36:28.678 16:44:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:36:28.678 16:44:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:28.678 ************************************ 00:36:28.678 END TEST nvmf_digest_clean 00:36:28.678 ************************************ 00:36:28.678 16:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:36:28.678 16:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:28.678 16:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:28.678 16:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:28.678 ************************************ 00:36:28.678 START TEST nvmf_digest_error 00:36:28.678 ************************************ 00:36:28.678 16:44:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:36:28.678 16:44:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:36:28.678 16:44:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:36:28.678 16:44:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:28.678 16:44:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:28.678 16:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # nvmfpid=3319826 00:36:28.678 16:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:36:28.678 16:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # waitforlisten 3319826 00:36:28.678 16:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@831 -- # '[' -z 3319826 ']' 00:36:28.678 16:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:28.678 16:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:28.678 16:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:28.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:28.678 16:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:28.678 16:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:28.678 [2024-09-29 16:44:29.093861] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:36:28.678 [2024-09-29 16:44:29.094008] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:28.678 [2024-09-29 16:44:29.231361] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:28.935 [2024-09-29 16:44:29.480777] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:28.935 [2024-09-29 16:44:29.480862] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:28.935 [2024-09-29 16:44:29.480888] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:28.935 [2024-09-29 16:44:29.480912] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:36:28.935 [2024-09-29 16:44:29.480930] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:28.935 [2024-09-29 16:44:29.480986] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:36:29.501 16:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:29.501 16:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:36:29.501 16:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:36:29.501 16:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:29.501 16:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:29.760 16:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:29.760 16:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:36:29.760 16:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:29.760 16:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:29.760 [2024-09-29 16:44:30.083342] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:36:29.760 16:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:29.760 16:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:36:29.760 16:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:36:29.760 16:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 
-- # xtrace_disable 00:36:29.760 16:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:30.018 null0 00:36:30.018 [2024-09-29 16:44:30.468705] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:30.018 [2024-09-29 16:44:30.492986] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:30.018 16:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.018 16:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:36:30.018 16:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:30.018 16:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:36:30.018 16:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:36:30.018 16:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:36:30.018 16:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3319992 00:36:30.018 16:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:36:30.018 16:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3319992 /var/tmp/bperf.sock 00:36:30.018 16:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3319992 ']' 00:36:30.018 16:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:30.018 16:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:30.018 16:44:30 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:30.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:30.018 16:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:30.018 16:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:30.018 [2024-09-29 16:44:30.578723] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:36:30.018 [2024-09-29 16:44:30.578869] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3319992 ] 00:36:30.275 [2024-09-29 16:44:30.711383] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:30.533 [2024-09-29 16:44:30.965529] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:31.097 16:44:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:31.097 16:44:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:36:31.097 16:44:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:31.097 16:44:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:31.354 16:44:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:31.354 16:44:31 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:31.354 16:44:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:31.354 16:44:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:31.354 16:44:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:31.355 16:44:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:31.612 nvme0n1 00:36:31.870 16:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:36:31.870 16:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:31.870 16:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:31.870 16:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:31.870 16:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:31.870 16:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:31.870 Running I/O for 2 seconds... 
00:36:31.870 [2024-09-29 16:44:32.332357] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:31.870 [2024-09-29 16:44:32.332448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:31.870 [2024-09-29 16:44:32.332515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:31.870 [2024-09-29 16:44:32.355791] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:31.870 [2024-09-29 16:44:32.355849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:5684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:31.870 [2024-09-29 16:44:32.355876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:31.870 [2024-09-29 16:44:32.372626] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:31.870 [2024-09-29 16:44:32.372686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:31.870 [2024-09-29 16:44:32.372733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:31.870 [2024-09-29 16:44:32.393110] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:31.870 [2024-09-29 16:44:32.393160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:31.870 [2024-09-29 16:44:32.393191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:31.870 [2024-09-29 16:44:32.411870] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:31.870 [2024-09-29 16:44:32.411926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:31.870 [2024-09-29 16:44:32.411955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:31.870 [2024-09-29 16:44:32.429140] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:31.870 [2024-09-29 16:44:32.429189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:7414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:31.870 [2024-09-29 16:44:32.429220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.128 [2024-09-29 16:44:32.449667] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.128 [2024-09-29 16:44:32.449748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:18570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.128 [2024-09-29 16:44:32.449774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.128 [2024-09-29 16:44:32.466270] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.128 [2024-09-29 16:44:32.466319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.128 [2024-09-29 16:44:32.466348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.128 [2024-09-29 16:44:32.489949] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.128 [2024-09-29 16:44:32.490009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.128 [2024-09-29 16:44:32.490038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.128 [2024-09-29 16:44:32.505285] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.128 [2024-09-29 16:44:32.505333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.128 [2024-09-29 16:44:32.505362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.128 [2024-09-29 16:44:32.527668] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.128 [2024-09-29 16:44:32.527753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.128 [2024-09-29 16:44:32.527797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.128 [2024-09-29 16:44:32.543893] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.128 [2024-09-29 16:44:32.543935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.128 [2024-09-29 16:44:32.543966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.128 [2024-09-29 16:44:32.566364] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.128 [2024-09-29 16:44:32.566414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.128 [2024-09-29 16:44:32.566443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.128 [2024-09-29 16:44:32.587702] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.128 [2024-09-29 16:44:32.587757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:18929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.128 [2024-09-29 16:44:32.587797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.128 [2024-09-29 16:44:32.604544] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.128 [2024-09-29 16:44:32.604594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.128 [2024-09-29 16:44:32.604623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.128 [2024-09-29 16:44:32.626972] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.128 [2024-09-29 16:44:32.627034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.128 [2024-09-29 16:44:32.627064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.128 [2024-09-29 16:44:32.644577] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.128 [2024-09-29 16:44:32.644626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.128 [2024-09-29 16:44:32.644655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.128 [2024-09-29 16:44:32.663557] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.128 [2024-09-29 16:44:32.663606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.128 [2024-09-29 16:44:32.663635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.128 [2024-09-29 16:44:32.679652] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.128 [2024-09-29 16:44:32.679726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.128 [2024-09-29 16:44:32.679752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.387 [2024-09-29 16:44:32.700106] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.387 [2024-09-29 16:44:32.700155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.387 [2024-09-29 16:44:32.700184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.387 [2024-09-29 16:44:32.721944] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.387 [2024-09-29 16:44:32.722004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.387 [2024-09-29 16:44:32.722034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.387 [2024-09-29 16:44:32.743497] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.387 [2024-09-29 16:44:32.743573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.387 [2024-09-29 16:44:32.743618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.387 [2024-09-29 16:44:32.760114] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.387 [2024-09-29 16:44:32.760175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:17982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.387 [2024-09-29 16:44:32.760223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.387 [2024-09-29 16:44:32.784639] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.387 [2024-09-29 16:44:32.784706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.387 [2024-09-29 16:44:32.784752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.387 [2024-09-29 16:44:32.800459] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.387 [2024-09-29 16:44:32.800509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:7420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.387 [2024-09-29 16:44:32.800538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.387 [2024-09-29 16:44:32.820272] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.387 [2024-09-29 16:44:32.820320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:20775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.387 [2024-09-29 16:44:32.820350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.387 [2024-09-29 16:44:32.837821] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.387 [2024-09-29 16:44:32.837879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.387 [2024-09-29 16:44:32.837920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.387 [2024-09-29 16:44:32.856059] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.387 [2024-09-29 16:44:32.856107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.387 [2024-09-29 16:44:32.856137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.387 [2024-09-29 16:44:32.874804] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.387 [2024-09-29 16:44:32.874858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.387 [2024-09-29 16:44:32.874885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.387 [2024-09-29 16:44:32.891167] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.387 [2024-09-29 16:44:32.891216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.387 [2024-09-29 16:44:32.891245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.387 [2024-09-29 16:44:32.911034] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.387 [2024-09-29 16:44:32.911083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.387 [2024-09-29 16:44:32.911113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.387 [2024-09-29 16:44:32.926569] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.387 [2024-09-29 16:44:32.926619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.387 [2024-09-29 16:44:32.926648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.387 [2024-09-29 16:44:32.946634] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.387 [2024-09-29 16:44:32.946692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.387 [2024-09-29 16:44:32.946739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.646 [2024-09-29 16:44:32.966812] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.646 [2024-09-29 16:44:32.966869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.646 [2024-09-29 16:44:32.966895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.646 [2024-09-29 16:44:32.985248] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.646 [2024-09-29 16:44:32.985296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.646 [2024-09-29 16:44:32.985328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.646 [2024-09-29 16:44:33.001668] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.646 [2024-09-29 16:44:33.001726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.646 [2024-09-29 16:44:33.001755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.646 [2024-09-29 16:44:33.022591] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.646 [2024-09-29 16:44:33.022647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.646 [2024-09-29 16:44:33.022684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.646 [2024-09-29 16:44:33.039501] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.646 [2024-09-29 16:44:33.039550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.646 [2024-09-29 16:44:33.039579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.646 [2024-09-29 16:44:33.059435] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.646 [2024-09-29 16:44:33.059484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:18377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.646 [2024-09-29 16:44:33.059513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.646 [2024-09-29 16:44:33.076760] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.646 [2024-09-29 16:44:33.076802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.646 [2024-09-29 16:44:33.076826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.647 [2024-09-29 16:44:33.096691] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.647 [2024-09-29 16:44:33.096745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.647 [2024-09-29 16:44:33.096779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.647 [2024-09-29 16:44:33.118804] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.647 [2024-09-29 16:44:33.118859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:17832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.647 [2024-09-29 16:44:33.118889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.647 [2024-09-29 16:44:33.135487] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.647 [2024-09-29 16:44:33.135537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.647 [2024-09-29 16:44:33.135567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.647 [2024-09-29 16:44:33.154285] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.647 [2024-09-29 16:44:33.154346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.647 [2024-09-29 16:44:33.154379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.647 [2024-09-29 16:44:33.174929] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.647 [2024-09-29 16:44:33.174970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.647 [2024-09-29 16:44:33.175010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.647 [2024-09-29 16:44:33.191558] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.647 [2024-09-29 16:44:33.191607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.647 [2024-09-29 16:44:33.191637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.905 [2024-09-29 16:44:33.214508] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.905 [2024-09-29 16:44:33.214559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.905 [2024-09-29 16:44:33.214590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.905 [2024-09-29 16:44:33.232266] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.905 [2024-09-29 16:44:33.232316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.905 [2024-09-29 16:44:33.232346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.905 [2024-09-29 16:44:33.254992] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.905 [2024-09-29 16:44:33.255058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.905 [2024-09-29 16:44:33.255120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.905 [2024-09-29 16:44:33.274229] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.905 [2024-09-29 16:44:33.274286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:19562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.905 [2024-09-29 16:44:33.274330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.905 [2024-09-29 16:44:33.289442] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.905 [2024-09-29 16:44:33.289491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.905 [2024-09-29 16:44:33.289520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.905 13198.00 IOPS, 51.55 MiB/s [2024-09-29 16:44:33.311579] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.905 [2024-09-29 16:44:33.311632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.905 [2024-09-29 16:44:33.311690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.905 [2024-09-29 16:44:33.329277] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.905 [2024-09-29 16:44:33.329327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.905 [2024-09-29 16:44:33.329357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.906 [2024-09-29 16:44:33.352251] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.906 [2024-09-29 16:44:33.352301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:17017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.906 [2024-09-29 16:44:33.352331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.906 [2024-09-29 16:44:33.371193] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.906 [2024-09-29 16:44:33.371258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.906 [2024-09-29 16:44:33.371305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.906 [2024-09-29 16:44:33.387811] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.906 [2024-09-29 16:44:33.387866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.906 [2024-09-29 16:44:33.387892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.906 [2024-09-29 16:44:33.408122] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.906 [2024-09-29 16:44:33.408172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.906 [2024-09-29 16:44:33.408201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.906 [2024-09-29 16:44:33.430594] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.906 [2024-09-29 16:44:33.430668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:11579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.906 [2024-09-29 16:44:33.430742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.906 [2024-09-29 16:44:33.446849] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.906 [2024-09-29 16:44:33.446904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.906 [2024-09-29 16:44:33.446930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:32.906 [2024-09-29 16:44:33.464572] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:32.906 [2024-09-29 16:44:33.464622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:15311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:32.906 [2024-09-29 16:44:33.464652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:33.167 [2024-09-29 16:44:33.484524] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:33.167 [2024-09-29 16:44:33.484574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:33.167 [2024-09-29 16:44:33.484605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:33.167 [2024-09-29 16:44:33.504626] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:33.167 [2024-09-29 16:44:33.504704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:33.167 [2024-09-29 16:44:33.504767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:33.167 [2024-09-29 16:44:33.521255] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:33.167 [2024-09-29 16:44:33.521305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:33.167 [2024-09-29 16:44:33.521335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:33.167 [2024-09-29 16:44:33.539305] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:33.167 [2024-09-29 16:44:33.539354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:33.167 [2024-09-29 16:44:33.539391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:33.167 [2024-09-29 16:44:33.555887] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:33.167 [2024-09-29 16:44:33.555944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:2076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:33.167 [2024-09-29 16:44:33.555969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:33.167 [2024-09-29 16:44:33.579516] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:33.167 [2024-09-29 16:44:33.579566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:7977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:33.167 [2024-09-29 16:44:33.579595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:33.167 [2024-09-29 16:44:33.595266] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:33.167 [2024-09-29 16:44:33.595315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:33.167 [2024-09-29 16:44:33.595344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:33.167 [2024-09-29 16:44:33.617896] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:33.167 [2024-09-29 16:44:33.617944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:14120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:33.167 [2024-09-29 16:44:33.617988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:33.167 [2024-09-29 16:44:33.638597] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:33.167 [2024-09-29 16:44:33.638655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:13448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:33.167 [2024-09-29 16:44:33.638707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:33.167 [2024-09-29 16:44:33.655266] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:33.167 [2024-09-29 16:44:33.655330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:33.168 [2024-09-29 16:44:33.655376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:33.168 [2024-09-29 16:44:33.677731] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:33.168 [2024-09-29 16:44:33.677788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:33.168 [2024-09-29 16:44:33.677827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:33.168 [2024-09-29 16:44:33.694910] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:33.168 [2024-09-29 16:44:33.694973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:33.168 [2024-09-29 16:44:33.695002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:33.168 [2024-09-29 16:44:33.714842] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:33.168 [2024-09-29 16:44:33.714899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:33.168 [2024-09-29 16:44:33.714925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR
(00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:33.426 [2024-09-29 16:44:33.730692] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:33.426 [2024-09-29 16:44:33.730753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:33.426 [2024-09-29 16:44:33.730777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:33.426 [2024-09-29 16:44:33.750989] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:33.426 [2024-09-29 16:44:33.751064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:33.426 [2024-09-29 16:44:33.751103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:33.426 [2024-09-29 16:44:33.773353] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:33.426 [2024-09-29 16:44:33.773402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:33.426 [2024-09-29 16:44:33.773431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:33.426 [2024-09-29 16:44:33.794800] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:33.426 [2024-09-29 16:44:33.794865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:4380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:33.426 [2024-09-29 16:44:33.794913] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:33.426 [2024-09-29 16:44:33.811504] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:33.426 [2024-09-29 16:44:33.811552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:33.426 [2024-09-29 16:44:33.811588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:33.426 [2024-09-29 16:44:33.836697] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:33.426 [2024-09-29 16:44:33.836754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:22974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:33.426 [2024-09-29 16:44:33.836778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:33.427 [2024-09-29 16:44:33.852445] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:33.427 [2024-09-29 16:44:33.852494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:33.427 [2024-09-29 16:44:33.852523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:33.427 [2024-09-29 16:44:33.872510] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:33.427 [2024-09-29 16:44:33.872559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14929 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:33.427 [2024-09-29 16:44:33.872589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:33.427 [2024-09-29 16:44:33.889209] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:33.427 [2024-09-29 16:44:33.889258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:33.427 [2024-09-29 16:44:33.889288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:33.427 [2024-09-29 16:44:33.911582] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:33.427 [2024-09-29 16:44:33.911631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:33.427 [2024-09-29 16:44:33.911661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:33.427 [2024-09-29 16:44:33.932796] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:33.427 [2024-09-29 16:44:33.932852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:33.427 [2024-09-29 16:44:33.932880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:33.427 [2024-09-29 16:44:33.949723] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:33.427 [2024-09-29 16:44:33.949772] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:33.427 [2024-09-29 16:44:33.949802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:33.427 [2024-09-29 16:44:33.970882] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:33.427 [2024-09-29 16:44:33.970938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:33.427 [2024-09-29 16:44:33.970965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:33.427 [2024-09-29 16:44:33.987800] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:33.427 [2024-09-29 16:44:33.987849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:33.427 [2024-09-29 16:44:33.987885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:33.685 [2024-09-29 16:44:34.010412] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:33.685 [2024-09-29 16:44:34.010463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:18334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:33.685 [2024-09-29 16:44:34.010491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:33.685 [2024-09-29 16:44:34.026058] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:33.685 [2024-09-29 16:44:34.026107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:20209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:33.685 [2024-09-29 16:44:34.026137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:33.685 [2024-09-29 16:44:34.044690] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:33.685 [2024-09-29 16:44:34.044752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:17174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:33.685 [2024-09-29 16:44:34.044777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:33.685 [2024-09-29 16:44:34.062210] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:33.685 [2024-09-29 16:44:34.062259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:33.685 [2024-09-29 16:44:34.062289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:33.685 [2024-09-29 16:44:34.083240] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:33.685 [2024-09-29 16:44:34.083290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:13944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:33.685 [2024-09-29 16:44:34.083328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:33.685 [2024-09-29 16:44:34.100537] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:33.685 [2024-09-29 16:44:34.100585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:33.685 [2024-09-29 16:44:34.100614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:33.685 [2024-09-29 16:44:34.122451] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:33.685 [2024-09-29 16:44:34.122501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:33.685 [2024-09-29 16:44:34.122530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:33.685 [2024-09-29 16:44:34.140040] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:33.685 [2024-09-29 16:44:34.140119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:33.686 [2024-09-29 16:44:34.140166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:33.686 [2024-09-29 16:44:34.162766] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:33.686 [2024-09-29 16:44:34.162826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:6351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:33.686 [2024-09-29 16:44:34.162853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:33.686 [2024-09-29 16:44:34.179937] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:33.686 [2024-09-29 16:44:34.179977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:33.686 [2024-09-29 16:44:34.180019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:33.686 [2024-09-29 16:44:34.201967] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:33.686 [2024-09-29 16:44:34.202015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:33.686 [2024-09-29 16:44:34.202045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:33.686 [2024-09-29 16:44:34.223416] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:33.686 [2024-09-29 16:44:34.223480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:33.686 [2024-09-29 16:44:34.223526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:33.686 [2024-09-29 16:44:34.240443] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:33.686 [2024-09-29 16:44:34.240498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:33.686 [2024-09-29 16:44:34.240529] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:33.944 [2024-09-29 16:44:34.261774] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:33.944 [2024-09-29 16:44:34.261830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:33.944 [2024-09-29 16:44:34.261855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:33.944 [2024-09-29 16:44:34.278692] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:33.944 [2024-09-29 16:44:34.278748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:33.944 [2024-09-29 16:44:34.278786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:33.944 [2024-09-29 16:44:34.297962] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:33.944 [2024-09-29 16:44:34.298010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:33.944 [2024-09-29 16:44:34.298052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:33.944 13204.00 IOPS, 51.58 MiB/s [2024-09-29 16:44:34.317377] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:33.944 [2024-09-29 16:44:34.317429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 
lba:23535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:33.944 [2024-09-29 16:44:34.317460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:33.944 00:36:33.944 Latency(us) 00:36:33.944 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:33.944 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:33.944 nvme0n1 : 2.05 12941.06 50.55 0.00 0.00 9673.18 4708.88 50098.63 00:36:33.944 =================================================================================================================== 00:36:33.944 Total : 12941.06 50.55 0.00 0.00 9673.18 4708.88 50098.63 00:36:33.944 { 00:36:33.944 "results": [ 00:36:33.944 { 00:36:33.944 "job": "nvme0n1", 00:36:33.944 "core_mask": "0x2", 00:36:33.944 "workload": "randread", 00:36:33.944 "status": "finished", 00:36:33.944 "queue_depth": 128, 00:36:33.944 "io_size": 4096, 00:36:33.944 "runtime": 2.050528, 00:36:33.944 "iops": 12941.057132601945, 00:36:33.944 "mibps": 50.55100442422635, 00:36:33.944 "io_failed": 0, 00:36:33.944 "io_timeout": 0, 00:36:33.944 "avg_latency_us": 9673.177885416319, 00:36:33.945 "min_latency_us": 4708.882962962963, 00:36:33.945 "max_latency_us": 50098.63111111111 00:36:33.945 } 00:36:33.945 ], 00:36:33.945 "core_count": 1 00:36:33.945 } 00:36:33.945 16:44:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:33.945 16:44:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:33.945 16:44:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:33.945 | .driver_specific 00:36:33.945 | .nvme_error 00:36:33.945 | .status_code 00:36:33.945 | .command_transient_transport_error' 00:36:33.945 16:44:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:34.203 16:44:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 104 > 0 )) 00:36:34.203 16:44:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3319992 00:36:34.203 16:44:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3319992 ']' 00:36:34.203 16:44:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3319992 00:36:34.203 16:44:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:36:34.203 16:44:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:34.203 16:44:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3319992 00:36:34.203 16:44:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:34.203 16:44:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:34.203 16:44:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3319992' 00:36:34.203 killing process with pid 3319992 00:36:34.203 16:44:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3319992 00:36:34.203 Received shutdown signal, test time was about 2.000000 seconds 00:36:34.203 00:36:34.203 Latency(us) 00:36:34.203 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:34.203 =================================================================================================================== 00:36:34.203 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:34.203 16:44:34 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3319992 00:36:35.578 16:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:36:35.578 16:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:35.578 16:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:36:35.578 16:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:36:35.578 16:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:36:35.578 16:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3320647 00:36:35.578 16:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:36:35.578 16:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3320647 /var/tmp/bperf.sock 00:36:35.578 16:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3320647 ']' 00:36:35.578 16:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:35.578 16:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:35.578 16:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:35.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:36:35.578 16:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:35.578 16:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:35.578 [2024-09-29 16:44:35.845902] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:36:35.578 [2024-09-29 16:44:35.846090] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3320647 ] 00:36:35.578 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:35.578 Zero copy mechanism will not be used. 00:36:35.578 [2024-09-29 16:44:35.970798] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:35.836 [2024-09-29 16:44:36.199034] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:36.402 16:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:36.402 16:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:36:36.402 16:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:36.402 16:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:36.659 16:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:36.659 16:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.659 16:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:36:36.659 16:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.659 16:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:36.659 16:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:37.222 nvme0n1 00:36:37.222 16:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:36:37.222 16:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.222 16:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:37.222 16:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.222 16:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:37.222 16:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:37.222 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:37.222 Zero copy mechanism will not be used. 00:36:37.222 Running I/O for 2 seconds... 
00:36:37.480 [2024-09-29 16:44:37.795302] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.480 [2024-09-29 16:44:37.795386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.480 [2024-09-29 16:44:37.795421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:37.480 [2024-09-29 16:44:37.802747] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.480 [2024-09-29 16:44:37.802793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.480 [2024-09-29 16:44:37.802821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:37.480 [2024-09-29 16:44:37.810054] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.480 [2024-09-29 16:44:37.810104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.480 [2024-09-29 16:44:37.810133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:37.480 [2024-09-29 16:44:37.817346] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.480 [2024-09-29 16:44:37.817394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.480 [2024-09-29 16:44:37.817439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.480 [2024-09-29 16:44:37.824440] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.480 [2024-09-29 16:44:37.824489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.480 [2024-09-29 16:44:37.824528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:37.480 [2024-09-29 16:44:37.831849] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.480 [2024-09-29 16:44:37.831893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.480 [2024-09-29 16:44:37.831919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:37.480 [2024-09-29 16:44:37.838395] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.480 [2024-09-29 16:44:37.838451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.480 [2024-09-29 16:44:37.838481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:37.480 [2024-09-29 16:44:37.844198] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.480 [2024-09-29 16:44:37.844247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.480 
[2024-09-29 16:44:37.844278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.480 [2024-09-29 16:44:37.848628] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.480 [2024-09-29 16:44:37.848683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.480 [2024-09-29 16:44:37.848731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:37.480 [2024-09-29 16:44:37.854496] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.480 [2024-09-29 16:44:37.854544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.480 [2024-09-29 16:44:37.854572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:37.480 [2024-09-29 16:44:37.861904] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.480 [2024-09-29 16:44:37.861962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.480 [2024-09-29 16:44:37.861989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:37.480 [2024-09-29 16:44:37.868982] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.480 [2024-09-29 16:44:37.869044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 
nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.480 [2024-09-29 16:44:37.869079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.480 [2024-09-29 16:44:37.876155] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.480 [2024-09-29 16:44:37.876203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.480 [2024-09-29 16:44:37.876233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:37.480 [2024-09-29 16:44:37.883490] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.480 [2024-09-29 16:44:37.883538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.480 [2024-09-29 16:44:37.883568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:37.480 [2024-09-29 16:44:37.890849] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.480 [2024-09-29 16:44:37.890903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.480 [2024-09-29 16:44:37.890928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:37.480 [2024-09-29 16:44:37.898219] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.480 [2024-09-29 16:44:37.898266] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.480 [2024-09-29 16:44:37.898303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.480 [2024-09-29 16:44:37.905201] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.480 [2024-09-29 16:44:37.905249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.480 [2024-09-29 16:44:37.905279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:37.480 [2024-09-29 16:44:37.911293] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.480 [2024-09-29 16:44:37.911340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.480 [2024-09-29 16:44:37.911369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:37.480 [2024-09-29 16:44:37.916514] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.480 [2024-09-29 16:44:37.916560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.480 [2024-09-29 16:44:37.916589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:37.480 [2024-09-29 16:44:37.921826] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x6150001f2a00) 00:36:37.480 [2024-09-29 16:44:37.921869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.480 [2024-09-29 16:44:37.921895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.480 [2024-09-29 16:44:37.927495] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.480 [2024-09-29 16:44:37.927541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.480 [2024-09-29 16:44:37.927579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:37.480 [2024-09-29 16:44:37.931968] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.480 [2024-09-29 16:44:37.932035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.480 [2024-09-29 16:44:37.932064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:37.481 [2024-09-29 16:44:37.937058] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.481 [2024-09-29 16:44:37.937103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.481 [2024-09-29 16:44:37.937132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:37.481 [2024-09-29 16:44:37.944077] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.481 [2024-09-29 16:44:37.944124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.481 [2024-09-29 16:44:37.944153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.481 [2024-09-29 16:44:37.951123] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.481 [2024-09-29 16:44:37.951169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.481 [2024-09-29 16:44:37.951197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:37.481 [2024-09-29 16:44:37.958179] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.481 [2024-09-29 16:44:37.958228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.481 [2024-09-29 16:44:37.958258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:37.481 [2024-09-29 16:44:37.965322] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.481 [2024-09-29 16:44:37.965369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.481 [2024-09-29 16:44:37.965398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:37.481 [2024-09-29 16:44:37.972488] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.481 [2024-09-29 16:44:37.972534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.481 [2024-09-29 16:44:37.972563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.481 [2024-09-29 16:44:37.979766] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.481 [2024-09-29 16:44:37.979810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.481 [2024-09-29 16:44:37.979836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:37.481 [2024-09-29 16:44:37.986917] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.481 [2024-09-29 16:44:37.986961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.481 [2024-09-29 16:44:37.986988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:37.481 [2024-09-29 16:44:37.993921] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.481 [2024-09-29 16:44:37.993982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.481 [2024-09-29 16:44:37.994012] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:37.481 [2024-09-29 16:44:38.001069] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.481 [2024-09-29 16:44:38.001116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.481 [2024-09-29 16:44:38.001144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.481 [2024-09-29 16:44:38.008285] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.481 [2024-09-29 16:44:38.008332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.481 [2024-09-29 16:44:38.008360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:37.481 [2024-09-29 16:44:38.015174] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.481 [2024-09-29 16:44:38.015221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.481 [2024-09-29 16:44:38.015249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:37.481 [2024-09-29 16:44:38.019341] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.481 [2024-09-29 16:44:38.019386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:928 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:37.481 [2024-09-29 16:44:38.019414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:37.481 [2024-09-29 16:44:38.025909] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.481 [2024-09-29 16:44:38.025968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.481 [2024-09-29 16:44:38.025998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.481 [2024-09-29 16:44:38.032600] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.481 [2024-09-29 16:44:38.032646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.481 [2024-09-29 16:44:38.032686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:37.481 [2024-09-29 16:44:38.039535] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.481 [2024-09-29 16:44:38.039583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.481 [2024-09-29 16:44:38.039621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:37.740 [2024-09-29 16:44:38.046712] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.740 [2024-09-29 16:44:38.046755] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.740 [2024-09-29 16:44:38.046782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:37.740 [2024-09-29 16:44:38.053852] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.740 [2024-09-29 16:44:38.053894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.740 [2024-09-29 16:44:38.053921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.740 [2024-09-29 16:44:38.061000] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.740 [2024-09-29 16:44:38.061047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.740 [2024-09-29 16:44:38.061076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:37.740 [2024-09-29 16:44:38.068100] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.740 [2024-09-29 16:44:38.068147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.740 [2024-09-29 16:44:38.068177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:37.740 [2024-09-29 16:44:38.075195] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:37.740 [2024-09-29 16:44:38.075243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.740 [2024-09-29 16:44:38.075271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:37.740 [2024-09-29 16:44:38.082359] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.740 [2024-09-29 16:44:38.082405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.740 [2024-09-29 16:44:38.082433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.740 [2024-09-29 16:44:38.089376] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.740 [2024-09-29 16:44:38.089423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.740 [2024-09-29 16:44:38.089452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:37.740 [2024-09-29 16:44:38.096312] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.740 [2024-09-29 16:44:38.096359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.740 [2024-09-29 16:44:38.096387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:37.740 [2024-09-29 16:44:38.103511] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.740 [2024-09-29 16:44:38.103557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.740 [2024-09-29 16:44:38.103585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:37.740 [2024-09-29 16:44:38.110936] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.740 [2024-09-29 16:44:38.110979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.740 [2024-09-29 16:44:38.111024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.740 [2024-09-29 16:44:38.118031] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.740 [2024-09-29 16:44:38.118077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.740 [2024-09-29 16:44:38.118105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:37.740 [2024-09-29 16:44:38.125187] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.740 [2024-09-29 16:44:38.125234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.740 [2024-09-29 16:44:38.125264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:37.740 [2024-09-29 16:44:38.132078] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.740 [2024-09-29 16:44:38.132124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.740 [2024-09-29 16:44:38.132153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:37.740 [2024-09-29 16:44:38.139197] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.740 [2024-09-29 16:44:38.139244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.740 [2024-09-29 16:44:38.139294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.740 [2024-09-29 16:44:38.146430] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.740 [2024-09-29 16:44:38.146476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.740 [2024-09-29 16:44:38.146505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:37.740 [2024-09-29 16:44:38.153775] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.740 [2024-09-29 16:44:38.153817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.740 [2024-09-29 16:44:38.153844] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:37.740 [2024-09-29 16:44:38.160869] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.740 [2024-09-29 16:44:38.160917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.740 [2024-09-29 16:44:38.160954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:37.740 [2024-09-29 16:44:38.168045] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.740 [2024-09-29 16:44:38.168092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.740 [2024-09-29 16:44:38.168121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.740 [2024-09-29 16:44:38.175326] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.740 [2024-09-29 16:44:38.175372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.740 [2024-09-29 16:44:38.175400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:37.740 [2024-09-29 16:44:38.182599] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.740 [2024-09-29 16:44:38.182646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:37.740 [2024-09-29 16:44:38.182683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:37.740 [2024-09-29 16:44:38.189643] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.740 [2024-09-29 16:44:38.189698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.740 [2024-09-29 16:44:38.189741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:37.741 [2024-09-29 16:44:38.196809] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.741 [2024-09-29 16:44:38.196853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.741 [2024-09-29 16:44:38.196878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.741 [2024-09-29 16:44:38.203881] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.741 [2024-09-29 16:44:38.203924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.741 [2024-09-29 16:44:38.203949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:37.741 [2024-09-29 16:44:38.210863] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.741 [2024-09-29 16:44:38.210905] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.741 [2024-09-29 16:44:38.210931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:37.741 [2024-09-29 16:44:38.217999] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.741 [2024-09-29 16:44:38.218047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.741 [2024-09-29 16:44:38.218076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:37.741 [2024-09-29 16:44:38.225257] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.741 [2024-09-29 16:44:38.225314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.741 [2024-09-29 16:44:38.225345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.741 [2024-09-29 16:44:38.232391] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.741 [2024-09-29 16:44:38.232437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.741 [2024-09-29 16:44:38.232465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:37.741 [2024-09-29 16:44:38.239416] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:37.741 [2024-09-29 16:44:38.239463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.741 [2024-09-29 16:44:38.239491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:37.741 [2024-09-29 16:44:38.246313] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.741 [2024-09-29 16:44:38.246366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.741 [2024-09-29 16:44:38.246395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:37.741 [2024-09-29 16:44:38.253497] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.741 [2024-09-29 16:44:38.253543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.741 [2024-09-29 16:44:38.253572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.741 [2024-09-29 16:44:38.260639] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.741 [2024-09-29 16:44:38.260720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.741 [2024-09-29 16:44:38.260752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:37.741 [2024-09-29 16:44:38.268252] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.741 [2024-09-29 16:44:38.268296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.741 [2024-09-29 16:44:38.268322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:37.741 [2024-09-29 16:44:38.274839] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.741 [2024-09-29 16:44:38.274885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.741 [2024-09-29 16:44:38.274911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:37.741 [2024-09-29 16:44:38.281659] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.741 [2024-09-29 16:44:38.281712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.741 [2024-09-29 16:44:38.281755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.741 [2024-09-29 16:44:38.288366] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.741 [2024-09-29 16:44:38.288409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.741 [2024-09-29 16:44:38.288434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:37.741 [2024-09-29 16:44:38.295347] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.741 [2024-09-29 16:44:38.295392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.741 [2024-09-29 16:44:38.295418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.099 [2024-09-29 16:44:38.302505] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.099 [2024-09-29 16:44:38.302559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.099 [2024-09-29 16:44:38.302588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.099 [2024-09-29 16:44:38.309827] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.099 [2024-09-29 16:44:38.309876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.099 [2024-09-29 16:44:38.309902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.099 [2024-09-29 16:44:38.314120] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.099 [2024-09-29 16:44:38.314164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.099 [2024-09-29 16:44:38.314191] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.099 [2024-09-29 16:44:38.320640] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.099 [2024-09-29 16:44:38.320693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.099 [2024-09-29 16:44:38.320731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.099 [2024-09-29 16:44:38.326479] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.099 [2024-09-29 16:44:38.326523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.099 [2024-09-29 16:44:38.326549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.099 [2024-09-29 16:44:38.331401] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.099 [2024-09-29 16:44:38.331442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.099 [2024-09-29 16:44:38.331468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.099 [2024-09-29 16:44:38.336215] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.099 [2024-09-29 16:44:38.336267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:38.099 [2024-09-29 16:44:38.336294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.099 [2024-09-29 16:44:38.342783] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.099 [2024-09-29 16:44:38.342826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.099 [2024-09-29 16:44:38.342852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.099 [2024-09-29 16:44:38.349466] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.099 [2024-09-29 16:44:38.349523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.099 [2024-09-29 16:44:38.349550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.099 [2024-09-29 16:44:38.356236] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.099 [2024-09-29 16:44:38.356279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.099 [2024-09-29 16:44:38.356306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.099 [2024-09-29 16:44:38.363022] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.099 [2024-09-29 16:44:38.363079] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.099 [2024-09-29 16:44:38.363104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.099 [2024-09-29 16:44:38.369942] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.099 [2024-09-29 16:44:38.370005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.099 [2024-09-29 16:44:38.370030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.099 [2024-09-29 16:44:38.376854] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.099 [2024-09-29 16:44:38.376914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.099 [2024-09-29 16:44:38.376941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.099 [2024-09-29 16:44:38.383619] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.099 [2024-09-29 16:44:38.383684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.099 [2024-09-29 16:44:38.383713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.099 [2024-09-29 16:44:38.390335] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x6150001f2a00) 00:36:38.099 [2024-09-29 16:44:38.390377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.099 [2024-09-29 16:44:38.390429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.099 [2024-09-29 16:44:38.396898] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.099 [2024-09-29 16:44:38.396941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.099 [2024-09-29 16:44:38.396977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.099 [2024-09-29 16:44:38.403342] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.099 [2024-09-29 16:44:38.403387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.099 [2024-09-29 16:44:38.403412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.099 [2024-09-29 16:44:38.409994] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.099 [2024-09-29 16:44:38.410039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.099 [2024-09-29 16:44:38.410065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.099 [2024-09-29 16:44:38.416556] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.099 [2024-09-29 16:44:38.416615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.099 [2024-09-29 16:44:38.416641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.099 [2024-09-29 16:44:38.423330] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.099 [2024-09-29 16:44:38.423393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.099 [2024-09-29 16:44:38.423420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.099 [2024-09-29 16:44:38.430151] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.099 [2024-09-29 16:44:38.430211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.099 [2024-09-29 16:44:38.430238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.099 [2024-09-29 16:44:38.436945] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.099 [2024-09-29 16:44:38.436989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.100 [2024-09-29 16:44:38.437017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.100 [2024-09-29 16:44:38.443611] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.100 [2024-09-29 16:44:38.443677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.100 [2024-09-29 16:44:38.443707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.100 [2024-09-29 16:44:38.450166] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.100 [2024-09-29 16:44:38.450220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.100 [2024-09-29 16:44:38.450247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.100 [2024-09-29 16:44:38.456236] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.100 [2024-09-29 16:44:38.456279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.100 [2024-09-29 16:44:38.456304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.100 [2024-09-29 16:44:38.460192] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.100 [2024-09-29 16:44:38.460233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.100 [2024-09-29 16:44:38.460259] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.100 [2024-09-29 16:44:38.465948] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.100 [2024-09-29 16:44:38.465990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.100 [2024-09-29 16:44:38.466016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.100 [2024-09-29 16:44:38.471391] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.100 [2024-09-29 16:44:38.471434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.100 [2024-09-29 16:44:38.471460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.100 [2024-09-29 16:44:38.475378] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.100 [2024-09-29 16:44:38.475419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.100 [2024-09-29 16:44:38.475444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.100 [2024-09-29 16:44:38.482166] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.100 [2024-09-29 16:44:38.482211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:38.100 [2024-09-29 16:44:38.482238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.100 [2024-09-29 16:44:38.490147] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.100 [2024-09-29 16:44:38.490209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.100 [2024-09-29 16:44:38.490238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.100 [2024-09-29 16:44:38.498208] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.100 [2024-09-29 16:44:38.498268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.100 [2024-09-29 16:44:38.498310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.100 [2024-09-29 16:44:38.505865] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.100 [2024-09-29 16:44:38.505911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.100 [2024-09-29 16:44:38.505939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.100 [2024-09-29 16:44:38.513739] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.100 [2024-09-29 16:44:38.513800] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.100 [2024-09-29 16:44:38.513842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.100 [2024-09-29 16:44:38.522681] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.100 [2024-09-29 16:44:38.522728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.100 [2024-09-29 16:44:38.522754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.100 [2024-09-29 16:44:38.531326] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.100 [2024-09-29 16:44:38.531372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.100 [2024-09-29 16:44:38.531400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.100 [2024-09-29 16:44:38.540710] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.100 [2024-09-29 16:44:38.540757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.100 [2024-09-29 16:44:38.540784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.100 [2024-09-29 16:44:38.550507] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:38.100 [2024-09-29 16:44:38.550553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.100 [2024-09-29 16:44:38.550580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.100 [2024-09-29 16:44:38.559546] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.100 [2024-09-29 16:44:38.559595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.100 [2024-09-29 16:44:38.559622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.100 [2024-09-29 16:44:38.568061] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.100 [2024-09-29 16:44:38.568108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.100 [2024-09-29 16:44:38.568135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.100 [2024-09-29 16:44:38.577391] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.100 [2024-09-29 16:44:38.577448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.100 [2024-09-29 16:44:38.577477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.100 [2024-09-29 16:44:38.585386] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.100 [2024-09-29 16:44:38.585448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.100 [2024-09-29 16:44:38.585475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.100 [2024-09-29 16:44:38.590886] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.100 [2024-09-29 16:44:38.590930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.100 [2024-09-29 16:44:38.590956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.100 [2024-09-29 16:44:38.596925] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.100 [2024-09-29 16:44:38.596983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.100 [2024-09-29 16:44:38.597010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.100 [2024-09-29 16:44:38.603516] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.100 [2024-09-29 16:44:38.603562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.100 [2024-09-29 16:44:38.603589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.100 [2024-09-29 16:44:38.610732] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.100 [2024-09-29 16:44:38.610777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.100 [2024-09-29 16:44:38.610803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.100 [2024-09-29 16:44:38.618034] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.100 [2024-09-29 16:44:38.618079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.100 [2024-09-29 16:44:38.618106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.100 [2024-09-29 16:44:38.622917] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.100 [2024-09-29 16:44:38.622960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.100 [2024-09-29 16:44:38.622986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.100 [2024-09-29 16:44:38.628802] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.100 [2024-09-29 16:44:38.628845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.101 [2024-09-29 16:44:38.628872] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.101 [2024-09-29 16:44:38.636987] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.101 [2024-09-29 16:44:38.637053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.101 [2024-09-29 16:44:38.637082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.383 [2024-09-29 16:44:38.645746] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.383 [2024-09-29 16:44:38.645800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.383 [2024-09-29 16:44:38.645829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.383 [2024-09-29 16:44:38.653945] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.383 [2024-09-29 16:44:38.654010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.383 [2024-09-29 16:44:38.654053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.383 [2024-09-29 16:44:38.661587] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.383 [2024-09-29 16:44:38.661647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:38.383 [2024-09-29 16:44:38.661682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.383 [2024-09-29 16:44:38.669294] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.383 [2024-09-29 16:44:38.669343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.383 [2024-09-29 16:44:38.669371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.383 [2024-09-29 16:44:38.677919] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.383 [2024-09-29 16:44:38.677971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.383 [2024-09-29 16:44:38.678015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.383 [2024-09-29 16:44:38.685379] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.383 [2024-09-29 16:44:38.685423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.383 [2024-09-29 16:44:38.685466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.383 [2024-09-29 16:44:38.692998] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.383 [2024-09-29 16:44:38.693055] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.383 [2024-09-29 16:44:38.693082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.383 [2024-09-29 16:44:38.700730] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.383 [2024-09-29 16:44:38.700789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.383 [2024-09-29 16:44:38.700818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.383 [2024-09-29 16:44:38.708378] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.383 [2024-09-29 16:44:38.708435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.383 [2024-09-29 16:44:38.708461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.383 [2024-09-29 16:44:38.715758] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.383 [2024-09-29 16:44:38.715815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.383 [2024-09-29 16:44:38.715843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.383 [2024-09-29 16:44:38.723132] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:38.383 [2024-09-29 16:44:38.723189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.383 [2024-09-29 16:44:38.723216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.383 [2024-09-29 16:44:38.730735] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.383 [2024-09-29 16:44:38.730799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.383 [2024-09-29 16:44:38.730825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.383 [2024-09-29 16:44:38.738358] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.383 [2024-09-29 16:44:38.738416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.383 [2024-09-29 16:44:38.738444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.383 [2024-09-29 16:44:38.746285] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.383 [2024-09-29 16:44:38.746344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.383 [2024-09-29 16:44:38.746371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.383 [2024-09-29 16:44:38.753617] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.383 [2024-09-29 16:44:38.753682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.383 [2024-09-29 16:44:38.753712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.383 [2024-09-29 16:44:38.760910] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.383 [2024-09-29 16:44:38.760956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.383 [2024-09-29 16:44:38.760983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.383 [2024-09-29 16:44:38.768372] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.383 [2024-09-29 16:44:38.768417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.383 [2024-09-29 16:44:38.768444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.383 [2024-09-29 16:44:38.775638] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.383 [2024-09-29 16:44:38.775691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.383 [2024-09-29 16:44:38.775719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.383 [2024-09-29 16:44:38.783058] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.383 [2024-09-29 16:44:38.783103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.383 [2024-09-29 16:44:38.783130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.383 4446.00 IOPS, 555.75 MiB/s [2024-09-29 16:44:38.791138] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.383 [2024-09-29 16:44:38.791183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.383 [2024-09-29 16:44:38.791209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.383 [2024-09-29 16:44:38.798494] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.383 [2024-09-29 16:44:38.798539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.383 [2024-09-29 16:44:38.798566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.383 [2024-09-29 16:44:38.805994] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.383 [2024-09-29 16:44:38.806039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.383 [2024-09-29 16:44:38.806065] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.383 [2024-09-29 16:44:38.813479] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.383 [2024-09-29 16:44:38.813524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.383 [2024-09-29 16:44:38.813551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.383 [2024-09-29 16:44:38.820638] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.383 [2024-09-29 16:44:38.820692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.383 [2024-09-29 16:44:38.820722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.383 [2024-09-29 16:44:38.824859] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.383 [2024-09-29 16:44:38.824912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.383 [2024-09-29 16:44:38.824941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.383 [2024-09-29 16:44:38.831949] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.383 [2024-09-29 16:44:38.832006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9600 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.384 [2024-09-29 16:44:38.832032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.384 [2024-09-29 16:44:38.838995] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.384 [2024-09-29 16:44:38.839037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.384 [2024-09-29 16:44:38.839079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.384 [2024-09-29 16:44:38.845990] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.384 [2024-09-29 16:44:38.846036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.384 [2024-09-29 16:44:38.846063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.384 [2024-09-29 16:44:38.853087] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.384 [2024-09-29 16:44:38.853131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.384 [2024-09-29 16:44:38.853158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.384 [2024-09-29 16:44:38.860616] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.384 [2024-09-29 16:44:38.860682] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.384 [2024-09-29 16:44:38.860711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.384 [2024-09-29 16:44:38.867527] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.384 [2024-09-29 16:44:38.867585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.384 [2024-09-29 16:44:38.867610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.384 [2024-09-29 16:44:38.874265] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.384 [2024-09-29 16:44:38.874323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.384 [2024-09-29 16:44:38.874350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.384 [2024-09-29 16:44:38.880618] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.384 [2024-09-29 16:44:38.880661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.384 [2024-09-29 16:44:38.880697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.384 [2024-09-29 16:44:38.887413] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:38.384 [2024-09-29 16:44:38.887457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.384 [2024-09-29 16:44:38.887484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.384 [2024-09-29 16:44:38.891650] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.384 [2024-09-29 16:44:38.891701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.384 [2024-09-29 16:44:38.891728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.384 [2024-09-29 16:44:38.897861] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.384 [2024-09-29 16:44:38.897904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.384 [2024-09-29 16:44:38.897930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.384 [2024-09-29 16:44:38.904242] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.384 [2024-09-29 16:44:38.904285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.384 [2024-09-29 16:44:38.904311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.384 [2024-09-29 16:44:38.908944] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.384 [2024-09-29 16:44:38.908985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.384 [2024-09-29 16:44:38.909011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.384 [2024-09-29 16:44:38.914008] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.384 [2024-09-29 16:44:38.914050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.384 [2024-09-29 16:44:38.914076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.384 [2024-09-29 16:44:38.920083] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.384 [2024-09-29 16:44:38.920125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.384 [2024-09-29 16:44:38.920150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.384 [2024-09-29 16:44:38.924448] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.384 [2024-09-29 16:44:38.924490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.384 [2024-09-29 16:44:38.924516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.384 [2024-09-29 16:44:38.929451] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.384 [2024-09-29 16:44:38.929495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.384 [2024-09-29 16:44:38.929530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.384 [2024-09-29 16:44:38.935316] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.384 [2024-09-29 16:44:38.935359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.384 [2024-09-29 16:44:38.935385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.384 [2024-09-29 16:44:38.941597] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.384 [2024-09-29 16:44:38.941644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.384 [2024-09-29 16:44:38.941681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.641 [2024-09-29 16:44:38.945949] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.641 [2024-09-29 16:44:38.945993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.641 [2024-09-29 16:44:38.946020] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.641 [2024-09-29 16:44:38.952402] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.641 [2024-09-29 16:44:38.952445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.641 [2024-09-29 16:44:38.952486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.641 [2024-09-29 16:44:38.959125] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.641 [2024-09-29 16:44:38.959182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.641 [2024-09-29 16:44:38.959208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.641 [2024-09-29 16:44:38.965884] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.641 [2024-09-29 16:44:38.965954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.642 [2024-09-29 16:44:38.965979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.642 [2024-09-29 16:44:38.972607] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.642 [2024-09-29 16:44:38.972664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:38.642 [2024-09-29 16:44:38.972703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.642 [2024-09-29 16:44:38.979308] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.642 [2024-09-29 16:44:38.979364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.642 [2024-09-29 16:44:38.979390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.642 [2024-09-29 16:44:38.986216] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.642 [2024-09-29 16:44:38.986274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.642 [2024-09-29 16:44:38.986300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.642 [2024-09-29 16:44:38.992778] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.642 [2024-09-29 16:44:38.992836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.642 [2024-09-29 16:44:38.992864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.642 [2024-09-29 16:44:38.999402] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.642 [2024-09-29 16:44:38.999456] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.642 [2024-09-29 16:44:38.999481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.642 [2024-09-29 16:44:39.005950] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.642 [2024-09-29 16:44:39.006007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.642 [2024-09-29 16:44:39.006032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.642 [2024-09-29 16:44:39.012509] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.642 [2024-09-29 16:44:39.012565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.642 [2024-09-29 16:44:39.012590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.642 [2024-09-29 16:44:39.019368] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.642 [2024-09-29 16:44:39.019424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.642 [2024-09-29 16:44:39.019470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.642 [2024-09-29 16:44:39.026088] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:38.642 [2024-09-29 16:44:39.026141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.642 [2024-09-29 16:44:39.026168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.642 [2024-09-29 16:44:39.032424] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.642 [2024-09-29 16:44:39.032480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.642 [2024-09-29 16:44:39.032505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.642 [2024-09-29 16:44:39.039229] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.642 [2024-09-29 16:44:39.039288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.642 [2024-09-29 16:44:39.039338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.642 [2024-09-29 16:44:39.045872] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.642 [2024-09-29 16:44:39.045930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.642 [2024-09-29 16:44:39.045957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.642 [2024-09-29 16:44:39.052428] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.642 [2024-09-29 16:44:39.052483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.642 [2024-09-29 16:44:39.052509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.642 [2024-09-29 16:44:39.059167] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.642 [2024-09-29 16:44:39.059224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.642 [2024-09-29 16:44:39.059249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.642 [2024-09-29 16:44:39.065812] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.642 [2024-09-29 16:44:39.065870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.642 [2024-09-29 16:44:39.065897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.642 [2024-09-29 16:44:39.072504] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.642 [2024-09-29 16:44:39.072572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.642 [2024-09-29 16:44:39.072600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.642 [2024-09-29 16:44:39.079362] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.642 [2024-09-29 16:44:39.079421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.642 [2024-09-29 16:44:39.079448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.642 [2024-09-29 16:44:39.086327] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.642 [2024-09-29 16:44:39.086386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.642 [2024-09-29 16:44:39.086411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.642 [2024-09-29 16:44:39.093331] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.642 [2024-09-29 16:44:39.093389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.642 [2024-09-29 16:44:39.093415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.642 [2024-09-29 16:44:39.100087] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.642 [2024-09-29 16:44:39.100143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.642 [2024-09-29 16:44:39.100169] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.642 [2024-09-29 16:44:39.106708] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.642 [2024-09-29 16:44:39.106777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.642 [2024-09-29 16:44:39.106811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.642 [2024-09-29 16:44:39.113447] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.642 [2024-09-29 16:44:39.113501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.642 [2024-09-29 16:44:39.113527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.642 [2024-09-29 16:44:39.120130] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.642 [2024-09-29 16:44:39.120186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.642 [2024-09-29 16:44:39.120212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.642 [2024-09-29 16:44:39.126809] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.642 [2024-09-29 16:44:39.126867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:38.642 [2024-09-29 16:44:39.126894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.642 [2024-09-29 16:44:39.133533] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.642 [2024-09-29 16:44:39.133590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.642 [2024-09-29 16:44:39.133616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.642 [2024-09-29 16:44:39.140155] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.642 [2024-09-29 16:44:39.140215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.642 [2024-09-29 16:44:39.140243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.642 [2024-09-29 16:44:39.147591] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.642 [2024-09-29 16:44:39.147656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.642 [2024-09-29 16:44:39.147691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.642 [2024-09-29 16:44:39.154456] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.642 [2024-09-29 16:44:39.154515] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.642 [2024-09-29 16:44:39.154567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.642 [2024-09-29 16:44:39.161337] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.642 [2024-09-29 16:44:39.161395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.642 [2024-09-29 16:44:39.161422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.642 [2024-09-29 16:44:39.167648] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.642 [2024-09-29 16:44:39.167700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.642 [2024-09-29 16:44:39.167734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.642 [2024-09-29 16:44:39.172414] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.642 [2024-09-29 16:44:39.172457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.642 [2024-09-29 16:44:39.172483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.642 [2024-09-29 16:44:39.176391] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:38.642 [2024-09-29 16:44:39.176432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.642 [2024-09-29 16:44:39.176458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.642 [2024-09-29 16:44:39.181224] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.642 [2024-09-29 16:44:39.181267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.642 [2024-09-29 16:44:39.181292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.642 [2024-09-29 16:44:39.185539] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.642 [2024-09-29 16:44:39.185589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.642 [2024-09-29 16:44:39.185624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.642 [2024-09-29 16:44:39.189625] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.642 [2024-09-29 16:44:39.189667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.642 [2024-09-29 16:44:39.189705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.642 [2024-09-29 16:44:39.194443] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.642 [2024-09-29 16:44:39.194483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.642 [2024-09-29 16:44:39.194510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.642 [2024-09-29 16:44:39.198849] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.642 [2024-09-29 16:44:39.198901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.642 [2024-09-29 16:44:39.198928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.642 [2024-09-29 16:44:39.203058] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.642 [2024-09-29 16:44:39.203100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.642 [2024-09-29 16:44:39.203125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.902 [2024-09-29 16:44:39.208281] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.902 [2024-09-29 16:44:39.208339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.902 [2024-09-29 16:44:39.208381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.902 [2024-09-29 16:44:39.214936] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.902 [2024-09-29 16:44:39.214995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.902 [2024-09-29 16:44:39.215022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.902 [2024-09-29 16:44:39.221634] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.902 [2024-09-29 16:44:39.221697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.902 [2024-09-29 16:44:39.221726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.902 [2024-09-29 16:44:39.228446] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.902 [2024-09-29 16:44:39.228502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.902 [2024-09-29 16:44:39.228528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.902 [2024-09-29 16:44:39.235039] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.902 [2024-09-29 16:44:39.235080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.902 [2024-09-29 16:44:39.235120] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.902 [2024-09-29 16:44:39.241308] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.902 [2024-09-29 16:44:39.241352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.902 [2024-09-29 16:44:39.241379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.902 [2024-09-29 16:44:39.247992] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.903 [2024-09-29 16:44:39.248037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.903 [2024-09-29 16:44:39.248074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.903 [2024-09-29 16:44:39.255246] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.903 [2024-09-29 16:44:39.255291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.903 [2024-09-29 16:44:39.255318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.903 [2024-09-29 16:44:39.262451] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.903 [2024-09-29 16:44:39.262497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6048 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:36:38.903 [2024-09-29 16:44:39.262524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.903 [2024-09-29 16:44:39.269118] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.903 [2024-09-29 16:44:39.269162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.903 [2024-09-29 16:44:39.269189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.903 [2024-09-29 16:44:39.275658] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.903 [2024-09-29 16:44:39.275709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.903 [2024-09-29 16:44:39.275735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.903 [2024-09-29 16:44:39.282331] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.903 [2024-09-29 16:44:39.282375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.903 [2024-09-29 16:44:39.282401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.903 [2024-09-29 16:44:39.289079] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.903 [2024-09-29 16:44:39.289140] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.903 [2024-09-29 16:44:39.289167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.903 [2024-09-29 16:44:39.295702] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.903 [2024-09-29 16:44:39.295760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.903 [2024-09-29 16:44:39.295786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.903 [2024-09-29 16:44:39.302264] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.903 [2024-09-29 16:44:39.302322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.903 [2024-09-29 16:44:39.302350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.903 [2024-09-29 16:44:39.308963] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.903 [2024-09-29 16:44:39.309016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.903 [2024-09-29 16:44:39.309044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.903 [2024-09-29 16:44:39.315604] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:38.903 [2024-09-29 16:44:39.315649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.903 [2024-09-29 16:44:39.315681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.903 [2024-09-29 16:44:39.322237] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.903 [2024-09-29 16:44:39.322279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.903 [2024-09-29 16:44:39.322305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.903 [2024-09-29 16:44:39.328278] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.903 [2024-09-29 16:44:39.328323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.903 [2024-09-29 16:44:39.328350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.903 [2024-09-29 16:44:39.331916] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.903 [2024-09-29 16:44:39.331958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.903 [2024-09-29 16:44:39.331984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.903 [2024-09-29 16:44:39.337419] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.903 [2024-09-29 16:44:39.337462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.903 [2024-09-29 16:44:39.337489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.903 [2024-09-29 16:44:39.344093] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.903 [2024-09-29 16:44:39.344151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.903 [2024-09-29 16:44:39.344178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.903 [2024-09-29 16:44:39.350831] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.903 [2024-09-29 16:44:39.350891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.903 [2024-09-29 16:44:39.350916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.903 [2024-09-29 16:44:39.357610] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.903 [2024-09-29 16:44:39.357666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.903 [2024-09-29 16:44:39.357705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.903 [2024-09-29 16:44:39.364764] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.903 [2024-09-29 16:44:39.364810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.903 [2024-09-29 16:44:39.364835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.903 [2024-09-29 16:44:39.372753] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.903 [2024-09-29 16:44:39.372799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.903 [2024-09-29 16:44:39.372826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.903 [2024-09-29 16:44:39.377723] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.903 [2024-09-29 16:44:39.377766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.903 [2024-09-29 16:44:39.377792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.903 [2024-09-29 16:44:39.386334] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.903 [2024-09-29 16:44:39.386393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.903 [2024-09-29 16:44:39.386421] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.903 [2024-09-29 16:44:39.395463] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.903 [2024-09-29 16:44:39.395511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.903 [2024-09-29 16:44:39.395539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.903 [2024-09-29 16:44:39.402139] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.903 [2024-09-29 16:44:39.402183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.903 [2024-09-29 16:44:39.402210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.903 [2024-09-29 16:44:39.409849] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.903 [2024-09-29 16:44:39.409894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.903 [2024-09-29 16:44:39.409921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.903 [2024-09-29 16:44:39.418137] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.904 [2024-09-29 16:44:39.418195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:38.904 [2024-09-29 16:44:39.418223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.904 [2024-09-29 16:44:39.425947] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.904 [2024-09-29 16:44:39.426010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.904 [2024-09-29 16:44:39.426038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.904 [2024-09-29 16:44:39.433633] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.904 [2024-09-29 16:44:39.433687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.904 [2024-09-29 16:44:39.433722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.904 [2024-09-29 16:44:39.442259] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.904 [2024-09-29 16:44:39.442315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.904 [2024-09-29 16:44:39.442341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.904 [2024-09-29 16:44:39.450961] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.904 [2024-09-29 16:44:39.451006] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.904 [2024-09-29 16:44:39.451033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.904 [2024-09-29 16:44:39.459616] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.904 [2024-09-29 16:44:39.459663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.904 [2024-09-29 16:44:39.459698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:39.162 [2024-09-29 16:44:39.468298] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.162 [2024-09-29 16:44:39.468344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.162 [2024-09-29 16:44:39.468371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.162 [2024-09-29 16:44:39.476621] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.162 [2024-09-29 16:44:39.476666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.162 [2024-09-29 16:44:39.476701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:39.162 [2024-09-29 16:44:39.485225] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:39.162 [2024-09-29 16:44:39.485270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.162 [2024-09-29 16:44:39.485296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:39.162 [2024-09-29 16:44:39.493966] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.162 [2024-09-29 16:44:39.494028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.162 [2024-09-29 16:44:39.494053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:39.162 [2024-09-29 16:44:39.502548] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.162 [2024-09-29 16:44:39.502595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.162 [2024-09-29 16:44:39.502621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.162 [2024-09-29 16:44:39.511016] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.162 [2024-09-29 16:44:39.511074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.162 [2024-09-29 16:44:39.511101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:39.162 [2024-09-29 16:44:39.517018] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.162 [2024-09-29 16:44:39.517063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.162 [2024-09-29 16:44:39.517090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:39.162 [2024-09-29 16:44:39.524368] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.162 [2024-09-29 16:44:39.524427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.162 [2024-09-29 16:44:39.524454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:39.162 [2024-09-29 16:44:39.533262] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.162 [2024-09-29 16:44:39.533323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.162 [2024-09-29 16:44:39.533350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.162 [2024-09-29 16:44:39.540478] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.162 [2024-09-29 16:44:39.540537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.162 [2024-09-29 16:44:39.540562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:39.162 [2024-09-29 16:44:39.545614] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.162 [2024-09-29 16:44:39.545657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.162 [2024-09-29 16:44:39.545691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:39.162 [2024-09-29 16:44:39.552006] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.162 [2024-09-29 16:44:39.552055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.162 [2024-09-29 16:44:39.552084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:39.162 [2024-09-29 16:44:39.558855] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.162 [2024-09-29 16:44:39.558907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.162 [2024-09-29 16:44:39.558935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.162 [2024-09-29 16:44:39.565064] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.162 [2024-09-29 16:44:39.565110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.162 [2024-09-29 16:44:39.565139] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:39.162 [2024-09-29 16:44:39.569294] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.162 [2024-09-29 16:44:39.569341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.162 [2024-09-29 16:44:39.569370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:39.162 [2024-09-29 16:44:39.575741] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.162 [2024-09-29 16:44:39.575783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.162 [2024-09-29 16:44:39.575809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:39.162 [2024-09-29 16:44:39.581697] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.162 [2024-09-29 16:44:39.581755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.162 [2024-09-29 16:44:39.581780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.162 [2024-09-29 16:44:39.585886] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.162 [2024-09-29 16:44:39.585934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20768 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:36:39.162 [2024-09-29 16:44:39.585977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:39.162 [2024-09-29 16:44:39.592502] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.162 [2024-09-29 16:44:39.592548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.162 [2024-09-29 16:44:39.592576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:39.162 [2024-09-29 16:44:39.598868] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.162 [2024-09-29 16:44:39.598909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.162 [2024-09-29 16:44:39.598935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:39.162 [2024-09-29 16:44:39.603199] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.162 [2024-09-29 16:44:39.603244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.163 [2024-09-29 16:44:39.603272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.163 [2024-09-29 16:44:39.610100] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.163 [2024-09-29 16:44:39.610146] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.163 [2024-09-29 16:44:39.610175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:39.163 [2024-09-29 16:44:39.617022] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.163 [2024-09-29 16:44:39.617069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.163 [2024-09-29 16:44:39.617098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:39.163 [2024-09-29 16:44:39.624294] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.163 [2024-09-29 16:44:39.624342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.163 [2024-09-29 16:44:39.624371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:39.163 [2024-09-29 16:44:39.630289] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.163 [2024-09-29 16:44:39.630345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.163 [2024-09-29 16:44:39.630378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.163 [2024-09-29 16:44:39.634401] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:39.163 [2024-09-29 16:44:39.634447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.163 [2024-09-29 16:44:39.634477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:39.163 [2024-09-29 16:44:39.640793] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.163 [2024-09-29 16:44:39.640839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.163 [2024-09-29 16:44:39.640866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:39.163 [2024-09-29 16:44:39.646765] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.163 [2024-09-29 16:44:39.646806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.163 [2024-09-29 16:44:39.646832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:39.163 [2024-09-29 16:44:39.651180] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.163 [2024-09-29 16:44:39.651224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.163 [2024-09-29 16:44:39.651252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.163 [2024-09-29 16:44:39.657335] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.163 [2024-09-29 16:44:39.657382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.163 [2024-09-29 16:44:39.657419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:39.163 [2024-09-29 16:44:39.664481] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.163 [2024-09-29 16:44:39.664528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.163 [2024-09-29 16:44:39.664556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:39.163 [2024-09-29 16:44:39.671490] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.163 [2024-09-29 16:44:39.671535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.163 [2024-09-29 16:44:39.671564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:39.163 [2024-09-29 16:44:39.678584] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.163 [2024-09-29 16:44:39.678629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.163 [2024-09-29 16:44:39.678657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.163 [2024-09-29 16:44:39.685725] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.163 [2024-09-29 16:44:39.685766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.163 [2024-09-29 16:44:39.685806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:39.163 [2024-09-29 16:44:39.692925] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.163 [2024-09-29 16:44:39.692966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.163 [2024-09-29 16:44:39.693007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:39.163 [2024-09-29 16:44:39.700070] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.163 [2024-09-29 16:44:39.700117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.163 [2024-09-29 16:44:39.700145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:39.163 [2024-09-29 16:44:39.707052] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.163 [2024-09-29 16:44:39.707100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.163 [2024-09-29 16:44:39.707128] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.163 [2024-09-29 16:44:39.714246] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.163 [2024-09-29 16:44:39.714293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.163 [2024-09-29 16:44:39.714321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:39.163 [2024-09-29 16:44:39.721408] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.163 [2024-09-29 16:44:39.721455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.163 [2024-09-29 16:44:39.721485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:39.420 [2024-09-29 16:44:39.728603] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.420 [2024-09-29 16:44:39.728650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.420 [2024-09-29 16:44:39.728691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:39.420 [2024-09-29 16:44:39.735728] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.420 [2024-09-29 16:44:39.735783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3200 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:36:39.420 [2024-09-29 16:44:39.735810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.420 [2024-09-29 16:44:39.743061] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.420 [2024-09-29 16:44:39.743107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.420 [2024-09-29 16:44:39.743135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:39.420 [2024-09-29 16:44:39.750152] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.420 [2024-09-29 16:44:39.750199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.420 [2024-09-29 16:44:39.750227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:39.420 [2024-09-29 16:44:39.757339] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.420 [2024-09-29 16:44:39.757386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.420 [2024-09-29 16:44:39.757414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:39.420 [2024-09-29 16:44:39.764324] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.420 [2024-09-29 16:44:39.764371] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.420 [2024-09-29 16:44:39.764400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:39.420 [2024-09-29 16:44:39.771533] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.420 [2024-09-29 16:44:39.771579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.420 [2024-09-29 16:44:39.771608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:39.420 [2024-09-29 16:44:39.778491] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.420 [2024-09-29 16:44:39.778536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.420 [2024-09-29 16:44:39.778574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:39.420 [2024-09-29 16:44:39.785442] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:39.420 [2024-09-29 16:44:39.785489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.420 [2024-09-29 16:44:39.785517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:39.420 4575.00 IOPS, 571.88 MiB/s 00:36:39.420 Latency(us) 00:36:39.420 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:36:39.420 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:36:39.420 nvme0n1 : 2.00 4573.72 571.72 0.00 0.00 3491.78 964.84 10243.03 00:36:39.420 =================================================================================================================== 00:36:39.420 Total : 4573.72 571.72 0.00 0.00 3491.78 964.84 10243.03 00:36:39.420 { 00:36:39.420 "results": [ 00:36:39.420 { 00:36:39.420 "job": "nvme0n1", 00:36:39.420 "core_mask": "0x2", 00:36:39.420 "workload": "randread", 00:36:39.420 "status": "finished", 00:36:39.420 "queue_depth": 16, 00:36:39.420 "io_size": 131072, 00:36:39.420 "runtime": 2.004056, 00:36:39.420 "iops": 4573.72448674089, 00:36:39.420 "mibps": 571.7155608426112, 00:36:39.420 "io_failed": 0, 00:36:39.420 "io_timeout": 0, 00:36:39.420 "avg_latency_us": 3491.7816587873062, 00:36:39.420 "min_latency_us": 964.8355555555555, 00:36:39.420 "max_latency_us": 10243.034074074074 00:36:39.420 } 00:36:39.420 ], 00:36:39.420 "core_count": 1 00:36:39.420 } 00:36:39.420 16:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:39.420 16:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:39.420 16:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:39.420 16:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:39.420 | .driver_specific 00:36:39.420 | .nvme_error 00:36:39.420 | .status_code 00:36:39.420 | .command_transient_transport_error' 00:36:39.677 16:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 295 > 0 )) 00:36:39.677 16:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3320647 
00:36:39.677 16:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3320647 ']' 00:36:39.677 16:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3320647 00:36:39.677 16:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:36:39.677 16:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:39.677 16:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3320647 00:36:39.677 16:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:39.677 16:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:39.677 16:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3320647' 00:36:39.677 killing process with pid 3320647 00:36:39.677 16:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3320647 00:36:39.677 Received shutdown signal, test time was about 2.000000 seconds 00:36:39.677 00:36:39.677 Latency(us) 00:36:39.677 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:39.677 =================================================================================================================== 00:36:39.677 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:39.677 16:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3320647 00:36:41.046 16:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:36:41.046 16:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:41.046 16:44:41 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:36:41.046 16:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:36:41.046 16:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:36:41.046 16:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3321201 00:36:41.046 16:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:36:41.046 16:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3321201 /var/tmp/bperf.sock 00:36:41.046 16:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3321201 ']' 00:36:41.046 16:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:41.047 16:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:41.047 16:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:41.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:41.047 16:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:41.047 16:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:41.047 [2024-09-29 16:44:41.289146] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:36:41.047 [2024-09-29 16:44:41.289270] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3321201 ] 00:36:41.047 [2024-09-29 16:44:41.435073] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:41.304 [2024-09-29 16:44:41.685049] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:41.869 16:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:41.869 16:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:36:41.869 16:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:41.869 16:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:42.126 16:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:42.126 16:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:42.126 16:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:42.126 16:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:42.126 16:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:42.126 16:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:42.384 nvme0n1 00:36:42.384 16:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:36:42.384 16:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:42.384 16:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:42.384 16:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:42.384 16:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:42.384 16:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:42.642 Running I/O for 2 seconds... 
00:36:42.642 [2024-09-29 16:44:43.031892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfe720 00:36:42.642 [2024-09-29 16:44:43.032252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:15649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.642 [2024-09-29 16:44:43.032310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:42.642 [2024-09-29 16:44:43.050374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfe720 00:36:42.642 [2024-09-29 16:44:43.050668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.642 [2024-09-29 16:44:43.050737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:42.642 [2024-09-29 16:44:43.068617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfe720 00:36:42.642 [2024-09-29 16:44:43.069003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.642 [2024-09-29 16:44:43.069047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:42.642 [2024-09-29 16:44:43.086797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfe720 00:36:42.642 [2024-09-29 16:44:43.087163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.642 [2024-09-29 16:44:43.087206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:42.642 [2024-09-29 16:44:43.105116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfe720 00:36:42.642 [2024-09-29 16:44:43.105428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.642 [2024-09-29 16:44:43.105471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:42.642 [2024-09-29 16:44:43.123352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfe720 00:36:42.642 [2024-09-29 16:44:43.123642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.642 [2024-09-29 16:44:43.123697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:42.642 [2024-09-29 16:44:43.141738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfe720 00:36:42.642 [2024-09-29 16:44:43.142008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.642 [2024-09-29 16:44:43.142072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:42.642 [2024-09-29 16:44:43.160472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfe720 00:36:42.642 [2024-09-29 16:44:43.160755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.642 [2024-09-29 16:44:43.160794] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:42.642 [2024-09-29 16:44:43.178749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfe720 00:36:42.642 [2024-09-29 16:44:43.179011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.642 [2024-09-29 16:44:43.179069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:42.642 [2024-09-29 16:44:43.196870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfe720 00:36:42.642 [2024-09-29 16:44:43.197134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.642 [2024-09-29 16:44:43.197187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:42.901 [2024-09-29 16:44:43.215868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfe720 00:36:42.901 [2024-09-29 16:44:43.216135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.901 [2024-09-29 16:44:43.216179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:42.901 [2024-09-29 16:44:43.234148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfe720 00:36:42.901 [2024-09-29 16:44:43.234422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1770 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:36:42.901 [2024-09-29 16:44:43.234465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:42.901 [2024-09-29 16:44:43.252397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfe720 00:36:42.901 [2024-09-29 16:44:43.252659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.901 [2024-09-29 16:44:43.252724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:42.901 [2024-09-29 16:44:43.270735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfe720 00:36:42.901 [2024-09-29 16:44:43.271001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.901 [2024-09-29 16:44:43.271075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:42.901 [2024-09-29 16:44:43.288977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfe720 00:36:42.901 [2024-09-29 16:44:43.289255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.901 [2024-09-29 16:44:43.289300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:42.901 [2024-09-29 16:44:43.307291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfe720 00:36:42.901 [2024-09-29 16:44:43.307555] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:42.901 [2024-09-29 16:44:43.307599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0
[... repetitive log output condensed: the record triplet — tcp.c:2233:data_crc32_calc_done *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfe720, followed by nvme_qpair.c:243 *NOTICE*: WRITE sqid:1 ... len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 and nvme_qpair.c:474 *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) ... sqhd:007c p:0 m:0 dnr:0 — repeats roughly 75 more times between 16:44:43.325 and 16:44:44.718, with cid cycling through 8, 9, 123, 124, 125, 126 and varying lba values; every WRITE on qid:1 fails the data digest check and completes with the same transient transport error ...]
00:36:43.678 13804.00 IOPS, 53.92 MiB/s
[2024-09-29 16:44:44.718404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfe720 00:36:44.197 [2024-09-29 16:44:44.718662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21193 len:1 SGL DATA BLOCK OFFSET 0x0
len:0x1000 00:36:44.197 [2024-09-29 16:44:44.718730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.197 [2024-09-29 16:44:44.736701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfe720 00:36:44.197 [2024-09-29 16:44:44.737000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.197 [2024-09-29 16:44:44.737042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.197 [2024-09-29 16:44:44.755016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfe720 00:36:44.197 [2024-09-29 16:44:44.755336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.197 [2024-09-29 16:44:44.755379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.456 [2024-09-29 16:44:44.774122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfe720 00:36:44.456 [2024-09-29 16:44:44.774402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.456 [2024-09-29 16:44:44.774444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.456 [2024-09-29 16:44:44.792297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfe720 00:36:44.456 [2024-09-29 16:44:44.792575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:123 nsid:1 lba:23178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.456 [2024-09-29 16:44:44.792617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.456 [2024-09-29 16:44:44.810751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfe720 00:36:44.456 [2024-09-29 16:44:44.811014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.456 [2024-09-29 16:44:44.811071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.456 [2024-09-29 16:44:44.829145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfe720 00:36:44.456 [2024-09-29 16:44:44.829419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.456 [2024-09-29 16:44:44.829460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.456 [2024-09-29 16:44:44.847546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfe720 00:36:44.456 [2024-09-29 16:44:44.847863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.456 [2024-09-29 16:44:44.847902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.456 [2024-09-29 16:44:44.865772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfe720 00:36:44.456 [2024-09-29 
16:44:44.866036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.457 [2024-09-29 16:44:44.866103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.457 [2024-09-29 16:44:44.883970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfe720 00:36:44.457 [2024-09-29 16:44:44.884247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.457 [2024-09-29 16:44:44.884290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.457 [2024-09-29 16:44:44.902156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfe720 00:36:44.457 [2024-09-29 16:44:44.902428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.457 [2024-09-29 16:44:44.902471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.457 [2024-09-29 16:44:44.920371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfe720 00:36:44.457 [2024-09-29 16:44:44.920633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.457 [2024-09-29 16:44:44.920684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.457 [2024-09-29 16:44:44.938477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000004480) with pdu=0x200019dfe720 00:36:44.457 [2024-09-29 16:44:44.938762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.457 [2024-09-29 16:44:44.938800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.457 [2024-09-29 16:44:44.956624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfe720 00:36:44.457 [2024-09-29 16:44:44.956904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.457 [2024-09-29 16:44:44.956943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.457 [2024-09-29 16:44:44.974831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfe720 00:36:44.457 [2024-09-29 16:44:44.975093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.457 [2024-09-29 16:44:44.975146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.457 [2024-09-29 16:44:44.992998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfe720 00:36:44.457 [2024-09-29 16:44:44.993269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.457 [2024-09-29 16:44:44.993312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.457 [2024-09-29 16:44:45.011183] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfe720 00:36:44.457 13879.00 IOPS, 54.21 MiB/s [2024-09-29 16:44:45.012306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.457 [2024-09-29 16:44:45.012347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.715 00:36:44.715 Latency(us) 00:36:44.715 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:44.715 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:44.715 nvme0n1 : 2.01 13878.95 54.21 0.00 0.00 9194.39 7184.69 19320.98 00:36:44.715 =================================================================================================================== 00:36:44.715 Total : 13878.95 54.21 0.00 0.00 9194.39 7184.69 19320.98 00:36:44.715 { 00:36:44.715 "results": [ 00:36:44.715 { 00:36:44.715 "job": "nvme0n1", 00:36:44.715 "core_mask": "0x2", 00:36:44.715 "workload": "randwrite", 00:36:44.715 "status": "finished", 00:36:44.715 "queue_depth": 128, 00:36:44.715 "io_size": 4096, 00:36:44.715 "runtime": 2.011535, 00:36:44.715 "iops": 13878.953137777866, 00:36:44.715 "mibps": 54.21466069444479, 00:36:44.715 "io_failed": 0, 00:36:44.715 "io_timeout": 0, 00:36:44.715 "avg_latency_us": 9194.390780619433, 00:36:44.715 "min_latency_us": 7184.687407407408, 00:36:44.715 "max_latency_us": 19320.983703703703 00:36:44.715 } 00:36:44.715 ], 00:36:44.715 "core_count": 1 00:36:44.715 } 00:36:44.715 16:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:44.715 16:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:44.715 16:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:44.715 16:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:44.715 | .driver_specific 00:36:44.715 | .nvme_error 00:36:44.715 | .status_code 00:36:44.715 | .command_transient_transport_error' 00:36:44.973 16:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 109 > 0 )) 00:36:44.973 16:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3321201 00:36:44.973 16:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3321201 ']' 00:36:44.973 16:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3321201 00:36:44.973 16:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:36:44.973 16:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:44.973 16:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3321201 00:36:44.973 16:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:44.973 16:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:44.973 16:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3321201' 00:36:44.973 killing process with pid 3321201 00:36:44.973 16:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3321201 00:36:44.973 Received shutdown signal, test time was about 2.000000 seconds 00:36:44.973 00:36:44.973 Latency(us) 00:36:44.973 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:44.973 =================================================================================================================== 00:36:44.973 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:44.973 16:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3321201 00:36:45.906 16:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:36:45.906 16:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:45.906 16:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:36:45.906 16:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:36:45.906 16:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:36:45.906 16:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3321862 00:36:45.906 16:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:36:45.906 16:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3321862 /var/tmp/bperf.sock 00:36:45.906 16:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3321862 ']' 00:36:45.906 16:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:45.906 16:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:45.906 16:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bperf.sock...' 00:36:45.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:45.907 16:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:45.907 16:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:46.165 [2024-09-29 16:44:46.526691] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:36:46.165 [2024-09-29 16:44:46.526838] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3321862 ] 00:36:46.165 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:46.165 Zero copy mechanism will not be used. 00:36:46.165 [2024-09-29 16:44:46.662782] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:46.423 [2024-09-29 16:44:46.915090] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:46.989 16:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:46.989 16:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:36:46.989 16:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:46.989 16:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:47.246 16:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:47.246 16:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.246 16:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:47.503 16:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.503 16:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:47.503 16:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:47.762 nvme0n1 00:36:47.762 16:44:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:36:47.762 16:44:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.762 16:44:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:47.762 16:44:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.762 16:44:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:47.762 16:44:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:47.762 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:47.762 Zero copy mechanism will not be used. 00:36:47.762 Running I/O for 2 seconds... 
00:36:48.020 [2024-09-29 16:44:48.326244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.020 [2024-09-29 16:44:48.326745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.020 [2024-09-29 16:44:48.326799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:48.020 [2024-09-29 16:44:48.334431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.020 [2024-09-29 16:44:48.334871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.020 [2024-09-29 16:44:48.334913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:48.020 [2024-09-29 16:44:48.342414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.020 [2024-09-29 16:44:48.342866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.020 [2024-09-29 16:44:48.342921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:48.020 [2024-09-29 16:44:48.350183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.020 [2024-09-29 16:44:48.350598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.020 [2024-09-29 16:44:48.350642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.020 [2024-09-29 16:44:48.358072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.020 [2024-09-29 16:44:48.358475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.020 [2024-09-29 16:44:48.358518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:48.020 [2024-09-29 16:44:48.366107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.020 [2024-09-29 16:44:48.366538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.020 [2024-09-29 16:44:48.366582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:48.020 [2024-09-29 16:44:48.373645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.020 [2024-09-29 16:44:48.374099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.020 [2024-09-29 16:44:48.374144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:48.020 [2024-09-29 16:44:48.381348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.020 [2024-09-29 16:44:48.381817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.020 [2024-09-29 
16:44:48.381862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.021 [2024-09-29 16:44:48.389279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.021 [2024-09-29 16:44:48.389719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.021 [2024-09-29 16:44:48.389759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:48.021 [2024-09-29 16:44:48.396840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.021 [2024-09-29 16:44:48.397278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.021 [2024-09-29 16:44:48.397321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:48.021 [2024-09-29 16:44:48.404655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.021 [2024-09-29 16:44:48.405081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.021 [2024-09-29 16:44:48.405124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:48.021 [2024-09-29 16:44:48.412400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.021 [2024-09-29 16:44:48.412855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.021 [2024-09-29 16:44:48.412908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.021 [2024-09-29 16:44:48.420281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.021 [2024-09-29 16:44:48.420687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.021 [2024-09-29 16:44:48.420743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:48.021 [2024-09-29 16:44:48.428064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.021 [2024-09-29 16:44:48.428461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.021 [2024-09-29 16:44:48.428521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:48.021 [2024-09-29 16:44:48.435971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.021 [2024-09-29 16:44:48.436410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.021 [2024-09-29 16:44:48.436453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:48.021 [2024-09-29 16:44:48.444030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.021 [2024-09-29 16:44:48.444424] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.021 [2024-09-29 16:44:48.444467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.021 [2024-09-29 16:44:48.451332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.021 [2024-09-29 16:44:48.451766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.021 [2024-09-29 16:44:48.451819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:48.021 [2024-09-29 16:44:48.458588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.021 [2024-09-29 16:44:48.459008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.021 [2024-09-29 16:44:48.459053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:48.021 [2024-09-29 16:44:48.465876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.021 [2024-09-29 16:44:48.466289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.021 [2024-09-29 16:44:48.466332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:48.021 [2024-09-29 16:44:48.473093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.021 [2024-09-29 16:44:48.473516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.021 [2024-09-29 16:44:48.473559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.021 [2024-09-29 16:44:48.481021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.021 [2024-09-29 16:44:48.481453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.021 [2024-09-29 16:44:48.481497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.021 [2024-09-29 16:44:48.488965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.021 [2024-09-29 16:44:48.489371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.021 [2024-09-29 16:44:48.489414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.021 [2024-09-29 16:44:48.496767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.021 [2024-09-29 16:44:48.497213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.021 [2024-09-29 16:44:48.497257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.021 [2024-09-29 16:44:48.504882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.021 [2024-09-29 16:44:48.505296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.021 [2024-09-29 16:44:48.505339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.021 [2024-09-29 16:44:48.512740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.021 [2024-09-29 16:44:48.513140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.021 [2024-09-29 16:44:48.513184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.021 [2024-09-29 16:44:48.521359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.021 [2024-09-29 16:44:48.521801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.021 [2024-09-29 16:44:48.521843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.021 [2024-09-29 16:44:48.530544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.021 [2024-09-29 16:44:48.530940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.021 [2024-09-29 16:44:48.531003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.021 [2024-09-29 16:44:48.538842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.021 [2024-09-29 16:44:48.539227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.021 [2024-09-29 16:44:48.539265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.021 [2024-09-29 16:44:48.546240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.021 [2024-09-29 16:44:48.546600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.021 [2024-09-29 16:44:48.546637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.021 [2024-09-29 16:44:48.554079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.021 [2024-09-29 16:44:48.554443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.021 [2024-09-29 16:44:48.554480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.021 [2024-09-29 16:44:48.561554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.021 [2024-09-29 16:44:48.561951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.021 [2024-09-29 16:44:48.562013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.021 [2024-09-29 16:44:48.570935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.021 [2024-09-29 16:44:48.571335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.021 [2024-09-29 16:44:48.571374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.021 [2024-09-29 16:44:48.579567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.021 [2024-09-29 16:44:48.580029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.021 [2024-09-29 16:44:48.580074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.280 [2024-09-29 16:44:48.588275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.280 [2024-09-29 16:44:48.588722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.280 [2024-09-29 16:44:48.588777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.280 [2024-09-29 16:44:48.596745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.280 [2024-09-29 16:44:48.597138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.280 [2024-09-29 16:44:48.597183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.280 [2024-09-29 16:44:48.604853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.280 [2024-09-29 16:44:48.605292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.280 [2024-09-29 16:44:48.605337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.280 [2024-09-29 16:44:48.612374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.280 [2024-09-29 16:44:48.612821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.280 [2024-09-29 16:44:48.612860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.280 [2024-09-29 16:44:48.620681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.280 [2024-09-29 16:44:48.621066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.280 [2024-09-29 16:44:48.621125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.280 [2024-09-29 16:44:48.628008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.280 [2024-09-29 16:44:48.628440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.280 [2024-09-29 16:44:48.628476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.280 [2024-09-29 16:44:48.635946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.280 [2024-09-29 16:44:48.636403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.280 [2024-09-29 16:44:48.636445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.280 [2024-09-29 16:44:48.643363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.280 [2024-09-29 16:44:48.643790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.280 [2024-09-29 16:44:48.643828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.280 [2024-09-29 16:44:48.650612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.280 [2024-09-29 16:44:48.651019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.280 [2024-09-29 16:44:48.651077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.280 [2024-09-29 16:44:48.657857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.280 [2024-09-29 16:44:48.658268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.280 [2024-09-29 16:44:48.658311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.280 [2024-09-29 16:44:48.665092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.280 [2024-09-29 16:44:48.665518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.280 [2024-09-29 16:44:48.665562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.280 [2024-09-29 16:44:48.673103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.280 [2024-09-29 16:44:48.673532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.280 [2024-09-29 16:44:48.673574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.280 [2024-09-29 16:44:48.680412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.280 [2024-09-29 16:44:48.680872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.280 [2024-09-29 16:44:48.680925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.280 [2024-09-29 16:44:48.687767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.280 [2024-09-29 16:44:48.688196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.280 [2024-09-29 16:44:48.688239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.280 [2024-09-29 16:44:48.695783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.280 [2024-09-29 16:44:48.696213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.280 [2024-09-29 16:44:48.696257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.280 [2024-09-29 16:44:48.703180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.280 [2024-09-29 16:44:48.703573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.281 [2024-09-29 16:44:48.703614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.281 [2024-09-29 16:44:48.710552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.281 [2024-09-29 16:44:48.710986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.281 [2024-09-29 16:44:48.711044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.281 [2024-09-29 16:44:48.717669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.281 [2024-09-29 16:44:48.718101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.281 [2024-09-29 16:44:48.718145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.281 [2024-09-29 16:44:48.724791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.281 [2024-09-29 16:44:48.725181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.281 [2024-09-29 16:44:48.725225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.281 [2024-09-29 16:44:48.731966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.281 [2024-09-29 16:44:48.732411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.281 [2024-09-29 16:44:48.732453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.281 [2024-09-29 16:44:48.739174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.281 [2024-09-29 16:44:48.739598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.281 [2024-09-29 16:44:48.739642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.281 [2024-09-29 16:44:48.746333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.281 [2024-09-29 16:44:48.746748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.281 [2024-09-29 16:44:48.746787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.281 [2024-09-29 16:44:48.753886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.281 [2024-09-29 16:44:48.754310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.281 [2024-09-29 16:44:48.754353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.281 [2024-09-29 16:44:48.761180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.281 [2024-09-29 16:44:48.761571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.281 [2024-09-29 16:44:48.761613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.281 [2024-09-29 16:44:48.769338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.281 [2024-09-29 16:44:48.769792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.281 [2024-09-29 16:44:48.769830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.281 [2024-09-29 16:44:48.777615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.281 [2024-09-29 16:44:48.778104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.281 [2024-09-29 16:44:48.778148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.281 [2024-09-29 16:44:48.786148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.281 [2024-09-29 16:44:48.786577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.281 [2024-09-29 16:44:48.786620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.281 [2024-09-29 16:44:48.794746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.281 [2024-09-29 16:44:48.795175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.281 [2024-09-29 16:44:48.795219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.281 [2024-09-29 16:44:48.802993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.281 [2024-09-29 16:44:48.803422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.281 [2024-09-29 16:44:48.803465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.281 [2024-09-29 16:44:48.811690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.281 [2024-09-29 16:44:48.812141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.281 [2024-09-29 16:44:48.812184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.281 [2024-09-29 16:44:48.820179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.281 [2024-09-29 16:44:48.820573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.281 [2024-09-29 16:44:48.820616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.281 [2024-09-29 16:44:48.828802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.281 [2024-09-29 16:44:48.829224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.281 [2024-09-29 16:44:48.829268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.281 [2024-09-29 16:44:48.837443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.281 [2024-09-29 16:44:48.837867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.281 [2024-09-29 16:44:48.837907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.541 [2024-09-29 16:44:48.846792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.541 [2024-09-29 16:44:48.847188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.541 [2024-09-29 16:44:48.847247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.541 [2024-09-29 16:44:48.855959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.541 [2024-09-29 16:44:48.856377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.541 [2024-09-29 16:44:48.856429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.541 [2024-09-29 16:44:48.863296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.541 [2024-09-29 16:44:48.863705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.541 [2024-09-29 16:44:48.863759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.541 [2024-09-29 16:44:48.870610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.541 [2024-09-29 16:44:48.871026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.541 [2024-09-29 16:44:48.871069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.541 [2024-09-29 16:44:48.877887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.541 [2024-09-29 16:44:48.878299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.541 [2024-09-29 16:44:48.878342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.541 [2024-09-29 16:44:48.885093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.541 [2024-09-29 16:44:48.885484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.541 [2024-09-29 16:44:48.885526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.541 [2024-09-29 16:44:48.894120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.541 [2024-09-29 16:44:48.894510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.541 [2024-09-29 16:44:48.894553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.541 [2024-09-29 16:44:48.901325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.541 [2024-09-29 16:44:48.901764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.541 [2024-09-29 16:44:48.901816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.541 [2024-09-29 16:44:48.908589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.541 [2024-09-29 16:44:48.908973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.541 [2024-09-29 16:44:48.909030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.541 [2024-09-29 16:44:48.915841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.541 [2024-09-29 16:44:48.916239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.541 [2024-09-29 16:44:48.916282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.542 [2024-09-29 16:44:48.924429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.542 [2024-09-29 16:44:48.924877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.542 [2024-09-29 16:44:48.924914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.542 [2024-09-29 16:44:48.933069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.542 [2024-09-29 16:44:48.933499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.542 [2024-09-29 16:44:48.933557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.542 [2024-09-29 16:44:48.942189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.542 [2024-09-29 16:44:48.942583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.542 [2024-09-29 16:44:48.942625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.542 [2024-09-29 16:44:48.950782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.542 [2024-09-29 16:44:48.951274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.542 [2024-09-29 16:44:48.951316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.542 [2024-09-29 16:44:48.959721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.542 [2024-09-29 16:44:48.960138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.542 [2024-09-29 16:44:48.960181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.542 [2024-09-29 16:44:48.967793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.542 [2024-09-29 16:44:48.968208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.542 [2024-09-29 16:44:48.968252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.542 [2024-09-29 16:44:48.975606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.542 [2024-09-29 16:44:48.976056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.542 [2024-09-29 16:44:48.976099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.542 [2024-09-29 16:44:48.983734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.542 [2024-09-29 16:44:48.984183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.542 [2024-09-29 16:44:48.984226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.542 [2024-09-29 16:44:48.991613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.542 [2024-09-29 16:44:48.992016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.542 [2024-09-29 16:44:48.992067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.542 [2024-09-29 16:44:48.999587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.542 [2024-09-29 16:44:48.999989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.542 [2024-09-29 16:44:49.000025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.542 [2024-09-29 16:44:49.008224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.542 [2024-09-29 16:44:49.008619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.542 [2024-09-29 16:44:49.008661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.542 [2024-09-29 16:44:49.016153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.542 [2024-09-29 16:44:49.016550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.542 [2024-09-29 16:44:49.016593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.542 [2024-09-29 16:44:49.024503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.542 [2024-09-29 16:44:49.024945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.542 [2024-09-29 16:44:49.024997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.542 [2024-09-29 16:44:49.031893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.542 [2024-09-29 16:44:49.032302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.542 [2024-09-29 16:44:49.032344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.542 [2024-09-29 16:44:49.040104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.542 [2024-09-29 16:44:49.040479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.542 [2024-09-29 16:44:49.040522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.542 [2024-09-29 16:44:49.047631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.542 [2024-09-29 16:44:49.048073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.542 [2024-09-29 16:44:49.048116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.542 [2024-09-29 16:44:49.055551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.542 [2024-09-29 16:44:49.055997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.542 [2024-09-29 16:44:49.056041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.542 [2024-09-29 16:44:49.063641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.542 [2024-09-29 16:44:49.064096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.542 [2024-09-29 16:44:49.064139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.542 [2024-09-29 16:44:49.070976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.542 [2024-09-29 16:44:49.071396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.542 [2024-09-29 16:44:49.071438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.542 [2024-09-29 16:44:49.078803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.542 [2024-09-29 16:44:49.079188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.542 [2024-09-29 16:44:49.079231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.542 [2024-09-29 16:44:49.085931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.542 [2024-09-29 16:44:49.086368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.542 [2024-09-29 16:44:49.086411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.542 [2024-09-29 16:44:49.093006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.542 [2024-09-29 16:44:49.093416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.542 [2024-09-29 16:44:49.093459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.542 [2024-09-29 16:44:49.100315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.542 [2024-09-29 16:44:49.100727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.542 [2024-09-29 16:44:49.100783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.801 [2024-09-29 16:44:49.107882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:48.801 [2024-09-29 16:44:49.108330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.801 [2024-09-29 16:44:49.108373] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.801 [2024-09-29 16:44:49.115152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.801 [2024-09-29 16:44:49.115575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.801 [2024-09-29 16:44:49.115617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:48.801 [2024-09-29 16:44:49.123098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.801 [2024-09-29 16:44:49.123488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.801 [2024-09-29 16:44:49.123538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:48.801 [2024-09-29 16:44:49.130267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.801 [2024-09-29 16:44:49.130658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.801 [2024-09-29 16:44:49.130722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:48.801 [2024-09-29 16:44:49.137315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.801 [2024-09-29 16:44:49.137760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:48.801 [2024-09-29 16:44:49.137797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.801 [2024-09-29 16:44:49.145075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.802 [2024-09-29 16:44:49.145470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.802 [2024-09-29 16:44:49.145512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:48.802 [2024-09-29 16:44:49.152378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.802 [2024-09-29 16:44:49.152833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.802 [2024-09-29 16:44:49.152885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:48.802 [2024-09-29 16:44:49.159590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.802 [2024-09-29 16:44:49.159991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.802 [2024-09-29 16:44:49.160047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:48.802 [2024-09-29 16:44:49.166827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.802 [2024-09-29 16:44:49.167278] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.802 [2024-09-29 16:44:49.167321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.802 [2024-09-29 16:44:49.174812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.802 [2024-09-29 16:44:49.175217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.802 [2024-09-29 16:44:49.175261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:48.802 [2024-09-29 16:44:49.181919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.802 [2024-09-29 16:44:49.182324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.802 [2024-09-29 16:44:49.182366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:48.802 [2024-09-29 16:44:49.189057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.802 [2024-09-29 16:44:49.189461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.802 [2024-09-29 16:44:49.189503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:48.802 [2024-09-29 16:44:49.197037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200019dfef90 00:36:48.802 [2024-09-29 16:44:49.197434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.802 [2024-09-29 16:44:49.197477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.802 [2024-09-29 16:44:49.204420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.802 [2024-09-29 16:44:49.204828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.802 [2024-09-29 16:44:49.204880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:48.802 [2024-09-29 16:44:49.211462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.802 [2024-09-29 16:44:49.211894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.802 [2024-09-29 16:44:49.211932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:48.802 [2024-09-29 16:44:49.218578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.802 [2024-09-29 16:44:49.218986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.802 [2024-09-29 16:44:49.219029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:48.802 [2024-09-29 16:44:49.226600] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.802 [2024-09-29 16:44:49.227008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.802 [2024-09-29 16:44:49.227052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.802 [2024-09-29 16:44:49.234294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.802 [2024-09-29 16:44:49.234735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.802 [2024-09-29 16:44:49.234773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:48.802 [2024-09-29 16:44:49.242604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.802 [2024-09-29 16:44:49.243027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.802 [2024-09-29 16:44:49.243071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:48.802 [2024-09-29 16:44:49.250206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.802 [2024-09-29 16:44:49.250620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.802 [2024-09-29 16:44:49.250685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:48.802 [2024-09-29 16:44:49.258998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.802 [2024-09-29 16:44:49.259413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.802 [2024-09-29 16:44:49.259456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.802 [2024-09-29 16:44:49.268133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.802 [2024-09-29 16:44:49.268525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.802 [2024-09-29 16:44:49.268568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:48.802 [2024-09-29 16:44:49.276929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.802 [2024-09-29 16:44:49.277340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.802 [2024-09-29 16:44:49.277383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:48.802 [2024-09-29 16:44:49.285551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.802 [2024-09-29 16:44:49.285965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.802 [2024-09-29 16:44:49.286028] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:48.802 [2024-09-29 16:44:49.294004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.802 [2024-09-29 16:44:49.294217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.802 [2024-09-29 16:44:49.294259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.802 [2024-09-29 16:44:49.302195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.802 [2024-09-29 16:44:49.302598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.802 [2024-09-29 16:44:49.302640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:48.802 [2024-09-29 16:44:49.309861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.802 [2024-09-29 16:44:49.310341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.802 [2024-09-29 16:44:49.310383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:48.802 3906.00 IOPS, 488.25 MiB/s [2024-09-29 16:44:49.318971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.802 [2024-09-29 16:44:49.319327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.802 [2024-09-29 16:44:49.319371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:48.802 [2024-09-29 16:44:49.325494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.802 [2024-09-29 16:44:49.325875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.802 [2024-09-29 16:44:49.325915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.802 [2024-09-29 16:44:49.331988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.802 [2024-09-29 16:44:49.332329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.802 [2024-09-29 16:44:49.332369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:48.802 [2024-09-29 16:44:49.338503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.803 [2024-09-29 16:44:49.338867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.803 [2024-09-29 16:44:49.338906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:48.803 [2024-09-29 16:44:49.345086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.803 [2024-09-29 16:44:49.345453] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.803 [2024-09-29 16:44:49.345496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:48.803 [2024-09-29 16:44:49.351917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.803 [2024-09-29 16:44:49.352353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.803 [2024-09-29 16:44:49.352395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.803 [2024-09-29 16:44:49.358487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:48.803 [2024-09-29 16:44:49.358819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.803 [2024-09-29 16:44:49.358858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:49.062 [2024-09-29 16:44:49.365252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.062 [2024-09-29 16:44:49.365592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.062 [2024-09-29 16:44:49.365634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:49.062 [2024-09-29 16:44:49.372281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200019dfef90 00:36:49.062 [2024-09-29 16:44:49.372604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.062 [2024-09-29 16:44:49.372646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:49.062 [2024-09-29 16:44:49.380106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.062 [2024-09-29 16:44:49.380427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.062 [2024-09-29 16:44:49.380478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.062 [2024-09-29 16:44:49.387133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.062 [2024-09-29 16:44:49.387575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.062 [2024-09-29 16:44:49.387618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:49.062 [2024-09-29 16:44:49.394905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.062 [2024-09-29 16:44:49.395341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.062 [2024-09-29 16:44:49.395384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:49.062 [2024-09-29 16:44:49.402781] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.062 [2024-09-29 16:44:49.403220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.062 [2024-09-29 16:44:49.403263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:49.062 [2024-09-29 16:44:49.410694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.062 [2024-09-29 16:44:49.411025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.062 [2024-09-29 16:44:49.411068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.062 [2024-09-29 16:44:49.418416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.062 [2024-09-29 16:44:49.418838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.062 [2024-09-29 16:44:49.418877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:49.062 [2024-09-29 16:44:49.426275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.062 [2024-09-29 16:44:49.426593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.062 [2024-09-29 16:44:49.426652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:49.062 [2024-09-29 16:44:49.433847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.062 [2024-09-29 16:44:49.434259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.062 [2024-09-29 16:44:49.434301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:49.062 [2024-09-29 16:44:49.441717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.062 [2024-09-29 16:44:49.442130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.062 [2024-09-29 16:44:49.442173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.062 [2024-09-29 16:44:49.449280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.062 [2024-09-29 16:44:49.449661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.062 [2024-09-29 16:44:49.449730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:49.062 [2024-09-29 16:44:49.457143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.062 [2024-09-29 16:44:49.457463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.062 [2024-09-29 16:44:49.457506] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:49.062 [2024-09-29 16:44:49.465001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.062 [2024-09-29 16:44:49.465394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.062 [2024-09-29 16:44:49.465436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:49.062 [2024-09-29 16:44:49.472842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.062 [2024-09-29 16:44:49.473201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.062 [2024-09-29 16:44:49.473243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.062 [2024-09-29 16:44:49.480421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.062 [2024-09-29 16:44:49.480821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.062 [2024-09-29 16:44:49.480860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:49.062 [2024-09-29 16:44:49.488200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.062 [2024-09-29 16:44:49.488532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:49.062 [2024-09-29 16:44:49.488576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:49.062 [2024-09-29 16:44:49.495722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.062 [2024-09-29 16:44:49.496163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.062 [2024-09-29 16:44:49.496206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:49.062 [2024-09-29 16:44:49.503666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.062 [2024-09-29 16:44:49.504048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.062 [2024-09-29 16:44:49.504091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.062 [2024-09-29 16:44:49.511334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.062 [2024-09-29 16:44:49.511724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.062 [2024-09-29 16:44:49.511763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:49.062 [2024-09-29 16:44:49.518925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.062 [2024-09-29 16:44:49.519323] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.062 [2024-09-29 16:44:49.519365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:49.062 [2024-09-29 16:44:49.526438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.062 [2024-09-29 16:44:49.526852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.062 [2024-09-29 16:44:49.526892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:49.062 [2024-09-29 16:44:49.534163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.062 [2024-09-29 16:44:49.534526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.062 [2024-09-29 16:44:49.534568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:49.062 [2024-09-29 16:44:49.541656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.062 [2024-09-29 16:44:49.541981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.062 [2024-09-29 16:44:49.542037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:49.062 [2024-09-29 16:44:49.549367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.063 [2024-09-29 16:44:49.549797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.063 [2024-09-29 16:44:49.549836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:49.063 [2024-09-29 16:44:49.557295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.063 [2024-09-29 16:44:49.557645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.063 [2024-09-29 16:44:49.557719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:49.063 [2024-09-29 16:44:49.565120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.063 [2024-09-29 16:44:49.565491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.063 [2024-09-29 16:44:49.565533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:49.063 [2024-09-29 16:44:49.572071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.063 [2024-09-29 16:44:49.572454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.063 [2024-09-29 16:44:49.572496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:49.063 [2024-09-29 16:44:49.579095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.063 [2024-09-29 16:44:49.579471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.063 [2024-09-29 16:44:49.579514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:49.063 [2024-09-29 16:44:49.586529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.063 [2024-09-29 16:44:49.586893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.063 [2024-09-29 16:44:49.586932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:49.063 [2024-09-29 16:44:49.594162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.063 [2024-09-29 16:44:49.594462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.063 [2024-09-29 16:44:49.594504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:49.063 [2024-09-29 16:44:49.601530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.063 [2024-09-29 16:44:49.601883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.063 [2024-09-29 16:44:49.601921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:49.063 [2024-09-29 16:44:49.609647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.063 [2024-09-29 16:44:49.609967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.063 [2024-09-29 16:44:49.610010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:49.063 [2024-09-29 16:44:49.617347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.063 [2024-09-29 16:44:49.617744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.063 [2024-09-29 16:44:49.617784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:49.323 [2024-09-29 16:44:49.625102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.323 [2024-09-29 16:44:49.625439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.323 [2024-09-29 16:44:49.625482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:49.323 [2024-09-29 16:44:49.632777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.323 [2024-09-29 16:44:49.633121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.323 [2024-09-29 16:44:49.633164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:49.323 [2024-09-29 16:44:49.640136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.323 [2024-09-29 16:44:49.640533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.323 [2024-09-29 16:44:49.640575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:49.323 [2024-09-29 16:44:49.647771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.323 [2024-09-29 16:44:49.648168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.323 [2024-09-29 16:44:49.648212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:49.323 [2024-09-29 16:44:49.655318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.323 [2024-09-29 16:44:49.655647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.323 [2024-09-29 16:44:49.655699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:49.323 [2024-09-29 16:44:49.663196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.323 [2024-09-29 16:44:49.663577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.323 [2024-09-29 16:44:49.663619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:49.323 [2024-09-29 16:44:49.670895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.323 [2024-09-29 16:44:49.671325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.323 [2024-09-29 16:44:49.671367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:49.323 [2024-09-29 16:44:49.678374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.323 [2024-09-29 16:44:49.678789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.323 [2024-09-29 16:44:49.678828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:49.323 [2024-09-29 16:44:49.686148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.323 [2024-09-29 16:44:49.686451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.323 [2024-09-29 16:44:49.686493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:49.323 [2024-09-29 16:44:49.692546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.323 [2024-09-29 16:44:49.692856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.323 [2024-09-29 16:44:49.692895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:49.323 [2024-09-29 16:44:49.698685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.323 [2024-09-29 16:44:49.698973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.323 [2024-09-29 16:44:49.699031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:49.323 [2024-09-29 16:44:49.704818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.323 [2024-09-29 16:44:49.705117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.323 [2024-09-29 16:44:49.705173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:49.323 [2024-09-29 16:44:49.710940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.323 [2024-09-29 16:44:49.711256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.323 [2024-09-29 16:44:49.711297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:49.323 [2024-09-29 16:44:49.717181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.323 [2024-09-29 16:44:49.717501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.323 [2024-09-29 16:44:49.717543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:49.323 [2024-09-29 16:44:49.723415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.323 [2024-09-29 16:44:49.723734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.323 [2024-09-29 16:44:49.723773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:49.323 [2024-09-29 16:44:49.729584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.323 [2024-09-29 16:44:49.729899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.323 [2024-09-29 16:44:49.729938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:49.323 [2024-09-29 16:44:49.736334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.323 [2024-09-29 16:44:49.736661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.323 [2024-09-29 16:44:49.736734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:49.323 [2024-09-29 16:44:49.743034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.323 [2024-09-29 16:44:49.743394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.323 [2024-09-29 16:44:49.743437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:49.323 [2024-09-29 16:44:49.750414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.323 [2024-09-29 16:44:49.750858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.323 [2024-09-29 16:44:49.750897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:49.323 [2024-09-29 16:44:49.757522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.323 [2024-09-29 16:44:49.757840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.323 [2024-09-29 16:44:49.757879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:49.323 [2024-09-29 16:44:49.764532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.323 [2024-09-29 16:44:49.764917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.323 [2024-09-29 16:44:49.764956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:49.323 [2024-09-29 16:44:49.771568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.323 [2024-09-29 16:44:49.771920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.323 [2024-09-29 16:44:49.771959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:49.323 [2024-09-29 16:44:49.778747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.323 [2024-09-29 16:44:49.779175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.323 [2024-09-29 16:44:49.779218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:49.323 [2024-09-29 16:44:49.785648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.323 [2024-09-29 16:44:49.786024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.323 [2024-09-29 16:44:49.786066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:49.323 [2024-09-29 16:44:49.792872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.323 [2024-09-29 16:44:49.793260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.323 [2024-09-29 16:44:49.793302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:49.324 [2024-09-29 16:44:49.800039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.324 [2024-09-29 16:44:49.800352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.324 [2024-09-29 16:44:49.800395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:49.324 [2024-09-29 16:44:49.807456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.324 [2024-09-29 16:44:49.807839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.324 [2024-09-29 16:44:49.807878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:49.324 [2024-09-29 16:44:49.814555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.324 [2024-09-29 16:44:49.814931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.324 [2024-09-29 16:44:49.814970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:49.324 [2024-09-29 16:44:49.821801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.324 [2024-09-29 16:44:49.822171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.324 [2024-09-29 16:44:49.822223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:49.324 [2024-09-29 16:44:49.828661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.324 [2024-09-29 16:44:49.829003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.324 [2024-09-29 16:44:49.829046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:49.324 [2024-09-29 16:44:49.835510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.324 [2024-09-29 16:44:49.835877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.324 [2024-09-29 16:44:49.835916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:49.324 [2024-09-29 16:44:49.842133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.324 [2024-09-29 16:44:49.842434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.324 [2024-09-29 16:44:49.842477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:49.324 [2024-09-29 16:44:49.849264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.324 [2024-09-29 16:44:49.849561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.324 [2024-09-29 16:44:49.849605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:49.324 [2024-09-29 16:44:49.856282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.324 [2024-09-29 16:44:49.856687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.324 [2024-09-29 16:44:49.856743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:49.324 [2024-09-29 16:44:49.863301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.324 [2024-09-29 16:44:49.863668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.324 [2024-09-29 16:44:49.863736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:49.324 [2024-09-29 16:44:49.870328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.324 [2024-09-29 16:44:49.870669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.324 [2024-09-29 16:44:49.870736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:49.324 [2024-09-29 16:44:49.877359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.324 [2024-09-29 16:44:49.877789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.324 [2024-09-29 16:44:49.877828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:49.583 [2024-09-29 16:44:49.884507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.583 [2024-09-29 16:44:49.884811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.583 [2024-09-29 16:44:49.884850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:49.583 [2024-09-29 16:44:49.891646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.583 [2024-09-29 16:44:49.891955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.583 [2024-09-29 16:44:49.892028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:49.583 [2024-09-29 16:44:49.898722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.583 [2024-09-29 16:44:49.899131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.583 [2024-09-29 16:44:49.899173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:49.583 [2024-09-29 16:44:49.905881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.583 [2024-09-29 16:44:49.906191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.583 [2024-09-29 16:44:49.906233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:49.583 [2024-09-29 16:44:49.913172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.583 [2024-09-29 16:44:49.913504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.583 [2024-09-29 16:44:49.913546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:49.583 [2024-09-29 16:44:49.920024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.583 [2024-09-29 16:44:49.920396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.583 [2024-09-29 16:44:49.920439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:49.583 [2024-09-29 16:44:49.927137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.583 [2024-09-29 16:44:49.927436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.583 [2024-09-29 16:44:49.927478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:49.583 [2024-09-29 16:44:49.934252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.583 [2024-09-29 16:44:49.934633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.583 [2024-09-29 16:44:49.934684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:49.583 [2024-09-29 16:44:49.941149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.583 [2024-09-29 16:44:49.941514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.583 [2024-09-29 16:44:49.941565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:49.583 [2024-09-29 16:44:49.948067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.583 [2024-09-29 16:44:49.948392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.583 [2024-09-29 16:44:49.948434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:49.583 [2024-09-29 16:44:49.954943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.583 [2024-09-29 16:44:49.955356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.583 [2024-09-29 16:44:49.955398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:49.583 [2024-09-29 16:44:49.962124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.583 [2024-09-29 16:44:49.962466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.583 [2024-09-29 16:44:49.962509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:49.583 [2024-09-29 16:44:49.969244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.583 [2024-09-29 16:44:49.969578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.583 [2024-09-29 16:44:49.969620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:49.583 [2024-09-29 16:44:49.976117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.583 [2024-09-29 16:44:49.976486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.583 [2024-09-29 16:44:49.976528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:49.583 [2024-09-29 16:44:49.983063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.583 [2024-09-29 16:44:49.983378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.583 [2024-09-29 16:44:49.983421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:49.583 [2024-09-29 16:44:49.990266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.583 [2024-09-29 16:44:49.990615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.583 [2024-09-29 16:44:49.990658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:49.583 [2024-09-29 16:44:49.997264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.584 [2024-09-29 16:44:49.997617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.584 [2024-09-29 16:44:49.997660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:49.584 [2024-09-29 16:44:50.004080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.584 [2024-09-29 16:44:50.004369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.584 [2024-09-29 16:44:50.004412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:49.584 [2024-09-29 16:44:50.010550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.584 [2024-09-29 16:44:50.010885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.584 [2024-09-29 16:44:50.010925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:49.584 [2024-09-29 16:44:50.017616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.584 [2024-09-29 16:44:50.018024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.584 [2024-09-29 16:44:50.018071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:49.584 [2024-09-29 16:44:50.024224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.584 [2024-09-29 16:44:50.024527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.584 [2024-09-29 16:44:50.024570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:49.584 [2024-09-29 16:44:50.030713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90
00:36:49.584 [2024-09-29 16:44:50.031023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.584 [2024-09-29 16:44:50.031068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:49.584 [2024-09-29 16:44:50.037222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with
pdu=0x200019dfef90 00:36:49.584 [2024-09-29 16:44:50.037533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.584 [2024-09-29 16:44:50.037576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:49.584 [2024-09-29 16:44:50.043775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.584 [2024-09-29 16:44:50.044064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.584 [2024-09-29 16:44:50.044107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.584 [2024-09-29 16:44:50.050690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.584 [2024-09-29 16:44:50.051017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.584 [2024-09-29 16:44:50.051060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:49.584 [2024-09-29 16:44:50.057180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.584 [2024-09-29 16:44:50.057491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.584 [2024-09-29 16:44:50.057544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:49.584 [2024-09-29 16:44:50.063645] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.584 [2024-09-29 16:44:50.063957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.584 [2024-09-29 16:44:50.064000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:49.584 [2024-09-29 16:44:50.070566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.584 [2024-09-29 16:44:50.070885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.584 [2024-09-29 16:44:50.070925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.584 [2024-09-29 16:44:50.078556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.584 [2024-09-29 16:44:50.078945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.584 [2024-09-29 16:44:50.078991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:49.584 [2024-09-29 16:44:50.086085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.584 [2024-09-29 16:44:50.086401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.584 [2024-09-29 16:44:50.086444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:49.584 [2024-09-29 16:44:50.093933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.584 [2024-09-29 16:44:50.094221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.584 [2024-09-29 16:44:50.094264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:49.584 [2024-09-29 16:44:50.101257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.584 [2024-09-29 16:44:50.101551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.584 [2024-09-29 16:44:50.101594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.584 [2024-09-29 16:44:50.107801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.584 [2024-09-29 16:44:50.108059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.584 [2024-09-29 16:44:50.108116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:49.584 [2024-09-29 16:44:50.113849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.584 [2024-09-29 16:44:50.114100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.584 [2024-09-29 16:44:50.114140] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:49.584 [2024-09-29 16:44:50.119738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.584 [2024-09-29 16:44:50.119991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.584 [2024-09-29 16:44:50.120048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:49.584 [2024-09-29 16:44:50.126001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.584 [2024-09-29 16:44:50.126261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.584 [2024-09-29 16:44:50.126304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.584 [2024-09-29 16:44:50.132330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.584 [2024-09-29 16:44:50.132611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.584 [2024-09-29 16:44:50.132653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:49.584 [2024-09-29 16:44:50.138566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.584 [2024-09-29 16:44:50.138835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:49.584 [2024-09-29 16:44:50.138874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:49.844 [2024-09-29 16:44:50.145208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.844 [2024-09-29 16:44:50.145473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.844 [2024-09-29 16:44:50.145512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:49.844 [2024-09-29 16:44:50.151721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.844 [2024-09-29 16:44:50.151994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.844 [2024-09-29 16:44:50.152038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.844 [2024-09-29 16:44:50.158182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.844 [2024-09-29 16:44:50.158491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.844 [2024-09-29 16:44:50.158533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:49.844 [2024-09-29 16:44:50.164942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.844 [2024-09-29 16:44:50.165233] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.844 [2024-09-29 16:44:50.165289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:49.844 [2024-09-29 16:44:50.171402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.844 [2024-09-29 16:44:50.171682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.844 [2024-09-29 16:44:50.171740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:49.844 [2024-09-29 16:44:50.178092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.844 [2024-09-29 16:44:50.178449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.844 [2024-09-29 16:44:50.178492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.844 [2024-09-29 16:44:50.185781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.844 [2024-09-29 16:44:50.186185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.844 [2024-09-29 16:44:50.186229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:49.844 [2024-09-29 16:44:50.192885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.844 
[2024-09-29 16:44:50.193271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.844 [2024-09-29 16:44:50.193314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:49.844 [2024-09-29 16:44:50.200326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.844 [2024-09-29 16:44:50.200644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.844 [2024-09-29 16:44:50.200697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:49.844 [2024-09-29 16:44:50.207143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.844 [2024-09-29 16:44:50.207404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.844 [2024-09-29 16:44:50.207446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.844 [2024-09-29 16:44:50.213451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.844 [2024-09-29 16:44:50.213747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.844 [2024-09-29 16:44:50.213787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:49.844 [2024-09-29 16:44:50.220165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.844 [2024-09-29 16:44:50.220444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.844 [2024-09-29 16:44:50.220487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:49.844 [2024-09-29 16:44:50.226784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.844 [2024-09-29 16:44:50.227063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.844 [2024-09-29 16:44:50.227106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:49.844 [2024-09-29 16:44:50.234232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.844 [2024-09-29 16:44:50.234548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.844 [2024-09-29 16:44:50.234590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.844 [2024-09-29 16:44:50.241101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.844 [2024-09-29 16:44:50.241455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.844 [2024-09-29 16:44:50.241493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:49.844 
[2024-09-29 16:44:50.248494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.844 [2024-09-29 16:44:50.248869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.844 [2024-09-29 16:44:50.248908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:49.844 [2024-09-29 16:44:50.255880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.844 [2024-09-29 16:44:50.256177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.844 [2024-09-29 16:44:50.256219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:49.844 [2024-09-29 16:44:50.263327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.844 [2024-09-29 16:44:50.263631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.844 [2024-09-29 16:44:50.263683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.844 [2024-09-29 16:44:50.270822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.844 [2024-09-29 16:44:50.271205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.844 [2024-09-29 16:44:50.271248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:49.844 [2024-09-29 16:44:50.278100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.844 [2024-09-29 16:44:50.278436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.844 [2024-09-29 16:44:50.278479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:49.844 [2024-09-29 16:44:50.285945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.845 [2024-09-29 16:44:50.286257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.845 [2024-09-29 16:44:50.286300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:49.845 [2024-09-29 16:44:50.293790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.845 [2024-09-29 16:44:50.294096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.845 [2024-09-29 16:44:50.294139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.845 [2024-09-29 16:44:50.301177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.845 [2024-09-29 16:44:50.301535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.845 [2024-09-29 16:44:50.301578] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:49.845 [2024-09-29 16:44:50.308913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.845 [2024-09-29 16:44:50.309198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.845 [2024-09-29 16:44:50.309240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:49.845 4131.00 IOPS, 516.38 MiB/s [2024-09-29 16:44:50.317480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:36:49.845 [2024-09-29 16:44:50.317742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.845 [2024-09-29 16:44:50.317781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:49.845 00:36:49.845 Latency(us) 00:36:49.845 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:49.845 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:36:49.845 nvme0n1 : 2.01 4129.15 516.14 0.00 0.00 3863.04 2657.85 14369.37 00:36:49.845 =================================================================================================================== 00:36:49.845 Total : 4129.15 516.14 0.00 0.00 3863.04 2657.85 14369.37 00:36:49.845 { 00:36:49.845 "results": [ 00:36:49.845 { 00:36:49.845 "job": "nvme0n1", 00:36:49.845 "core_mask": "0x2", 00:36:49.845 "workload": "randwrite", 00:36:49.845 "status": "finished", 00:36:49.845 "queue_depth": 16, 00:36:49.845 "io_size": 131072, 00:36:49.845 "runtime": 2.005982, 00:36:49.845 
"iops": 4129.149713207796, 00:36:49.845 "mibps": 516.1437141509745, 00:36:49.845 "io_failed": 0, 00:36:49.845 "io_timeout": 0, 00:36:49.845 "avg_latency_us": 3863.0447962582894, 00:36:49.845 "min_latency_us": 2657.8488888888887, 00:36:49.845 "max_latency_us": 14369.374814814815 00:36:49.845 } 00:36:49.845 ], 00:36:49.845 "core_count": 1 00:36:49.845 } 00:36:49.845 16:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:49.845 16:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:49.845 16:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:49.845 16:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:49.845 | .driver_specific 00:36:49.845 | .nvme_error 00:36:49.845 | .status_code 00:36:49.845 | .command_transient_transport_error' 00:36:50.103 16:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 267 > 0 )) 00:36:50.103 16:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3321862 00:36:50.103 16:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3321862 ']' 00:36:50.103 16:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3321862 00:36:50.103 16:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:36:50.103 16:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:50.103 16:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3321862 00:36:50.361 16:44:50 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:50.361 16:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:50.361 16:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3321862' 00:36:50.361 killing process with pid 3321862 00:36:50.361 16:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3321862 00:36:50.361 Received shutdown signal, test time was about 2.000000 seconds 00:36:50.361 00:36:50.361 Latency(us) 00:36:50.361 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:50.361 =================================================================================================================== 00:36:50.361 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:50.361 16:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3321862 00:36:51.295 16:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3319826 00:36:51.295 16:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3319826 ']' 00:36:51.295 16:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3319826 00:36:51.295 16:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:36:51.295 16:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:51.295 16:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3319826 00:36:51.295 16:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:51.295 16:44:51 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:51.295 16:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3319826' 00:36:51.295 killing process with pid 3319826 00:36:51.295 16:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3319826 00:36:51.295 16:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3319826 00:36:52.670 00:36:52.670 real 0m24.025s 00:36:52.670 user 0m47.022s 00:36:52.670 sys 0m4.678s 00:36:52.670 16:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:52.670 16:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:52.670 ************************************ 00:36:52.670 END TEST nvmf_digest_error 00:36:52.670 ************************************ 00:36:52.670 16:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:36:52.670 16:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:36:52.670 16:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@512 -- # nvmfcleanup 00:36:52.670 16:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:36:52.670 16:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:52.670 16:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:36:52.670 16:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:52.670 16:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:52.670 rmmod nvme_tcp 00:36:52.670 rmmod nvme_fabrics 00:36:52.670 rmmod nvme_keyring 00:36:52.670 16:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:36:52.670 16:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:36:52.670 16:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:36:52.670 16:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@513 -- # '[' -n 3319826 ']' 00:36:52.670 16:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # killprocess 3319826 00:36:52.670 16:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 3319826 ']' 00:36:52.670 16:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 3319826 00:36:52.670 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3319826) - No such process 00:36:52.670 16:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 3319826 is not found' 00:36:52.670 Process with pid 3319826 is not found 00:36:52.670 16:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:36:52.670 16:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:36:52.670 16:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:36:52.670 16:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:36:52.670 16:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-save 00:36:52.670 16:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:36:52.670 16:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-restore 00:36:52.670 16:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:52.670 16:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:52.670 16:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:52.670 16:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:52.670 16:44:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:54.662 16:44:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:54.662 00:36:54.662 real 0m53.649s 00:36:54.662 user 1m37.276s 00:36:54.662 sys 0m10.873s 00:36:54.662 16:44:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:54.662 16:44:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:54.662 ************************************ 00:36:54.662 END TEST nvmf_digest 00:36:54.662 ************************************ 00:36:54.662 16:44:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:36:54.662 16:44:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:36:54.662 16:44:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:36:54.662 16:44:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:36:54.662 16:44:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:36:54.663 16:44:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:54.663 16:44:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.663 ************************************ 00:36:54.663 START TEST nvmf_bdevperf 00:36:54.663 ************************************ 00:36:54.663 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:36:54.921 * Looking for test storage... 
00:36:54.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:54.921 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:54.921 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:36:54.921 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:54.921 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:54.921 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:54.921 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:54.921 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:54.921 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:36:54.921 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:36:54.921 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:36:54.921 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:36:54.921 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:36:54.921 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:36:54.921 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:36:54.921 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:54.921 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:36:54.921 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:36:54.921 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:54.921 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:54.921 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:36:54.921 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:36:54.921 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:54.921 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:36:54.921 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:36:54.921 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:36:54.921 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:36:54.921 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:54.921 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:36:54.921 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:36:54.921 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:54.921 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:54.921 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:36:54.921 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:54.921 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:54.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:54.921 --rc genhtml_branch_coverage=1 00:36:54.921 --rc genhtml_function_coverage=1 00:36:54.921 --rc genhtml_legend=1 00:36:54.921 --rc geninfo_all_blocks=1 00:36:54.921 --rc geninfo_unexecuted_blocks=1 00:36:54.921 00:36:54.921 ' 00:36:54.921 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- 
# LCOV_OPTS=' 00:36:54.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:54.921 --rc genhtml_branch_coverage=1 00:36:54.921 --rc genhtml_function_coverage=1 00:36:54.921 --rc genhtml_legend=1 00:36:54.921 --rc geninfo_all_blocks=1 00:36:54.922 --rc geninfo_unexecuted_blocks=1 00:36:54.922 00:36:54.922 ' 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:54.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:54.922 --rc genhtml_branch_coverage=1 00:36:54.922 --rc genhtml_function_coverage=1 00:36:54.922 --rc genhtml_legend=1 00:36:54.922 --rc geninfo_all_blocks=1 00:36:54.922 --rc geninfo_unexecuted_blocks=1 00:36:54.922 00:36:54.922 ' 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:54.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:54.922 --rc genhtml_branch_coverage=1 00:36:54.922 --rc genhtml_function_coverage=1 00:36:54.922 --rc genhtml_legend=1 00:36:54.922 --rc geninfo_all_blocks=1 00:36:54.922 --rc geninfo_unexecuted_blocks=1 00:36:54.922 00:36:54.922 ' 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:54.922 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # prepare_net_devs 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:36:54.922 16:44:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:56.822 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:56.822 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:36:56.822 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:56.822 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:56.822 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:56.822 16:44:57 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:56.822 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:56.822 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:36:56.822 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:56.822 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:36:56.822 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:36:56.822 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:36:56.822 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:36:56.822 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:36:56.822 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:36:56.822 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:56.822 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:56.822 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:56.822 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:56.822 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:56.822 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:56.822 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:56.822 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:56.822 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:56.822 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:56.822 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:56.823 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:56.823 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:56.823 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:56.823 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # is_hw=yes 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:56.823 16:44:57 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:56.823 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:57.082 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:57.082 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:57.082 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:57.082 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:57.082 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:57.082 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
00:36:57.082 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:57.082 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:57.082 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:57.082 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:36:57.082 00:36:57.082 --- 10.0.0.2 ping statistics --- 00:36:57.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:57.082 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:36:57.082 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:57.082 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:57.082 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:36:57.082 00:36:57.082 --- 10.0.0.1 ping statistics --- 00:36:57.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:57.082 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:36:57.082 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:57.082 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # return 0 00:36:57.082 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:36:57.082 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:57.082 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:36:57.082 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:36:57.082 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:57.082 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:36:57.082 16:44:57 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:36:57.082 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:36:57.082 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:36:57.082 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:36:57.082 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:57.082 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:57.082 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # nvmfpid=3324600 00:36:57.082 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:36:57.082 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # waitforlisten 3324600 00:36:57.082 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 3324600 ']' 00:36:57.083 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:57.083 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:57.083 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:57.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:57.083 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:57.083 16:44:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:57.083 [2024-09-29 16:44:57.580924] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:36:57.083 [2024-09-29 16:44:57.581058] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:57.341 [2024-09-29 16:44:57.727683] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:57.599 [2024-09-29 16:44:57.991761] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:57.599 [2024-09-29 16:44:57.991847] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:57.599 [2024-09-29 16:44:57.991873] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:57.599 [2024-09-29 16:44:57.991897] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:57.599 [2024-09-29 16:44:57.991917] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:57.599 [2024-09-29 16:44:57.992048] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:36:57.599 [2024-09-29 16:44:57.992101] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:57.599 [2024-09-29 16:44:57.992108] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:36:58.165 16:44:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:58.165 16:44:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:36:58.165 16:44:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:36:58.165 16:44:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:58.165 16:44:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:58.165 16:44:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:58.165 16:44:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:58.165 16:44:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.165 16:44:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:58.165 [2024-09-29 16:44:58.630170] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:58.165 16:44:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:58.165 16:44:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:58.165 16:44:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.165 16:44:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:58.165 Malloc0 00:36:58.165 16:44:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:36:58.165 16:44:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:58.165 16:44:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.165 16:44:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:58.165 16:44:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:58.165 16:44:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:58.165 16:44:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.165 16:44:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:58.422 16:44:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:58.422 16:44:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:58.422 16:44:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.422 16:44:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:58.422 [2024-09-29 16:44:58.733926] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:58.422 16:44:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:58.422 16:44:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:36:58.422 16:44:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:36:58.422 16:44:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # config=() 00:36:58.422 
16:44:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # local subsystem config 00:36:58.422 16:44:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:36:58.422 16:44:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:36:58.422 { 00:36:58.422 "params": { 00:36:58.422 "name": "Nvme$subsystem", 00:36:58.422 "trtype": "$TEST_TRANSPORT", 00:36:58.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:58.422 "adrfam": "ipv4", 00:36:58.422 "trsvcid": "$NVMF_PORT", 00:36:58.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:58.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:58.422 "hdgst": ${hdgst:-false}, 00:36:58.422 "ddgst": ${ddgst:-false} 00:36:58.422 }, 00:36:58.422 "method": "bdev_nvme_attach_controller" 00:36:58.422 } 00:36:58.422 EOF 00:36:58.422 )") 00:36:58.422 16:44:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # cat 00:36:58.422 16:44:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # jq . 00:36:58.422 16:44:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@581 -- # IFS=, 00:36:58.422 16:44:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:36:58.422 "params": { 00:36:58.422 "name": "Nvme1", 00:36:58.422 "trtype": "tcp", 00:36:58.422 "traddr": "10.0.0.2", 00:36:58.422 "adrfam": "ipv4", 00:36:58.422 "trsvcid": "4420", 00:36:58.422 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:58.422 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:58.422 "hdgst": false, 00:36:58.422 "ddgst": false 00:36:58.422 }, 00:36:58.422 "method": "bdev_nvme_attach_controller" 00:36:58.422 }' 00:36:58.422 [2024-09-29 16:44:58.821431] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:36:58.422 [2024-09-29 16:44:58.821566] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3324757 ] 00:36:58.422 [2024-09-29 16:44:58.952666] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:58.679 [2024-09-29 16:44:59.188543] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:36:59.245 Running I/O for 1 seconds... 00:37:00.620 6238.00 IOPS, 24.37 MiB/s 00:37:00.620 Latency(us) 00:37:00.620 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:00.620 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:00.620 Verification LBA range: start 0x0 length 0x4000 00:37:00.620 Nvme1n1 : 1.02 6272.65 24.50 0.00 0.00 20313.23 1626.26 17573.36 00:37:00.620 =================================================================================================================== 00:37:00.620 Total : 6272.65 24.50 0.00 0.00 20313.23 1626.26 17573.36 00:37:01.555 16:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3325164 00:37:01.555 16:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:37:01.555 16:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:37:01.555 16:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:37:01.555 16:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # config=() 00:37:01.555 16:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # local subsystem config 00:37:01.555 16:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:37:01.555 16:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:37:01.555 { 00:37:01.555 "params": { 00:37:01.555 "name": "Nvme$subsystem", 00:37:01.555 "trtype": "$TEST_TRANSPORT", 00:37:01.555 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:01.555 "adrfam": "ipv4", 00:37:01.555 "trsvcid": "$NVMF_PORT", 00:37:01.555 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:01.555 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:01.555 "hdgst": ${hdgst:-false}, 00:37:01.555 "ddgst": ${ddgst:-false} 00:37:01.555 }, 00:37:01.555 "method": "bdev_nvme_attach_controller" 00:37:01.555 } 00:37:01.555 EOF 00:37:01.555 )") 00:37:01.555 16:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # cat 00:37:01.555 16:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # jq . 00:37:01.555 16:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@581 -- # IFS=, 00:37:01.555 16:45:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:37:01.555 "params": { 00:37:01.555 "name": "Nvme1", 00:37:01.555 "trtype": "tcp", 00:37:01.555 "traddr": "10.0.0.2", 00:37:01.555 "adrfam": "ipv4", 00:37:01.555 "trsvcid": "4420", 00:37:01.555 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:01.555 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:01.555 "hdgst": false, 00:37:01.555 "ddgst": false 00:37:01.555 }, 00:37:01.555 "method": "bdev_nvme_attach_controller" 00:37:01.555 }' 00:37:01.555 [2024-09-29 16:45:01.885544] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:37:01.555 [2024-09-29 16:45:01.885724] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3325164 ] 00:37:01.555 [2024-09-29 16:45:02.019353] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:01.813 [2024-09-29 16:45:02.262009] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:02.378 Running I/O for 15 seconds... 00:37:04.247 6048.00 IOPS, 23.62 MiB/s 6099.50 IOPS, 23.83 MiB/s 16:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3324600 00:37:04.247 16:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:37:04.509 [2024-09-29 16:45:04.825406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:103352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:04.509 [2024-09-29 16:45:04.825484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.509 [2024-09-29 16:45:04.825538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:103360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:04.509 [2024-09-29 16:45:04.825562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.509 [2024-09-29 16:45:04.825589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:04.509 [2024-09-29 16:45:04.825628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.509 [2024-09-29 16:45:04.825653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:103376 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:37:04.509 [2024-09-29 16:45:04.825710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.509 [... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion pairs omitted: further WRITE and READ commands on qid:1 all completed with ABORTED - SQ DELETION (00/08) ...] 00:37:04.511 [2024-09-29 16:45:04.829476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.511 [2024-09-29 16:45:04.829499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:102992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.511 [2024-09-29 16:45:04.829521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.511 [2024-09-29 16:45:04.829544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:103000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.511 [2024-09-29 16:45:04.829566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.511 [2024-09-29 16:45:04.829589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:103008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.511 [2024-09-29 16:45:04.829610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.511 [2024-09-29 16:45:04.829633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:103016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.511 [2024-09-29 16:45:04.829670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.511 [2024-09-29 16:45:04.829703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:103024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.511 [2024-09-29 16:45:04.829725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.511 [2024-09-29 16:45:04.829748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:103032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:04.511 [2024-09-29 16:45:04.829770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.511 [2024-09-29 16:45:04.829793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:103040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.511 [2024-09-29 16:45:04.829814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.511 [2024-09-29 16:45:04.829838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:103048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.511 [2024-09-29 16:45:04.829860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.511 [2024-09-29 16:45:04.829883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:103056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.511 [2024-09-29 16:45:04.829905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.511 [2024-09-29 16:45:04.829928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.511 [2024-09-29 16:45:04.829965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.511 [2024-09-29 16:45:04.829989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:103072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.511 [2024-09-29 16:45:04.830010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.511 [2024-09-29 16:45:04.830043] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:103080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.511 [2024-09-29 16:45:04.830063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.511 [2024-09-29 16:45:04.830086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:103088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.511 [2024-09-29 16:45:04.830106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.511 [2024-09-29 16:45:04.830128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:103096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.511 [2024-09-29 16:45:04.830149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.511 [2024-09-29 16:45:04.830172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:103104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.511 [2024-09-29 16:45:04.830192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.511 [2024-09-29 16:45:04.830214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:103112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.511 [2024-09-29 16:45:04.830235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.511 [2024-09-29 16:45:04.830262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:103120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.511 [2024-09-29 16:45:04.830284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.511 [2024-09-29 16:45:04.830308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:103128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.511 [2024-09-29 16:45:04.830328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.511 [2024-09-29 16:45:04.830351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:103136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.511 [2024-09-29 16:45:04.830371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.511 [2024-09-29 16:45:04.830394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:103144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.511 [2024-09-29 16:45:04.830414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.511 [2024-09-29 16:45:04.830437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:103152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.511 [2024-09-29 16:45:04.830458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.511 [2024-09-29 16:45:04.830480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:103160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.511 [2024-09-29 16:45:04.830501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.511 [2024-09-29 16:45:04.830523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:103168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:37:04.511 [2024-09-29 16:45:04.830544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.511 [2024-09-29 16:45:04.830566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:103176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.512 [2024-09-29 16:45:04.830587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.512 [2024-09-29 16:45:04.830609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:103184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.512 [2024-09-29 16:45:04.830629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.512 [2024-09-29 16:45:04.830682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.512 [2024-09-29 16:45:04.830707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.512 [2024-09-29 16:45:04.830733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:103200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.512 [2024-09-29 16:45:04.830754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.512 [2024-09-29 16:45:04.830777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:103208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.512 [2024-09-29 16:45:04.830798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.512 [2024-09-29 16:45:04.830821] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:103216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.512 [2024-09-29 16:45:04.830847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.512 [2024-09-29 16:45:04.830871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:103224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.512 [2024-09-29 16:45:04.830892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.512 [2024-09-29 16:45:04.830915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:103232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.512 [2024-09-29 16:45:04.830936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.512 [2024-09-29 16:45:04.830974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:103240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.512 [2024-09-29 16:45:04.830996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.512 [2024-09-29 16:45:04.831019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:103248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.512 [2024-09-29 16:45:04.831040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.512 [2024-09-29 16:45:04.831062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:103256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.512 [2024-09-29 16:45:04.831083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.512 [2024-09-29 16:45:04.831105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:103264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.512 [2024-09-29 16:45:04.831136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.512 [2024-09-29 16:45:04.831158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.512 [2024-09-29 16:45:04.831179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.512 [2024-09-29 16:45:04.831201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:103280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.512 [2024-09-29 16:45:04.831222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.512 [2024-09-29 16:45:04.831244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:103288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.512 [2024-09-29 16:45:04.831265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.512 [2024-09-29 16:45:04.831287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:103296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.512 [2024-09-29 16:45:04.831307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.512 [2024-09-29 16:45:04.831329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:103304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:37:04.512 [2024-09-29 16:45:04.831350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.512 [2024-09-29 16:45:04.831372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:103312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.512 [2024-09-29 16:45:04.831392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.512 [2024-09-29 16:45:04.831422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:103320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.512 [2024-09-29 16:45:04.831444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.512 [2024-09-29 16:45:04.831475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:103328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.512 [2024-09-29 16:45:04.831496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.512 [2024-09-29 16:45:04.831519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:103336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:04.512 [2024-09-29 16:45:04.831539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.512 [2024-09-29 16:45:04.831560] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2f00 is same with the state(6) to be set 00:37:04.512 [2024-09-29 16:45:04.831586] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:04.512 [2024-09-29 16:45:04.831604] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:37:04.512 [2024-09-29 16:45:04.831622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103344 len:8 PRP1 0x0 PRP2 0x0 00:37:04.512 [2024-09-29 16:45:04.831642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.512 [2024-09-29 16:45:04.831944] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f2f00 was disconnected and freed. reset controller. 00:37:04.512 [2024-09-29 16:45:04.832067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:04.512 [2024-09-29 16:45:04.832095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.512 [2024-09-29 16:45:04.832135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:04.512 [2024-09-29 16:45:04.832156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.512 [2024-09-29 16:45:04.832178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:04.512 [2024-09-29 16:45:04.832199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.512 [2024-09-29 16:45:04.832219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:04.512 [2024-09-29 16:45:04.832239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:04.512 [2024-09-29 16:45:04.832258] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:04.512 [2024-09-29 16:45:04.836233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:04.512 [2024-09-29 16:45:04.836288] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:04.512 [2024-09-29 16:45:04.837092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:04.512 [2024-09-29 16:45:04.837140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:04.512 [2024-09-29 16:45:04.837178] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:04.512 [2024-09-29 16:45:04.837465] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:04.512 [2024-09-29 16:45:04.837754] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:04.512 [2024-09-29 16:45:04.837784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:04.512 [2024-09-29 16:45:04.837809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:04.512 [2024-09-29 16:45:04.841510] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:04.512 [2024-09-29 16:45:04.851133] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:04.512 [2024-09-29 16:45:04.851630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:04.512 [2024-09-29 16:45:04.851697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:04.512 [2024-09-29 16:45:04.851746] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:04.512 [2024-09-29 16:45:04.852037] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:04.512 [2024-09-29 16:45:04.852328] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:04.512 [2024-09-29 16:45:04.852358] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:04.512 [2024-09-29 16:45:04.852380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:04.512 [2024-09-29 16:45:04.856566] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:04.512 [2024-09-29 16:45:04.865803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:04.512 [2024-09-29 16:45:04.866276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:04.512 [2024-09-29 16:45:04.866318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:04.512 [2024-09-29 16:45:04.866344] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:04.512 [2024-09-29 16:45:04.866635] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:04.512 [2024-09-29 16:45:04.866941] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:04.512 [2024-09-29 16:45:04.866973] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:04.512 [2024-09-29 16:45:04.866995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:04.512 [2024-09-29 16:45:04.871218] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:04.513 [2024-09-29 16:45:04.880366] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:04.513 [2024-09-29 16:45:04.880850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:04.513 [2024-09-29 16:45:04.880891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:04.513 [2024-09-29 16:45:04.880916] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:04.513 [2024-09-29 16:45:04.881205] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:04.513 [2024-09-29 16:45:04.881498] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:04.513 [2024-09-29 16:45:04.881529] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:04.513 [2024-09-29 16:45:04.881556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:04.513 [2024-09-29 16:45:04.885830] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:04.513 [2024-09-29 16:45:04.894977] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:04.513 [2024-09-29 16:45:04.895431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:04.513 [2024-09-29 16:45:04.895473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:04.513 [2024-09-29 16:45:04.895499] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:04.513 [2024-09-29 16:45:04.895800] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:04.513 [2024-09-29 16:45:04.896092] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:04.513 [2024-09-29 16:45:04.896123] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:04.513 [2024-09-29 16:45:04.896145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:04.513 [2024-09-29 16:45:04.900302] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:04.513 [2024-09-29 16:45:04.909596] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:04.513 [2024-09-29 16:45:04.910055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:04.513 [2024-09-29 16:45:04.910096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:04.513 [2024-09-29 16:45:04.910122] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:04.513 [2024-09-29 16:45:04.910409] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:04.513 [2024-09-29 16:45:04.910711] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:04.513 [2024-09-29 16:45:04.910742] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:04.513 [2024-09-29 16:45:04.910764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:04.513 [2024-09-29 16:45:04.914937] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:04.513 [2024-09-29 16:45:04.924260] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:04.513 [2024-09-29 16:45:04.924701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:04.513 [2024-09-29 16:45:04.924743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:04.513 [2024-09-29 16:45:04.924769] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:04.513 [2024-09-29 16:45:04.925055] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:04.513 [2024-09-29 16:45:04.925344] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:04.513 [2024-09-29 16:45:04.925374] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:04.513 [2024-09-29 16:45:04.925395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:04.513 [2024-09-29 16:45:04.929573] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:04.513 [2024-09-29 16:45:04.938979] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:04.513 [2024-09-29 16:45:04.939475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:04.513 [2024-09-29 16:45:04.939532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:04.513 [2024-09-29 16:45:04.939557] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:04.513 [2024-09-29 16:45:04.939883] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:04.513 [2024-09-29 16:45:04.940177] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:04.513 [2024-09-29 16:45:04.940207] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:04.513 [2024-09-29 16:45:04.940228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:04.513 [2024-09-29 16:45:04.944410] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:04.513 [2024-09-29 16:45:04.953500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:04.513 [2024-09-29 16:45:04.953989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:04.513 [2024-09-29 16:45:04.954030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:04.513 [2024-09-29 16:45:04.954056] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:04.513 [2024-09-29 16:45:04.954343] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:04.513 [2024-09-29 16:45:04.954632] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:04.513 [2024-09-29 16:45:04.954661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:04.513 [2024-09-29 16:45:04.954702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:04.513 [2024-09-29 16:45:04.958890] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:04.513 [2024-09-29 16:45:04.967978] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:04.513 [2024-09-29 16:45:04.968457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:04.513 [2024-09-29 16:45:04.968498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:04.513 [2024-09-29 16:45:04.968524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:04.513 [2024-09-29 16:45:04.968838] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:04.513 [2024-09-29 16:45:04.969130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:04.513 [2024-09-29 16:45:04.969161] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:04.513 [2024-09-29 16:45:04.969183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:04.513 [2024-09-29 16:45:04.973343] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:04.513 [2024-09-29 16:45:04.982642] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:04.514 [2024-09-29 16:45:04.983078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:04.514 [2024-09-29 16:45:04.983118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:04.514 [2024-09-29 16:45:04.983144] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:04.514 [2024-09-29 16:45:04.983430] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:04.514 [2024-09-29 16:45:04.983743] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:04.514 [2024-09-29 16:45:04.983773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:04.514 [2024-09-29 16:45:04.983795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:04.514 [2024-09-29 16:45:04.987965] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:04.514 [2024-09-29 16:45:04.997301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:04.514 [2024-09-29 16:45:04.997776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:04.514 [2024-09-29 16:45:04.997819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:04.514 [2024-09-29 16:45:04.997845] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:04.514 [2024-09-29 16:45:04.998133] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:04.514 [2024-09-29 16:45:04.998422] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:04.514 [2024-09-29 16:45:04.998452] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:04.514 [2024-09-29 16:45:04.998473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:04.514 [2024-09-29 16:45:05.002622] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:04.514 [2024-09-29 16:45:05.011927] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:04.514 [2024-09-29 16:45:05.012391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:04.514 [2024-09-29 16:45:05.012432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:04.514 [2024-09-29 16:45:05.012458] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:04.514 [2024-09-29 16:45:05.012758] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:04.514 [2024-09-29 16:45:05.013047] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:04.514 [2024-09-29 16:45:05.013077] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:04.514 [2024-09-29 16:45:05.013099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:04.514 [2024-09-29 16:45:05.017261] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:04.514 [2024-09-29 16:45:05.026512] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:04.514 [2024-09-29 16:45:05.027001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:04.514 [2024-09-29 16:45:05.027051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:04.514 [2024-09-29 16:45:05.027076] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:04.514 [2024-09-29 16:45:05.027376] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:04.514 [2024-09-29 16:45:05.027665] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:04.514 [2024-09-29 16:45:05.027717] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:04.514 [2024-09-29 16:45:05.027746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:04.514 [2024-09-29 16:45:05.031904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:04.514 [2024-09-29 16:45:05.041209] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:04.514 [2024-09-29 16:45:05.041644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:04.514 [2024-09-29 16:45:05.041695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:04.514 [2024-09-29 16:45:05.041722] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:04.514 [2024-09-29 16:45:05.042009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:04.514 [2024-09-29 16:45:05.042310] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:04.514 [2024-09-29 16:45:05.042341] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:04.514 [2024-09-29 16:45:05.042363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:04.514 [2024-09-29 16:45:05.046492] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:04.514 [2024-09-29 16:45:05.055742] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:04.514 [2024-09-29 16:45:05.056187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:04.514 [2024-09-29 16:45:05.056227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:04.514 [2024-09-29 16:45:05.056253] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:04.514 [2024-09-29 16:45:05.056539] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:04.514 [2024-09-29 16:45:05.056842] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:04.514 [2024-09-29 16:45:05.056873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:04.514 [2024-09-29 16:45:05.056895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:04.514 [2024-09-29 16:45:05.061043] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:04.774 [2024-09-29 16:45:05.070631] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:04.774 [2024-09-29 16:45:05.071090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:04.774 [2024-09-29 16:45:05.071134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:04.774 [2024-09-29 16:45:05.071162] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:04.774 [2024-09-29 16:45:05.071450] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:04.774 [2024-09-29 16:45:05.071752] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:04.774 [2024-09-29 16:45:05.071784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:04.774 [2024-09-29 16:45:05.071806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:04.774 [2024-09-29 16:45:05.075951] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:04.774 [2024-09-29 16:45:05.085194] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:04.774 [2024-09-29 16:45:05.085691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:04.774 [2024-09-29 16:45:05.085734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:04.774 [2024-09-29 16:45:05.085762] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:04.774 [2024-09-29 16:45:05.086048] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:04.774 [2024-09-29 16:45:05.086337] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:04.774 [2024-09-29 16:45:05.086368] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:04.774 [2024-09-29 16:45:05.086389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:04.774 [2024-09-29 16:45:05.090535] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:04.774 [2024-09-29 16:45:05.099818] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:04.774 [2024-09-29 16:45:05.100279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:04.774 [2024-09-29 16:45:05.100331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:04.774 [2024-09-29 16:45:05.100355] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:04.774 [2024-09-29 16:45:05.100665] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:04.774 [2024-09-29 16:45:05.100975] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:04.774 [2024-09-29 16:45:05.101006] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:04.774 [2024-09-29 16:45:05.101027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:04.774 [2024-09-29 16:45:05.105161] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:04.774 [2024-09-29 16:45:05.114403] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:04.774 [2024-09-29 16:45:05.114858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:04.774 [2024-09-29 16:45:05.114900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:04.774 [2024-09-29 16:45:05.114926] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:04.774 [2024-09-29 16:45:05.115214] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:04.774 [2024-09-29 16:45:05.115502] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:04.774 [2024-09-29 16:45:05.115532] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:04.774 [2024-09-29 16:45:05.115554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:04.774 [2024-09-29 16:45:05.119725] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:04.774 [2024-09-29 16:45:05.128973] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:04.774 [2024-09-29 16:45:05.129544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:04.774 [2024-09-29 16:45:05.129596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:04.774 [2024-09-29 16:45:05.129620] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:04.774 [2024-09-29 16:45:05.129934] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:04.774 [2024-09-29 16:45:05.130223] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:04.774 [2024-09-29 16:45:05.130254] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:04.774 [2024-09-29 16:45:05.130275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:04.774 [2024-09-29 16:45:05.134413] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:04.774 [2024-09-29 16:45:05.143441] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:04.774 [2024-09-29 16:45:05.143895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:04.775 [2024-09-29 16:45:05.143937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:04.775 [2024-09-29 16:45:05.143963] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:04.775 [2024-09-29 16:45:05.144249] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:04.775 [2024-09-29 16:45:05.144537] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:04.775 [2024-09-29 16:45:05.144567] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:04.775 [2024-09-29 16:45:05.144589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:04.775 [2024-09-29 16:45:05.148741] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:04.775 [2024-09-29 16:45:05.157981] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:04.775 [2024-09-29 16:45:05.158459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:04.775 [2024-09-29 16:45:05.158500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:04.775 [2024-09-29 16:45:05.158526] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:04.775 [2024-09-29 16:45:05.158826] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:04.775 [2024-09-29 16:45:05.159114] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:04.775 [2024-09-29 16:45:05.159145] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:04.775 [2024-09-29 16:45:05.159166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:04.775 [2024-09-29 16:45:05.163302] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:04.775 [2024-09-29 16:45:05.172544] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:04.775 [2024-09-29 16:45:05.173011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:04.775 [2024-09-29 16:45:05.173046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:04.775 [2024-09-29 16:45:05.173084] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:04.775 [2024-09-29 16:45:05.173373] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:04.775 [2024-09-29 16:45:05.173661] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:04.775 [2024-09-29 16:45:05.173713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:04.775 [2024-09-29 16:45:05.173742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:04.775 [2024-09-29 16:45:05.177877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:04.775 [2024-09-29 16:45:05.187126] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:04.775 [2024-09-29 16:45:05.187591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:04.775 [2024-09-29 16:45:05.187631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:04.775 [2024-09-29 16:45:05.187657] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:04.775 [2024-09-29 16:45:05.187954] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:04.775 [2024-09-29 16:45:05.188243] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:04.775 [2024-09-29 16:45:05.188273] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:04.775 [2024-09-29 16:45:05.188295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:04.775 [2024-09-29 16:45:05.192429] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:04.775 [2024-09-29 16:45:05.201658] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:04.775 [2024-09-29 16:45:05.202135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:04.775 [2024-09-29 16:45:05.202176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:04.775 [2024-09-29 16:45:05.202202] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:04.775 [2024-09-29 16:45:05.202488] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:04.775 [2024-09-29 16:45:05.202788] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:04.775 [2024-09-29 16:45:05.202819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:04.775 [2024-09-29 16:45:05.202840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:04.775 [2024-09-29 16:45:05.206976] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:04.775 [2024-09-29 16:45:05.216250] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:04.775 [2024-09-29 16:45:05.216717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:04.775 [2024-09-29 16:45:05.216758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:04.775 [2024-09-29 16:45:05.216785] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:04.775 [2024-09-29 16:45:05.217071] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:04.775 [2024-09-29 16:45:05.217360] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:04.775 [2024-09-29 16:45:05.217389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:04.775 [2024-09-29 16:45:05.217411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:04.775 [2024-09-29 16:45:05.221570] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:04.775 [2024-09-29 16:45:05.230839] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:04.775 [2024-09-29 16:45:05.231292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:04.775 [2024-09-29 16:45:05.231333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:04.775 [2024-09-29 16:45:05.231359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:04.775 [2024-09-29 16:45:05.231646] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:04.775 [2024-09-29 16:45:05.231944] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:04.775 [2024-09-29 16:45:05.231984] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:04.775 [2024-09-29 16:45:05.232005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:04.775 [2024-09-29 16:45:05.236173] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:04.775 [2024-09-29 16:45:05.245505] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:04.775 [2024-09-29 16:45:05.245925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:04.775 [2024-09-29 16:45:05.245981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:04.775 [2024-09-29 16:45:05.246007] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:04.775 [2024-09-29 16:45:05.246306] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:04.775 [2024-09-29 16:45:05.246601] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:04.775 [2024-09-29 16:45:05.246627] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:04.775 [2024-09-29 16:45:05.246682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:04.775 [2024-09-29 16:45:05.250834] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:04.775 [2024-09-29 16:45:05.260049] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:04.775 [2024-09-29 16:45:05.260492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:04.775 [2024-09-29 16:45:05.260533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:04.775 [2024-09-29 16:45:05.260559] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:04.775 [2024-09-29 16:45:05.260870] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:04.775 [2024-09-29 16:45:05.261174] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:04.775 [2024-09-29 16:45:05.261205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:04.775 [2024-09-29 16:45:05.261227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:04.775 [2024-09-29 16:45:05.265329] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:04.775 [2024-09-29 16:45:05.274647] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:04.775 [2024-09-29 16:45:05.275119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:04.775 [2024-09-29 16:45:05.275159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:04.775 [2024-09-29 16:45:05.275184] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:04.775 [2024-09-29 16:45:05.275478] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:04.775 [2024-09-29 16:45:05.275789] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:04.775 [2024-09-29 16:45:05.275817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:04.775 [2024-09-29 16:45:05.275837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:04.775 [2024-09-29 16:45:05.280017] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:04.775 [2024-09-29 16:45:05.289216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:04.775 [2024-09-29 16:45:05.289725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:04.775 [2024-09-29 16:45:05.289763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:04.776 [2024-09-29 16:45:05.289787] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:04.776 [2024-09-29 16:45:05.290078] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:04.776 [2024-09-29 16:45:05.290369] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:04.776 [2024-09-29 16:45:05.290399] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:04.776 [2024-09-29 16:45:05.290421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:04.776 [2024-09-29 16:45:05.294681] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:04.776 [2024-09-29 16:45:05.303812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:04.776 [2024-09-29 16:45:05.304308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:04.776 [2024-09-29 16:45:05.304349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:04.776 [2024-09-29 16:45:05.304375] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:04.776 [2024-09-29 16:45:05.304685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:04.776 [2024-09-29 16:45:05.304992] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:04.776 [2024-09-29 16:45:05.305023] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:04.776 [2024-09-29 16:45:05.305045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:04.776 [2024-09-29 16:45:05.309263] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:04.776 [2024-09-29 16:45:05.318505] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:04.776 [2024-09-29 16:45:05.318946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:04.776 [2024-09-29 16:45:05.319011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:04.776 [2024-09-29 16:45:05.319036] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:04.776 [2024-09-29 16:45:05.319325] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:04.776 [2024-09-29 16:45:05.319613] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:04.776 [2024-09-29 16:45:05.319643] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:04.776 [2024-09-29 16:45:05.319701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:04.776 [2024-09-29 16:45:05.323951] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:04.776 [2024-09-29 16:45:05.333348] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:04.776 [2024-09-29 16:45:05.333871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:04.776 [2024-09-29 16:45:05.333911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:04.776 [2024-09-29 16:45:05.333935] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:04.776 [2024-09-29 16:45:05.334278] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:04.776 [2024-09-29 16:45:05.334584] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:04.776 [2024-09-29 16:45:05.334617] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:04.776 [2024-09-29 16:45:05.334639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:05.035 [2024-09-29 16:45:05.339165] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:05.035 [2024-09-29 16:45:05.348086] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.035 [2024-09-29 16:45:05.348573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.035 [2024-09-29 16:45:05.348612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.035 [2024-09-29 16:45:05.348636] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.035 [2024-09-29 16:45:05.348938] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.035 [2024-09-29 16:45:05.349249] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.035 [2024-09-29 16:45:05.349279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.035 [2024-09-29 16:45:05.349301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.035 [2024-09-29 16:45:05.353473] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.035 [2024-09-29 16:45:05.362579] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.035 [2024-09-29 16:45:05.363066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.035 [2024-09-29 16:45:05.363108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.035 [2024-09-29 16:45:05.363135] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.035 [2024-09-29 16:45:05.363420] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.035 [2024-09-29 16:45:05.363730] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.035 [2024-09-29 16:45:05.363761] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.035 [2024-09-29 16:45:05.363782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.035 [2024-09-29 16:45:05.367929] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.035 [2024-09-29 16:45:05.377247] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.035 [2024-09-29 16:45:05.377726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.035 [2024-09-29 16:45:05.377768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.035 [2024-09-29 16:45:05.377794] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.035 [2024-09-29 16:45:05.378083] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.035 [2024-09-29 16:45:05.378374] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.035 [2024-09-29 16:45:05.378404] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.035 [2024-09-29 16:45:05.378425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.035 [2024-09-29 16:45:05.382590] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.035 [2024-09-29 16:45:05.391902] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.035 [2024-09-29 16:45:05.392433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.035 [2024-09-29 16:45:05.392483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.035 [2024-09-29 16:45:05.392507] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.035 [2024-09-29 16:45:05.392827] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.035 [2024-09-29 16:45:05.393117] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.035 [2024-09-29 16:45:05.393147] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.035 [2024-09-29 16:45:05.393170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.035 [2024-09-29 16:45:05.397325] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.035 [2024-09-29 16:45:05.406379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.035 [2024-09-29 16:45:05.406867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.035 [2024-09-29 16:45:05.406908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.035 [2024-09-29 16:45:05.406934] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.035 [2024-09-29 16:45:05.407220] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.035 [2024-09-29 16:45:05.407508] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.035 [2024-09-29 16:45:05.407538] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.035 [2024-09-29 16:45:05.407560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.035 [2024-09-29 16:45:05.411710] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.035 [2024-09-29 16:45:05.420985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.035 [2024-09-29 16:45:05.421463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.035 [2024-09-29 16:45:05.421504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.035 [2024-09-29 16:45:05.421530] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.035 [2024-09-29 16:45:05.421839] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.035 [2024-09-29 16:45:05.422130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.035 [2024-09-29 16:45:05.422161] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.035 [2024-09-29 16:45:05.422182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.035 [2024-09-29 16:45:05.426324] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.035 [2024-09-29 16:45:05.435610] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.035 [2024-09-29 16:45:05.436100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.035 [2024-09-29 16:45:05.436142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.035 [2024-09-29 16:45:05.436168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.035 [2024-09-29 16:45:05.436454] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.035 [2024-09-29 16:45:05.436758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.035 [2024-09-29 16:45:05.436789] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.035 [2024-09-29 16:45:05.436811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.035 [2024-09-29 16:45:05.441017] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.035 [2024-09-29 16:45:05.450066] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.035 [2024-09-29 16:45:05.450612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.035 [2024-09-29 16:45:05.450653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.035 [2024-09-29 16:45:05.450691] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.035 [2024-09-29 16:45:05.450988] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.036 [2024-09-29 16:45:05.451282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.036 [2024-09-29 16:45:05.451312] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.036 [2024-09-29 16:45:05.451334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.036 [2024-09-29 16:45:05.455504] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.036 [2024-09-29 16:45:05.464553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.036 [2024-09-29 16:45:05.465043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.036 [2024-09-29 16:45:05.465079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.036 [2024-09-29 16:45:05.465119] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.036 [2024-09-29 16:45:05.465418] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.036 [2024-09-29 16:45:05.465723] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.036 [2024-09-29 16:45:05.465753] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.036 [2024-09-29 16:45:05.465781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.036 [2024-09-29 16:45:05.469932] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.036 [2024-09-29 16:45:05.479239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.036 [2024-09-29 16:45:05.479719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.036 [2024-09-29 16:45:05.479761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.036 [2024-09-29 16:45:05.479788] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.036 [2024-09-29 16:45:05.480075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.036 [2024-09-29 16:45:05.480362] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.036 [2024-09-29 16:45:05.480392] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.036 [2024-09-29 16:45:05.480414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.036 [2024-09-29 16:45:05.484566] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.036 [2024-09-29 16:45:05.493835] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.036 [2024-09-29 16:45:05.494303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.036 [2024-09-29 16:45:05.494344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.036 [2024-09-29 16:45:05.494369] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.036 [2024-09-29 16:45:05.494657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.036 [2024-09-29 16:45:05.494967] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.036 [2024-09-29 16:45:05.495001] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.036 [2024-09-29 16:45:05.495023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.036 [2024-09-29 16:45:05.499160] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.036 [2024-09-29 16:45:05.508415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.036 [2024-09-29 16:45:05.508877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.036 [2024-09-29 16:45:05.508913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.036 [2024-09-29 16:45:05.508952] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.036 [2024-09-29 16:45:05.509262] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.036 [2024-09-29 16:45:05.509551] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.036 [2024-09-29 16:45:05.509582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.036 [2024-09-29 16:45:05.509603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.036 [2024-09-29 16:45:05.513767] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.036 [2024-09-29 16:45:05.523036] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.036 [2024-09-29 16:45:05.523540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.036 [2024-09-29 16:45:05.523581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.036 [2024-09-29 16:45:05.523607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.036 [2024-09-29 16:45:05.523906] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.036 [2024-09-29 16:45:05.524192] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.036 [2024-09-29 16:45:05.524223] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.036 [2024-09-29 16:45:05.524244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.036 [2024-09-29 16:45:05.528385] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.036 [2024-09-29 16:45:05.537642] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.036 [2024-09-29 16:45:05.538077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.036 [2024-09-29 16:45:05.538119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.036 [2024-09-29 16:45:05.538144] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.036 [2024-09-29 16:45:05.538430] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.036 [2024-09-29 16:45:05.538734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.036 [2024-09-29 16:45:05.538765] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.036 [2024-09-29 16:45:05.538786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.036 [2024-09-29 16:45:05.542945] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.036 [2024-09-29 16:45:05.552201] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.036 [2024-09-29 16:45:05.552689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.036 [2024-09-29 16:45:05.552730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.036 [2024-09-29 16:45:05.552756] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.036 [2024-09-29 16:45:05.553042] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.036 [2024-09-29 16:45:05.553331] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.036 [2024-09-29 16:45:05.553361] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.036 [2024-09-29 16:45:05.553383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.036 [2024-09-29 16:45:05.557518] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.036 [2024-09-29 16:45:05.566784] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.036 [2024-09-29 16:45:05.567227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.036 [2024-09-29 16:45:05.567280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.036 [2024-09-29 16:45:05.567302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.036 [2024-09-29 16:45:05.567614] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.036 [2024-09-29 16:45:05.567913] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.036 [2024-09-29 16:45:05.567944] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.036 [2024-09-29 16:45:05.567966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.036 [2024-09-29 16:45:05.572108] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.036 [2024-09-29 16:45:05.581345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.036 [2024-09-29 16:45:05.581802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.036 [2024-09-29 16:45:05.581844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.036 [2024-09-29 16:45:05.581869] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.036 [2024-09-29 16:45:05.582156] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.036 [2024-09-29 16:45:05.582443] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.036 [2024-09-29 16:45:05.582473] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.036 [2024-09-29 16:45:05.582495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.036 [2024-09-29 16:45:05.586628] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.036 [2024-09-29 16:45:05.596181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.036 [2024-09-29 16:45:05.596688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.036 [2024-09-29 16:45:05.596747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.036 [2024-09-29 16:45:05.596786] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.294 [2024-09-29 16:45:05.597077] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.294 [2024-09-29 16:45:05.597431] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.294 [2024-09-29 16:45:05.597465] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.294 [2024-09-29 16:45:05.597488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.294 [2024-09-29 16:45:05.601801] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.294 [2024-09-29 16:45:05.610852] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.294 [2024-09-29 16:45:05.611322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.294 [2024-09-29 16:45:05.611364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.294 [2024-09-29 16:45:05.611390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.294 [2024-09-29 16:45:05.611699] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.294 [2024-09-29 16:45:05.611987] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.294 [2024-09-29 16:45:05.612024] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.294 [2024-09-29 16:45:05.612047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.294 [2024-09-29 16:45:05.616202] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.294 [2024-09-29 16:45:05.625446] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.294 [2024-09-29 16:45:05.625904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.294 [2024-09-29 16:45:05.625956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.294 [2024-09-29 16:45:05.625996] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.294 [2024-09-29 16:45:05.626295] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.294 [2024-09-29 16:45:05.626584] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.294 [2024-09-29 16:45:05.626621] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.294 [2024-09-29 16:45:05.626642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.294 [2024-09-29 16:45:05.630779] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.294 [2024-09-29 16:45:05.640046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.294 [2024-09-29 16:45:05.640520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.294 [2024-09-29 16:45:05.640561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.294 [2024-09-29 16:45:05.640587] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.294 [2024-09-29 16:45:05.640887] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.294 [2024-09-29 16:45:05.641176] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.294 [2024-09-29 16:45:05.641206] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.294 [2024-09-29 16:45:05.641228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.294 [2024-09-29 16:45:05.645378] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.294 [2024-09-29 16:45:05.654616] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.294 [2024-09-29 16:45:05.655088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.294 [2024-09-29 16:45:05.655130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.294 [2024-09-29 16:45:05.655155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.294 [2024-09-29 16:45:05.655441] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.294 [2024-09-29 16:45:05.655744] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.294 [2024-09-29 16:45:05.655775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.294 [2024-09-29 16:45:05.655797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.294 [2024-09-29 16:45:05.659919] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.294 [2024-09-29 16:45:05.669161] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.294 [2024-09-29 16:45:05.669642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.294 [2024-09-29 16:45:05.669701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.294 [2024-09-29 16:45:05.669728] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.294 [2024-09-29 16:45:05.670013] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.294 [2024-09-29 16:45:05.670301] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.294 [2024-09-29 16:45:05.670331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.294 [2024-09-29 16:45:05.670352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.294 [2024-09-29 16:45:05.674494] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.294 [2024-09-29 16:45:05.683761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.294 [2024-09-29 16:45:05.684259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.294 [2024-09-29 16:45:05.684295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.294 [2024-09-29 16:45:05.684317] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.294 [2024-09-29 16:45:05.684612] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.294 [2024-09-29 16:45:05.684907] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.294 [2024-09-29 16:45:05.684938] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.294 [2024-09-29 16:45:05.684961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.294 [2024-09-29 16:45:05.689089] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.294 [2024-09-29 16:45:05.698330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.294 [2024-09-29 16:45:05.698793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.294 [2024-09-29 16:45:05.698835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.294 [2024-09-29 16:45:05.698860] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.294 [2024-09-29 16:45:05.699146] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.294 [2024-09-29 16:45:05.699433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.294 [2024-09-29 16:45:05.699463] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.294 [2024-09-29 16:45:05.699484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.294 [2024-09-29 16:45:05.703636] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.294 4279.00 IOPS, 16.71 MiB/s [2024-09-29 16:45:05.712993] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.294 [2024-09-29 16:45:05.713474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.294 [2024-09-29 16:45:05.713524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.294 [2024-09-29 16:45:05.713552] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.294 [2024-09-29 16:45:05.713829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.294 [2024-09-29 16:45:05.714110] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.294 [2024-09-29 16:45:05.714141] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.294 [2024-09-29 16:45:05.714162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.294 [2024-09-29 16:45:05.718287] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.294 [2024-09-29 16:45:05.727539] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.294 [2024-09-29 16:45:05.728026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.294 [2024-09-29 16:45:05.728068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.294 [2024-09-29 16:45:05.728093] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.294 [2024-09-29 16:45:05.728378] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.294 [2024-09-29 16:45:05.728666] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.294 [2024-09-29 16:45:05.728728] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.294 [2024-09-29 16:45:05.728752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.294 [2024-09-29 16:45:05.732925] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.294 [2024-09-29 16:45:05.742187] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.294 [2024-09-29 16:45:05.742689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.294 [2024-09-29 16:45:05.742727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.294 [2024-09-29 16:45:05.742751] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.294 [2024-09-29 16:45:05.743043] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.294 [2024-09-29 16:45:05.743332] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.294 [2024-09-29 16:45:05.743362] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.294 [2024-09-29 16:45:05.743384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.294 [2024-09-29 16:45:05.747527] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.294 [2024-09-29 16:45:05.756776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.294 [2024-09-29 16:45:05.757275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.294 [2024-09-29 16:45:05.757317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.294 [2024-09-29 16:45:05.757343] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.294 [2024-09-29 16:45:05.757629] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.294 [2024-09-29 16:45:05.757927] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.294 [2024-09-29 16:45:05.757963] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.294 [2024-09-29 16:45:05.757986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.294 [2024-09-29 16:45:05.762126] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.294 [2024-09-29 16:45:05.771368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.294 [2024-09-29 16:45:05.771823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.294 [2024-09-29 16:45:05.771864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.294 [2024-09-29 16:45:05.771890] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.294 [2024-09-29 16:45:05.772176] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.294 [2024-09-29 16:45:05.772463] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.295 [2024-09-29 16:45:05.772493] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.295 [2024-09-29 16:45:05.772515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.295 [2024-09-29 16:45:05.776645] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.295 [2024-09-29 16:45:05.785887] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.295 [2024-09-29 16:45:05.786345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.295 [2024-09-29 16:45:05.786386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.295 [2024-09-29 16:45:05.786411] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.295 [2024-09-29 16:45:05.786707] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.295 [2024-09-29 16:45:05.787001] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.295 [2024-09-29 16:45:05.787031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.295 [2024-09-29 16:45:05.787053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.295 [2024-09-29 16:45:05.791188] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.295 [2024-09-29 16:45:05.800435] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.295 [2024-09-29 16:45:05.800935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.295 [2024-09-29 16:45:05.800987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.295 [2024-09-29 16:45:05.801010] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.295 [2024-09-29 16:45:05.801327] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.295 [2024-09-29 16:45:05.801614] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.295 [2024-09-29 16:45:05.801644] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.295 [2024-09-29 16:45:05.801666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.295 [2024-09-29 16:45:05.805816] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.295 [2024-09-29 16:45:05.815072] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.295 [2024-09-29 16:45:05.815505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.295 [2024-09-29 16:45:05.815546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.295 [2024-09-29 16:45:05.815571] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.295 [2024-09-29 16:45:05.815874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.295 [2024-09-29 16:45:05.816162] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.295 [2024-09-29 16:45:05.816193] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.295 [2024-09-29 16:45:05.816215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.295 [2024-09-29 16:45:05.820353] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.295 [2024-09-29 16:45:05.829633] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.295 [2024-09-29 16:45:05.830095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.295 [2024-09-29 16:45:05.830138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.295 [2024-09-29 16:45:05.830164] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.295 [2024-09-29 16:45:05.830450] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.295 [2024-09-29 16:45:05.830750] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.295 [2024-09-29 16:45:05.830781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.295 [2024-09-29 16:45:05.830803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.295 [2024-09-29 16:45:05.834934] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.295 [2024-09-29 16:45:05.844222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.295 [2024-09-29 16:45:05.844665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.295 [2024-09-29 16:45:05.844714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.295 [2024-09-29 16:45:05.844741] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.295 [2024-09-29 16:45:05.845029] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.295 [2024-09-29 16:45:05.845316] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.295 [2024-09-29 16:45:05.845347] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.295 [2024-09-29 16:45:05.845369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.295 [2024-09-29 16:45:05.849503] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.553 [2024-09-29 16:45:05.859026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.553 [2024-09-29 16:45:05.859505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.553 [2024-09-29 16:45:05.859565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.553 [2024-09-29 16:45:05.859617] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.553 [2024-09-29 16:45:05.859928] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.553 [2024-09-29 16:45:05.860245] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.553 [2024-09-29 16:45:05.860280] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.553 [2024-09-29 16:45:05.860313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.553 [2024-09-29 16:45:05.864465] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.553 [2024-09-29 16:45:05.873462] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.553 [2024-09-29 16:45:05.873954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.553 [2024-09-29 16:45:05.873998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.553 [2024-09-29 16:45:05.874025] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.553 [2024-09-29 16:45:05.874357] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.553 [2024-09-29 16:45:05.874646] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.553 [2024-09-29 16:45:05.874690] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.553 [2024-09-29 16:45:05.874716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.553 [2024-09-29 16:45:05.878858] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.553 [2024-09-29 16:45:05.888125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.553 [2024-09-29 16:45:05.888605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.553 [2024-09-29 16:45:05.888647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.553 [2024-09-29 16:45:05.888685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.553 [2024-09-29 16:45:05.888977] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.553 [2024-09-29 16:45:05.889266] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.553 [2024-09-29 16:45:05.889296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.553 [2024-09-29 16:45:05.889318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.553 [2024-09-29 16:45:05.893460] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.553 [2024-09-29 16:45:05.902751] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.553 [2024-09-29 16:45:05.903185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.553 [2024-09-29 16:45:05.903228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.553 [2024-09-29 16:45:05.903254] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.553 [2024-09-29 16:45:05.903541] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.553 [2024-09-29 16:45:05.903842] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.553 [2024-09-29 16:45:05.903879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.553 [2024-09-29 16:45:05.903902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.553 [2024-09-29 16:45:05.908031] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.553 [2024-09-29 16:45:05.917162] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.553 [2024-09-29 16:45:05.917586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.553 [2024-09-29 16:45:05.917623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.554 [2024-09-29 16:45:05.917646] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.554 [2024-09-29 16:45:05.917934] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.554 [2024-09-29 16:45:05.918205] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.554 [2024-09-29 16:45:05.918234] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.554 [2024-09-29 16:45:05.918254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.554 [2024-09-29 16:45:05.922069] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.554 [2024-09-29 16:45:05.930962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.554 [2024-09-29 16:45:05.931545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.554 [2024-09-29 16:45:05.931582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.554 [2024-09-29 16:45:05.931621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.554 [2024-09-29 16:45:05.931916] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.554 [2024-09-29 16:45:05.932223] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.554 [2024-09-29 16:45:05.932254] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.554 [2024-09-29 16:45:05.932275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.554 [2024-09-29 16:45:05.936404] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.554 [2024-09-29 16:45:05.945538] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.554 [2024-09-29 16:45:05.946018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.554 [2024-09-29 16:45:05.946054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.554 [2024-09-29 16:45:05.946077] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.554 [2024-09-29 16:45:05.946370] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.554 [2024-09-29 16:45:05.946663] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.554 [2024-09-29 16:45:05.946708] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.554 [2024-09-29 16:45:05.946731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.554 [2024-09-29 16:45:05.950953] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.554 [2024-09-29 16:45:05.960195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.554 [2024-09-29 16:45:05.960664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.554 [2024-09-29 16:45:05.960713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.554 [2024-09-29 16:45:05.960739] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.554 [2024-09-29 16:45:05.961029] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.554 [2024-09-29 16:45:05.961320] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.554 [2024-09-29 16:45:05.961351] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.554 [2024-09-29 16:45:05.961372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.554 [2024-09-29 16:45:05.965564] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.554 [2024-09-29 16:45:05.974724] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.554 [2024-09-29 16:45:05.975171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.554 [2024-09-29 16:45:05.975212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.554 [2024-09-29 16:45:05.975238] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.554 [2024-09-29 16:45:05.975530] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.554 [2024-09-29 16:45:05.975835] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.554 [2024-09-29 16:45:05.975866] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.554 [2024-09-29 16:45:05.975888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.554 [2024-09-29 16:45:05.980057] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.554 [2024-09-29 16:45:05.989355] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.554 [2024-09-29 16:45:05.989841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.554 [2024-09-29 16:45:05.989878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.554 [2024-09-29 16:45:05.989902] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.554 [2024-09-29 16:45:05.990189] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.554 [2024-09-29 16:45:05.990479] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.554 [2024-09-29 16:45:05.990509] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.554 [2024-09-29 16:45:05.990531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.554 [2024-09-29 16:45:05.994667] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.554 [2024-09-29 16:45:06.003939] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.554 [2024-09-29 16:45:06.004421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.554 [2024-09-29 16:45:06.004473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.554 [2024-09-29 16:45:06.004503] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.554 [2024-09-29 16:45:06.004832] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.554 [2024-09-29 16:45:06.005120] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.554 [2024-09-29 16:45:06.005150] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.554 [2024-09-29 16:45:06.005172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.554 [2024-09-29 16:45:06.009320] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.554 [2024-09-29 16:45:06.018598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.554 [2024-09-29 16:45:06.019055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.554 [2024-09-29 16:45:06.019096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.554 [2024-09-29 16:45:06.019122] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.554 [2024-09-29 16:45:06.019408] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.554 [2024-09-29 16:45:06.019718] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.554 [2024-09-29 16:45:06.019750] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.554 [2024-09-29 16:45:06.019772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.554 [2024-09-29 16:45:06.023921] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.554 [2024-09-29 16:45:06.033268] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.554 [2024-09-29 16:45:06.033742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.554 [2024-09-29 16:45:06.033784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.554 [2024-09-29 16:45:06.033810] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.554 [2024-09-29 16:45:06.034097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.554 [2024-09-29 16:45:06.034385] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.554 [2024-09-29 16:45:06.034415] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.554 [2024-09-29 16:45:06.034436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.554 [2024-09-29 16:45:06.038605] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.554 [2024-09-29 16:45:06.047926] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.554 [2024-09-29 16:45:06.048370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.554 [2024-09-29 16:45:06.048411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.554 [2024-09-29 16:45:06.048437] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.554 [2024-09-29 16:45:06.048739] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.554 [2024-09-29 16:45:06.049033] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.554 [2024-09-29 16:45:06.049065] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.554 [2024-09-29 16:45:06.049087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.554 [2024-09-29 16:45:06.053262] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.554 [2024-09-29 16:45:06.062573] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.554 [2024-09-29 16:45:06.063129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.554 [2024-09-29 16:45:06.063181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.554 [2024-09-29 16:45:06.063205] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.555 [2024-09-29 16:45:06.063509] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.555 [2024-09-29 16:45:06.063812] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.555 [2024-09-29 16:45:06.063843] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.555 [2024-09-29 16:45:06.063865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.555 [2024-09-29 16:45:06.068033] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.555 [2024-09-29 16:45:06.077116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.555 [2024-09-29 16:45:06.077584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.555 [2024-09-29 16:45:06.077624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.555 [2024-09-29 16:45:06.077650] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.555 [2024-09-29 16:45:06.077946] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.555 [2024-09-29 16:45:06.078235] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.555 [2024-09-29 16:45:06.078279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.555 [2024-09-29 16:45:06.078301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.555 [2024-09-29 16:45:06.082460] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.555 [2024-09-29 16:45:06.091757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.555 [2024-09-29 16:45:06.092225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.555 [2024-09-29 16:45:06.092265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.555 [2024-09-29 16:45:06.092291] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.555 [2024-09-29 16:45:06.092577] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.555 [2024-09-29 16:45:06.092881] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.555 [2024-09-29 16:45:06.092912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.555 [2024-09-29 16:45:06.092934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.555 [2024-09-29 16:45:06.097096] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.555 [2024-09-29 16:45:06.106378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.555 [2024-09-29 16:45:06.106817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.555 [2024-09-29 16:45:06.106859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.555 [2024-09-29 16:45:06.106884] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.555 [2024-09-29 16:45:06.107171] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.555 [2024-09-29 16:45:06.107460] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.555 [2024-09-29 16:45:06.107490] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.555 [2024-09-29 16:45:06.107511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.555 [2024-09-29 16:45:06.111789] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.814 [2024-09-29 16:45:06.121158] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.814 [2024-09-29 16:45:06.121647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.814 [2024-09-29 16:45:06.121704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.814 [2024-09-29 16:45:06.121735] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.814 [2024-09-29 16:45:06.122023] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.814 [2024-09-29 16:45:06.122311] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.814 [2024-09-29 16:45:06.122341] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.814 [2024-09-29 16:45:06.122363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.814 [2024-09-29 16:45:06.126529] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.814 [2024-09-29 16:45:06.135814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.814 [2024-09-29 16:45:06.136301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.814 [2024-09-29 16:45:06.136343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.814 [2024-09-29 16:45:06.136370] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.814 [2024-09-29 16:45:06.136657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.814 [2024-09-29 16:45:06.136962] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.814 [2024-09-29 16:45:06.136993] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.814 [2024-09-29 16:45:06.137015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.814 [2024-09-29 16:45:06.141204] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.814 [2024-09-29 16:45:06.150272] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.814 [2024-09-29 16:45:06.150745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.814 [2024-09-29 16:45:06.150787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.814 [2024-09-29 16:45:06.150819] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.814 [2024-09-29 16:45:06.151108] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.814 [2024-09-29 16:45:06.151397] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.814 [2024-09-29 16:45:06.151427] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.814 [2024-09-29 16:45:06.151449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.814 [2024-09-29 16:45:06.155615] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.814 [2024-09-29 16:45:06.164892] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.814 [2024-09-29 16:45:06.165345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.814 [2024-09-29 16:45:06.165381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.814 [2024-09-29 16:45:06.165404] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.814 [2024-09-29 16:45:06.165707] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.815 [2024-09-29 16:45:06.165995] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.815 [2024-09-29 16:45:06.166025] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.815 [2024-09-29 16:45:06.166047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.815 [2024-09-29 16:45:06.170188] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.815 [2024-09-29 16:45:06.179471] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.815 [2024-09-29 16:45:06.179950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.815 [2024-09-29 16:45:06.179992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.815 [2024-09-29 16:45:06.180018] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.815 [2024-09-29 16:45:06.180306] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.815 [2024-09-29 16:45:06.180595] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.815 [2024-09-29 16:45:06.180625] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.815 [2024-09-29 16:45:06.180647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.815 [2024-09-29 16:45:06.184801] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.815 [2024-09-29 16:45:06.194079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.815 [2024-09-29 16:45:06.194617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.815 [2024-09-29 16:45:06.194659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.815 [2024-09-29 16:45:06.194695] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.815 [2024-09-29 16:45:06.194983] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.815 [2024-09-29 16:45:06.195278] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.815 [2024-09-29 16:45:06.195308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.815 [2024-09-29 16:45:06.195330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.815 [2024-09-29 16:45:06.199472] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.815 [2024-09-29 16:45:06.208735] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.815 [2024-09-29 16:45:06.209174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.815 [2024-09-29 16:45:06.209215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.815 [2024-09-29 16:45:06.209249] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.815 [2024-09-29 16:45:06.209536] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.815 [2024-09-29 16:45:06.209840] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.815 [2024-09-29 16:45:06.209871] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.815 [2024-09-29 16:45:06.209893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.815 [2024-09-29 16:45:06.214056] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.815 [2024-09-29 16:45:06.223314] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.815 [2024-09-29 16:45:06.223793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.815 [2024-09-29 16:45:06.223834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.815 [2024-09-29 16:45:06.223860] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.815 [2024-09-29 16:45:06.224145] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.815 [2024-09-29 16:45:06.224433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.815 [2024-09-29 16:45:06.224463] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.815 [2024-09-29 16:45:06.224485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.815 [2024-09-29 16:45:06.228632] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.815 [2024-09-29 16:45:06.237915] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.815 [2024-09-29 16:45:06.238370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.815 [2024-09-29 16:45:06.238412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.815 [2024-09-29 16:45:06.238439] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.815 [2024-09-29 16:45:06.238744] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.815 [2024-09-29 16:45:06.239032] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.815 [2024-09-29 16:45:06.239063] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.815 [2024-09-29 16:45:06.239084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.815 [2024-09-29 16:45:06.243258] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.815 [2024-09-29 16:45:06.252507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.815 [2024-09-29 16:45:06.252972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.815 [2024-09-29 16:45:06.253014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.815 [2024-09-29 16:45:06.253040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.815 [2024-09-29 16:45:06.253326] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.815 [2024-09-29 16:45:06.253614] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.815 [2024-09-29 16:45:06.253643] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.815 [2024-09-29 16:45:06.253665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.815 [2024-09-29 16:45:06.257821] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.815 [2024-09-29 16:45:06.267066] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.815 [2024-09-29 16:45:06.267535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.815 [2024-09-29 16:45:06.267577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.815 [2024-09-29 16:45:06.267603] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.815 [2024-09-29 16:45:06.267902] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.815 [2024-09-29 16:45:06.268192] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.815 [2024-09-29 16:45:06.268222] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.815 [2024-09-29 16:45:06.268243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.815 [2024-09-29 16:45:06.272384] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.815 [2024-09-29 16:45:06.281668] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.815 [2024-09-29 16:45:06.282198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.815 [2024-09-29 16:45:06.282256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.815 [2024-09-29 16:45:06.282281] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.815 [2024-09-29 16:45:06.282567] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.815 [2024-09-29 16:45:06.282871] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.815 [2024-09-29 16:45:06.282903] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.815 [2024-09-29 16:45:06.282924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.815 [2024-09-29 16:45:06.287072] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.815 [2024-09-29 16:45:06.296316] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.815 [2024-09-29 16:45:06.296775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.815 [2024-09-29 16:45:06.296816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.815 [2024-09-29 16:45:06.296848] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.815 [2024-09-29 16:45:06.297136] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.815 [2024-09-29 16:45:06.297425] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.815 [2024-09-29 16:45:06.297455] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.815 [2024-09-29 16:45:06.297476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.815 [2024-09-29 16:45:06.301612] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.815 [2024-09-29 16:45:06.310924] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.815 [2024-09-29 16:45:06.311404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.815 [2024-09-29 16:45:06.311445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.815 [2024-09-29 16:45:06.311471] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.815 [2024-09-29 16:45:06.311773] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.815 [2024-09-29 16:45:06.312060] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.816 [2024-09-29 16:45:06.312091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.816 [2024-09-29 16:45:06.312112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.816 [2024-09-29 16:45:06.316240] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.816 [2024-09-29 16:45:06.325505] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.816 [2024-09-29 16:45:06.326006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.816 [2024-09-29 16:45:06.326058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.816 [2024-09-29 16:45:06.326081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.816 [2024-09-29 16:45:06.326383] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.816 [2024-09-29 16:45:06.326684] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.816 [2024-09-29 16:45:06.326715] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.816 [2024-09-29 16:45:06.326736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.816 [2024-09-29 16:45:06.330875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.816 [2024-09-29 16:45:06.340144] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.816 [2024-09-29 16:45:06.340664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.816 [2024-09-29 16:45:06.340714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.816 [2024-09-29 16:45:06.340740] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.816 [2024-09-29 16:45:06.341027] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.816 [2024-09-29 16:45:06.341320] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.816 [2024-09-29 16:45:06.341351] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.816 [2024-09-29 16:45:06.341373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.816 [2024-09-29 16:45:06.345543] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.816 [2024-09-29 16:45:06.354836] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.816 [2024-09-29 16:45:06.355300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.816 [2024-09-29 16:45:06.355337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.816 [2024-09-29 16:45:06.355360] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.816 [2024-09-29 16:45:06.355647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.816 [2024-09-29 16:45:06.355967] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.816 [2024-09-29 16:45:06.355995] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.816 [2024-09-29 16:45:06.356015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.816 [2024-09-29 16:45:06.360293] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:05.816 [2024-09-29 16:45:06.369457] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:05.816 [2024-09-29 16:45:06.369987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:05.816 [2024-09-29 16:45:06.370049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:05.816 [2024-09-29 16:45:06.370076] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:05.816 [2024-09-29 16:45:06.370363] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:05.816 [2024-09-29 16:45:06.370652] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:05.816 [2024-09-29 16:45:06.370695] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:05.816 [2024-09-29 16:45:06.370735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:05.816 [2024-09-29 16:45:06.375204] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:06.076 [2024-09-29 16:45:06.384226] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:06.076 [2024-09-29 16:45:06.384711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:06.076 [2024-09-29 16:45:06.384755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:06.076 [2024-09-29 16:45:06.384782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:06.076 [2024-09-29 16:45:06.385070] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:06.076 [2024-09-29 16:45:06.385359] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:06.076 [2024-09-29 16:45:06.385390] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:06.076 [2024-09-29 16:45:06.385411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:06.076 [2024-09-29 16:45:06.389557] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:06.076 [2024-09-29 16:45:06.398823] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:06.076 [2024-09-29 16:45:06.399392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:06.076 [2024-09-29 16:45:06.399454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:06.076 [2024-09-29 16:45:06.399480] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:06.076 [2024-09-29 16:45:06.399781] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:06.076 [2024-09-29 16:45:06.400071] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:06.076 [2024-09-29 16:45:06.400101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:06.076 [2024-09-29 16:45:06.400122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:06.076 [2024-09-29 16:45:06.404262] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:06.076 [2024-09-29 16:45:06.413288] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:06.076 [2024-09-29 16:45:06.413757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:06.076 [2024-09-29 16:45:06.413810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:06.076 [2024-09-29 16:45:06.413834] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:06.076 [2024-09-29 16:45:06.414130] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:06.076 [2024-09-29 16:45:06.414419] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:06.076 [2024-09-29 16:45:06.414449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:06.076 [2024-09-29 16:45:06.414471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:06.076 [2024-09-29 16:45:06.418614] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:06.076 [2024-09-29 16:45:06.427893] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:06.076 [2024-09-29 16:45:06.428370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:06.076 [2024-09-29 16:45:06.428421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:06.076 [2024-09-29 16:45:06.428445] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:06.076 [2024-09-29 16:45:06.428761] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:06.076 [2024-09-29 16:45:06.429049] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:06.076 [2024-09-29 16:45:06.429079] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:06.076 [2024-09-29 16:45:06.429101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:06.076 [2024-09-29 16:45:06.433239] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:06.076 [2024-09-29 16:45:06.442518] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:06.076 [2024-09-29 16:45:06.442994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:06.076 [2024-09-29 16:45:06.443041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:06.076 [2024-09-29 16:45:06.443068] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:06.076 [2024-09-29 16:45:06.443354] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:06.076 [2024-09-29 16:45:06.443643] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:06.076 [2024-09-29 16:45:06.443684] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:06.076 [2024-09-29 16:45:06.443710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:06.076 [2024-09-29 16:45:06.447851] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:06.076 [2024-09-29 16:45:06.457123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:06.076 [2024-09-29 16:45:06.457629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:06.076 [2024-09-29 16:45:06.457697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:06.076 [2024-09-29 16:45:06.457725] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:06.076 [2024-09-29 16:45:06.458012] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:06.076 [2024-09-29 16:45:06.458302] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:06.077 [2024-09-29 16:45:06.458332] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:06.077 [2024-09-29 16:45:06.458353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:06.077 [2024-09-29 16:45:06.462489] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:06.077 [2024-09-29 16:45:06.471771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:06.077 [2024-09-29 16:45:06.472248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:06.077 [2024-09-29 16:45:06.472285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:06.077 [2024-09-29 16:45:06.472325] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:06.077 [2024-09-29 16:45:06.472627] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:06.077 [2024-09-29 16:45:06.472924] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:06.077 [2024-09-29 16:45:06.472956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:06.077 [2024-09-29 16:45:06.472978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:06.077 [2024-09-29 16:45:06.477109] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:06.077 [2024-09-29 16:45:06.486344] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:06.077 [2024-09-29 16:45:06.486801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:06.077 [2024-09-29 16:45:06.486843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:06.077 [2024-09-29 16:45:06.486868] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:06.077 [2024-09-29 16:45:06.487152] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:06.077 [2024-09-29 16:45:06.487447] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:06.077 [2024-09-29 16:45:06.487477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:06.077 [2024-09-29 16:45:06.487499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:06.077 [2024-09-29 16:45:06.491642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:06.077 [2024-09-29 16:45:06.500978] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:06.077 [2024-09-29 16:45:06.501414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:06.077 [2024-09-29 16:45:06.501463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:06.077 [2024-09-29 16:45:06.501489] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:06.077 [2024-09-29 16:45:06.501802] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:06.077 [2024-09-29 16:45:06.502090] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:06.077 [2024-09-29 16:45:06.502120] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:06.077 [2024-09-29 16:45:06.502142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:06.077 [2024-09-29 16:45:06.506279] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:06.077 [2024-09-29 16:45:06.515561] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:06.077 [2024-09-29 16:45:06.516023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:06.077 [2024-09-29 16:45:06.516065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:06.077 [2024-09-29 16:45:06.516091] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:06.077 [2024-09-29 16:45:06.516377] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:06.077 [2024-09-29 16:45:06.516663] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:06.077 [2024-09-29 16:45:06.516708] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:06.077 [2024-09-29 16:45:06.516735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:06.077 [2024-09-29 16:45:06.520876] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:06.077 [2024-09-29 16:45:06.530146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:06.077 [2024-09-29 16:45:06.530626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:06.077 [2024-09-29 16:45:06.530667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:06.077 [2024-09-29 16:45:06.530708] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:06.077 [2024-09-29 16:45:06.530995] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:06.077 [2024-09-29 16:45:06.531284] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:06.077 [2024-09-29 16:45:06.531314] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:06.077 [2024-09-29 16:45:06.531336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:06.077 [2024-09-29 16:45:06.535478] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:06.077 [2024-09-29 16:45:06.544772] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:06.077 [2024-09-29 16:45:06.545214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:06.077 [2024-09-29 16:45:06.545256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:06.077 [2024-09-29 16:45:06.545281] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:06.077 [2024-09-29 16:45:06.545566] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:06.077 [2024-09-29 16:45:06.545870] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:06.077 [2024-09-29 16:45:06.545902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:06.077 [2024-09-29 16:45:06.545924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:06.077 [2024-09-29 16:45:06.550066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:06.077 [2024-09-29 16:45:06.559318] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:06.077 [2024-09-29 16:45:06.559785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:06.077 [2024-09-29 16:45:06.559826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:06.077 [2024-09-29 16:45:06.559852] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:06.077 [2024-09-29 16:45:06.560138] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:06.077 [2024-09-29 16:45:06.560425] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:06.077 [2024-09-29 16:45:06.560456] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:06.077 [2024-09-29 16:45:06.560478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:06.077 [2024-09-29 16:45:06.564612] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:06.077 [2024-09-29 16:45:06.573868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:06.077 [2024-09-29 16:45:06.574344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:06.077 [2024-09-29 16:45:06.574385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:06.077 [2024-09-29 16:45:06.574411] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:06.077 [2024-09-29 16:45:06.574713] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:06.077 [2024-09-29 16:45:06.575001] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:06.077 [2024-09-29 16:45:06.575031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:06.077 [2024-09-29 16:45:06.575053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:06.077 [2024-09-29 16:45:06.579193] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:06.077 [2024-09-29 16:45:06.588462] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:06.077 [2024-09-29 16:45:06.588937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:06.077 [2024-09-29 16:45:06.588984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:06.077 [2024-09-29 16:45:06.589010] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:06.077 [2024-09-29 16:45:06.589296] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:06.077 [2024-09-29 16:45:06.589582] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:06.077 [2024-09-29 16:45:06.589612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:06.077 [2024-09-29 16:45:06.589634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:06.077 [2024-09-29 16:45:06.593782] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:06.077 [2024-09-29 16:45:06.603043] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:06.077 [2024-09-29 16:45:06.603595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:06.077 [2024-09-29 16:45:06.603684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:06.077 [2024-09-29 16:45:06.603712] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:06.077 [2024-09-29 16:45:06.603999] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:06.077 [2024-09-29 16:45:06.604288] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:06.077 [2024-09-29 16:45:06.604318] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:06.077 [2024-09-29 16:45:06.604340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:06.077 [2024-09-29 16:45:06.608479] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:06.077 [2024-09-29 16:45:06.617512] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:06.077 [2024-09-29 16:45:06.617994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:06.077 [2024-09-29 16:45:06.618036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:06.077 [2024-09-29 16:45:06.618061] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:06.078 [2024-09-29 16:45:06.618358] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:06.078 [2024-09-29 16:45:06.618647] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:06.078 [2024-09-29 16:45:06.618689] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:06.078 [2024-09-29 16:45:06.618714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:06.078 [2024-09-29 16:45:06.622875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:06.078 [2024-09-29 16:45:06.632165] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:06.078 [2024-09-29 16:45:06.632689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:06.078 [2024-09-29 16:45:06.632732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:06.078 [2024-09-29 16:45:06.632777] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:06.078 [2024-09-29 16:45:06.633134] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:06.078 [2024-09-29 16:45:06.633450] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:06.078 [2024-09-29 16:45:06.633487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:06.078 [2024-09-29 16:45:06.633511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:06.337 [2024-09-29 16:45:06.637969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:06.337 [2024-09-29 16:45:06.646981] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:06.337 [2024-09-29 16:45:06.647544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:06.337 [2024-09-29 16:45:06.647589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:06.337 [2024-09-29 16:45:06.647617] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:06.337 [2024-09-29 16:45:06.647918] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:06.337 [2024-09-29 16:45:06.648207] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:06.337 [2024-09-29 16:45:06.648237] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:06.337 [2024-09-29 16:45:06.648259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:06.337 [2024-09-29 16:45:06.652395] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:06.337 [2024-09-29 16:45:06.661654] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:06.337 [2024-09-29 16:45:06.662133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:06.337 [2024-09-29 16:45:06.662175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:06.337 [2024-09-29 16:45:06.662200] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:06.337 [2024-09-29 16:45:06.662489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:06.337 [2024-09-29 16:45:06.662793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:06.337 [2024-09-29 16:45:06.662825] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:06.337 [2024-09-29 16:45:06.662846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:06.337 [2024-09-29 16:45:06.666984] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:06.337 [2024-09-29 16:45:06.676239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:06.337 [2024-09-29 16:45:06.676706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:06.337 [2024-09-29 16:45:06.676748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:06.337 [2024-09-29 16:45:06.676774] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:06.337 [2024-09-29 16:45:06.677061] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:06.337 [2024-09-29 16:45:06.677349] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:06.337 [2024-09-29 16:45:06.677379] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:06.337 [2024-09-29 16:45:06.677407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:06.337 [2024-09-29 16:45:06.681536] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:06.337 [2024-09-29 16:45:06.690781] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:06.337 [2024-09-29 16:45:06.691356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:06.337 [2024-09-29 16:45:06.691415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:06.337 [2024-09-29 16:45:06.691440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:06.337 [2024-09-29 16:45:06.691765] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:06.337 [2024-09-29 16:45:06.692054] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:06.337 [2024-09-29 16:45:06.692084] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:06.337 [2024-09-29 16:45:06.692106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:06.337 [2024-09-29 16:45:06.696224] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:06.337 [2024-09-29 16:45:06.705219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:06.337 [2024-09-29 16:45:06.705686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:06.337 [2024-09-29 16:45:06.705728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:06.337 [2024-09-29 16:45:06.705770] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:06.337 [2024-09-29 16:45:06.706056] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:06.337 [2024-09-29 16:45:06.706344] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:06.337 [2024-09-29 16:45:06.706374] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:06.337 [2024-09-29 16:45:06.706396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:06.337 3209.25 IOPS, 12.54 MiB/s [2024-09-29 16:45:06.712351] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:06.337 [2024-09-29 16:45:06.719666] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.337 [2024-09-29 16:45:06.720155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.337 [2024-09-29 16:45:06.720197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.338 [2024-09-29 16:45:06.720223] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.338 [2024-09-29 16:45:06.720509] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.338 [2024-09-29 16:45:06.720812] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.338 [2024-09-29 16:45:06.720843] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.338 [2024-09-29 16:45:06.720865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.338 [2024-09-29 16:45:06.725001] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.338 [2024-09-29 16:45:06.734273] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.338 [2024-09-29 16:45:06.734728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.338 [2024-09-29 16:45:06.734770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.338 [2024-09-29 16:45:06.734796] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.338 [2024-09-29 16:45:06.735084] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.338 [2024-09-29 16:45:06.735374] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.338 [2024-09-29 16:45:06.735404] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.338 [2024-09-29 16:45:06.735425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.338 [2024-09-29 16:45:06.739559] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.338 [2024-09-29 16:45:06.748842] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.338 [2024-09-29 16:45:06.749310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.338 [2024-09-29 16:45:06.749351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.338 [2024-09-29 16:45:06.749377] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.338 [2024-09-29 16:45:06.749662] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.338 [2024-09-29 16:45:06.749962] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.338 [2024-09-29 16:45:06.749993] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.338 [2024-09-29 16:45:06.750014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.338 [2024-09-29 16:45:06.754160] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.338 [2024-09-29 16:45:06.763296] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.338 [2024-09-29 16:45:06.763755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.338 [2024-09-29 16:45:06.763797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.338 [2024-09-29 16:45:06.763823] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.338 [2024-09-29 16:45:06.764111] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.338 [2024-09-29 16:45:06.764401] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.338 [2024-09-29 16:45:06.764430] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.338 [2024-09-29 16:45:06.764452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.338 [2024-09-29 16:45:06.768600] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.338 [2024-09-29 16:45:06.777889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.338 [2024-09-29 16:45:06.778332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.338 [2024-09-29 16:45:06.778373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.338 [2024-09-29 16:45:06.778398] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.338 [2024-09-29 16:45:06.778715] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.338 [2024-09-29 16:45:06.779004] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.338 [2024-09-29 16:45:06.779035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.338 [2024-09-29 16:45:06.779056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.338 [2024-09-29 16:45:06.783196] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.338 [2024-09-29 16:45:06.792463] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.338 [2024-09-29 16:45:06.792889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.338 [2024-09-29 16:45:06.792930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.338 [2024-09-29 16:45:06.792956] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.338 [2024-09-29 16:45:06.793242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.338 [2024-09-29 16:45:06.793530] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.338 [2024-09-29 16:45:06.793560] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.338 [2024-09-29 16:45:06.793581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.338 [2024-09-29 16:45:06.797726] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.338 [2024-09-29 16:45:06.806984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.338 [2024-09-29 16:45:06.807438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.338 [2024-09-29 16:45:06.807478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.338 [2024-09-29 16:45:06.807504] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.338 [2024-09-29 16:45:06.807804] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.338 [2024-09-29 16:45:06.808092] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.338 [2024-09-29 16:45:06.808123] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.338 [2024-09-29 16:45:06.808145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.338 [2024-09-29 16:45:06.812294] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.338 [2024-09-29 16:45:06.821581] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.338 [2024-09-29 16:45:06.822054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.338 [2024-09-29 16:45:06.822095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.338 [2024-09-29 16:45:06.822121] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.338 [2024-09-29 16:45:06.822405] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.338 [2024-09-29 16:45:06.822707] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.338 [2024-09-29 16:45:06.822738] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.338 [2024-09-29 16:45:06.822766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.338 [2024-09-29 16:45:06.826908] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.338 [2024-09-29 16:45:06.836181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.338 [2024-09-29 16:45:06.836653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.338 [2024-09-29 16:45:06.836702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.338 [2024-09-29 16:45:06.836728] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.338 [2024-09-29 16:45:06.837016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.338 [2024-09-29 16:45:06.837305] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.338 [2024-09-29 16:45:06.837335] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.338 [2024-09-29 16:45:06.837356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.338 [2024-09-29 16:45:06.841496] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.338 [2024-09-29 16:45:06.850783] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.338 [2024-09-29 16:45:06.851226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.338 [2024-09-29 16:45:06.851267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.338 [2024-09-29 16:45:06.851294] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.338 [2024-09-29 16:45:06.851581] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.338 [2024-09-29 16:45:06.851886] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.338 [2024-09-29 16:45:06.851917] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.338 [2024-09-29 16:45:06.851939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.338 [2024-09-29 16:45:06.856292] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.338 [2024-09-29 16:45:06.865309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.338 [2024-09-29 16:45:06.865814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.338 [2024-09-29 16:45:06.865857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.338 [2024-09-29 16:45:06.865884] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.339 [2024-09-29 16:45:06.866172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.339 [2024-09-29 16:45:06.866459] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.339 [2024-09-29 16:45:06.866489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.339 [2024-09-29 16:45:06.866511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.339 [2024-09-29 16:45:06.870645] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.339 [2024-09-29 16:45:06.879908] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.339 [2024-09-29 16:45:06.880401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.339 [2024-09-29 16:45:06.880443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.339 [2024-09-29 16:45:06.880468] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.339 [2024-09-29 16:45:06.880766] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.339 [2024-09-29 16:45:06.881053] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.339 [2024-09-29 16:45:06.881083] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.339 [2024-09-29 16:45:06.881105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.339 [2024-09-29 16:45:06.885248] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.339 [2024-09-29 16:45:06.894497] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.339 [2024-09-29 16:45:06.895022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.339 [2024-09-29 16:45:06.895066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.339 [2024-09-29 16:45:06.895093] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.339 [2024-09-29 16:45:06.895409] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.339 [2024-09-29 16:45:06.895727] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.339 [2024-09-29 16:45:06.895759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.339 [2024-09-29 16:45:06.895782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.598 [2024-09-29 16:45:06.900230] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.598 [2024-09-29 16:45:06.908973] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.598 [2024-09-29 16:45:06.909449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.598 [2024-09-29 16:45:06.909492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.598 [2024-09-29 16:45:06.909519] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.598 [2024-09-29 16:45:06.909820] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.598 [2024-09-29 16:45:06.910123] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.598 [2024-09-29 16:45:06.910153] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.598 [2024-09-29 16:45:06.910175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.598 [2024-09-29 16:45:06.914311] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.598 [2024-09-29 16:45:06.923579] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.598 [2024-09-29 16:45:06.924072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.598 [2024-09-29 16:45:06.924114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.598 [2024-09-29 16:45:06.924140] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.599 [2024-09-29 16:45:06.924432] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.599 [2024-09-29 16:45:06.924734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.599 [2024-09-29 16:45:06.924765] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.599 [2024-09-29 16:45:06.924787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.599 [2024-09-29 16:45:06.928940] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.599 [2024-09-29 16:45:06.938235] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.599 [2024-09-29 16:45:06.938727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.599 [2024-09-29 16:45:06.938770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.599 [2024-09-29 16:45:06.938795] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.599 [2024-09-29 16:45:06.939082] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.599 [2024-09-29 16:45:06.939375] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.599 [2024-09-29 16:45:06.939405] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.599 [2024-09-29 16:45:06.939426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.599 [2024-09-29 16:45:06.943592] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.599 [2024-09-29 16:45:06.952760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.599 [2024-09-29 16:45:06.953253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.599 [2024-09-29 16:45:06.953295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.599 [2024-09-29 16:45:06.953321] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.599 [2024-09-29 16:45:06.953611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.599 [2024-09-29 16:45:06.953911] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.599 [2024-09-29 16:45:06.953942] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.599 [2024-09-29 16:45:06.953964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.599 [2024-09-29 16:45:06.958155] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.599 [2024-09-29 16:45:06.967256] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.599 [2024-09-29 16:45:06.967731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.599 [2024-09-29 16:45:06.967773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.599 [2024-09-29 16:45:06.967799] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.599 [2024-09-29 16:45:06.968088] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.599 [2024-09-29 16:45:06.968377] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.599 [2024-09-29 16:45:06.968408] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.599 [2024-09-29 16:45:06.968435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.599 [2024-09-29 16:45:06.972611] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.599 [2024-09-29 16:45:06.981930] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.599 [2024-09-29 16:45:06.982378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.599 [2024-09-29 16:45:06.982419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.599 [2024-09-29 16:45:06.982445] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.599 [2024-09-29 16:45:06.982746] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.599 [2024-09-29 16:45:06.983037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.599 [2024-09-29 16:45:06.983068] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.599 [2024-09-29 16:45:06.983090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.599 [2024-09-29 16:45:06.987242] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.599 [2024-09-29 16:45:06.996550] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.599 [2024-09-29 16:45:06.997041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.599 [2024-09-29 16:45:06.997083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.599 [2024-09-29 16:45:06.997109] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.599 [2024-09-29 16:45:06.997396] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.599 [2024-09-29 16:45:06.997697] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.599 [2024-09-29 16:45:06.997728] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.599 [2024-09-29 16:45:06.997750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.599 [2024-09-29 16:45:07.001913] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.599 [2024-09-29 16:45:07.011235] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.599 [2024-09-29 16:45:07.011700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.599 [2024-09-29 16:45:07.011742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.599 [2024-09-29 16:45:07.011768] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.599 [2024-09-29 16:45:07.012055] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.599 [2024-09-29 16:45:07.012345] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.599 [2024-09-29 16:45:07.012375] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.599 [2024-09-29 16:45:07.012397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.599 [2024-09-29 16:45:07.016563] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.599 [2024-09-29 16:45:07.025871] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.599 [2024-09-29 16:45:07.026351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.599 [2024-09-29 16:45:07.026392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.599 [2024-09-29 16:45:07.026418] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.599 [2024-09-29 16:45:07.026718] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.599 [2024-09-29 16:45:07.027008] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.599 [2024-09-29 16:45:07.027039] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.599 [2024-09-29 16:45:07.027069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.599 [2024-09-29 16:45:07.031217] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.599 [2024-09-29 16:45:07.040484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.599 [2024-09-29 16:45:07.040980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.599 [2024-09-29 16:45:07.041021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.599 [2024-09-29 16:45:07.041047] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.599 [2024-09-29 16:45:07.041333] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.599 [2024-09-29 16:45:07.041621] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.599 [2024-09-29 16:45:07.041651] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.599 [2024-09-29 16:45:07.041686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.599 [2024-09-29 16:45:07.045863] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.599 [2024-09-29 16:45:07.055141] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.599 [2024-09-29 16:45:07.055608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.599 [2024-09-29 16:45:07.055649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.599 [2024-09-29 16:45:07.055687] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.599 [2024-09-29 16:45:07.055978] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.599 [2024-09-29 16:45:07.056267] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.599 [2024-09-29 16:45:07.056298] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.599 [2024-09-29 16:45:07.056319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.599 [2024-09-29 16:45:07.060470] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.599 [2024-09-29 16:45:07.069737] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.599 [2024-09-29 16:45:07.070216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.599 [2024-09-29 16:45:07.070258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.599 [2024-09-29 16:45:07.070284] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.600 [2024-09-29 16:45:07.070577] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.600 [2024-09-29 16:45:07.070880] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.600 [2024-09-29 16:45:07.070911] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.600 [2024-09-29 16:45:07.070933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.600 [2024-09-29 16:45:07.075085] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.600 [2024-09-29 16:45:07.084415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.600 [2024-09-29 16:45:07.084867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.600 [2024-09-29 16:45:07.084909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.600 [2024-09-29 16:45:07.084936] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.600 [2024-09-29 16:45:07.085224] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.600 [2024-09-29 16:45:07.085512] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.600 [2024-09-29 16:45:07.085542] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.600 [2024-09-29 16:45:07.085565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.600 [2024-09-29 16:45:07.089726] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.600 [2024-09-29 16:45:07.099073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.600 [2024-09-29 16:45:07.099509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.600 [2024-09-29 16:45:07.099549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.600 [2024-09-29 16:45:07.099575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.600 [2024-09-29 16:45:07.099902] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.600 [2024-09-29 16:45:07.100191] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.600 [2024-09-29 16:45:07.100222] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.600 [2024-09-29 16:45:07.100244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.600 [2024-09-29 16:45:07.104401] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.600 [2024-09-29 16:45:07.113771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.600 [2024-09-29 16:45:07.114221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.600 [2024-09-29 16:45:07.114262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.600 [2024-09-29 16:45:07.114288] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.600 [2024-09-29 16:45:07.114574] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.600 [2024-09-29 16:45:07.114891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.600 [2024-09-29 16:45:07.114931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.600 [2024-09-29 16:45:07.114969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.600 [2024-09-29 16:45:07.119123] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.600 [2024-09-29 16:45:07.128287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.600 [2024-09-29 16:45:07.128738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.600 [2024-09-29 16:45:07.128779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.600 [2024-09-29 16:45:07.128805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.600 [2024-09-29 16:45:07.129103] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.600 [2024-09-29 16:45:07.129391] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.600 [2024-09-29 16:45:07.129421] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.600 [2024-09-29 16:45:07.129443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.600 [2024-09-29 16:45:07.133680] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.600 [2024-09-29 16:45:07.142828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.600 [2024-09-29 16:45:07.143277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.600 [2024-09-29 16:45:07.143318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.600 [2024-09-29 16:45:07.143344] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.600 [2024-09-29 16:45:07.143631] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.600 [2024-09-29 16:45:07.143933] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.600 [2024-09-29 16:45:07.143974] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.600 [2024-09-29 16:45:07.143995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.600 [2024-09-29 16:45:07.148230] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.600 [2024-09-29 16:45:07.157607] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.600 [2024-09-29 16:45:07.158160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.600 [2024-09-29 16:45:07.158211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.600 [2024-09-29 16:45:07.158239] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.600 [2024-09-29 16:45:07.158570] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.600 [2024-09-29 16:45:07.158896] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.600 [2024-09-29 16:45:07.158931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.600 [2024-09-29 16:45:07.158954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.864 [2024-09-29 16:45:07.163464] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.864 [2024-09-29 16:45:07.172208] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.864 [2024-09-29 16:45:07.172707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.864 [2024-09-29 16:45:07.172751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.864 [2024-09-29 16:45:07.172777] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.864 [2024-09-29 16:45:07.173067] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.864 [2024-09-29 16:45:07.173359] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.864 [2024-09-29 16:45:07.173390] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.864 [2024-09-29 16:45:07.173412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.864 [2024-09-29 16:45:07.177687] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.864 [2024-09-29 16:45:07.186975] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.864 [2024-09-29 16:45:07.187442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.864 [2024-09-29 16:45:07.187484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.864 [2024-09-29 16:45:07.187510] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.864 [2024-09-29 16:45:07.187819] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.864 [2024-09-29 16:45:07.188112] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.864 [2024-09-29 16:45:07.188142] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.864 [2024-09-29 16:45:07.188163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.864 [2024-09-29 16:45:07.192441] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.864 [2024-09-29 16:45:07.201749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.864 [2024-09-29 16:45:07.202297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.864 [2024-09-29 16:45:07.202356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.864 [2024-09-29 16:45:07.202382] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.864 [2024-09-29 16:45:07.202689] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.864 [2024-09-29 16:45:07.202985] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.864 [2024-09-29 16:45:07.203016] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.864 [2024-09-29 16:45:07.203037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.864 [2024-09-29 16:45:07.207352] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.864 [2024-09-29 16:45:07.216326] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.864 [2024-09-29 16:45:07.216787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.864 [2024-09-29 16:45:07.216828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.864 [2024-09-29 16:45:07.216854] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.864 [2024-09-29 16:45:07.217152] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.864 [2024-09-29 16:45:07.217444] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.864 [2024-09-29 16:45:07.217474] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.864 [2024-09-29 16:45:07.217496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.864 [2024-09-29 16:45:07.221773] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.864 [2024-09-29 16:45:07.231043] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.864 [2024-09-29 16:45:07.231531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.864 [2024-09-29 16:45:07.231573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.864 [2024-09-29 16:45:07.231599] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.864 [2024-09-29 16:45:07.231909] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.864 [2024-09-29 16:45:07.232204] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.864 [2024-09-29 16:45:07.232235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.864 [2024-09-29 16:45:07.232257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.864 [2024-09-29 16:45:07.236520] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.864 [2024-09-29 16:45:07.245791] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.864 [2024-09-29 16:45:07.246247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.864 [2024-09-29 16:45:07.246288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.864 [2024-09-29 16:45:07.246314] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.864 [2024-09-29 16:45:07.246603] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.864 [2024-09-29 16:45:07.246904] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.864 [2024-09-29 16:45:07.246935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.864 [2024-09-29 16:45:07.246957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.864 [2024-09-29 16:45:07.251199] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.864 [2024-09-29 16:45:07.260409] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.864 [2024-09-29 16:45:07.260866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.864 [2024-09-29 16:45:07.260907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.864 [2024-09-29 16:45:07.260934] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.864 [2024-09-29 16:45:07.261236] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.864 [2024-09-29 16:45:07.261525] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.864 [2024-09-29 16:45:07.261565] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.864 [2024-09-29 16:45:07.261596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.864 [2024-09-29 16:45:07.265834] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.864 [2024-09-29 16:45:07.274981] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.864 [2024-09-29 16:45:07.275444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.864 [2024-09-29 16:45:07.275486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.864 [2024-09-29 16:45:07.275513] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.864 [2024-09-29 16:45:07.275814] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.864 [2024-09-29 16:45:07.276108] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.864 [2024-09-29 16:45:07.276139] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.864 [2024-09-29 16:45:07.276161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.864 [2024-09-29 16:45:07.280339] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.864 [2024-09-29 16:45:07.289681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.864 [2024-09-29 16:45:07.290127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.864 [2024-09-29 16:45:07.290168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.864 [2024-09-29 16:45:07.290194] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.864 [2024-09-29 16:45:07.290482] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.865 [2024-09-29 16:45:07.290783] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.865 [2024-09-29 16:45:07.290814] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.865 [2024-09-29 16:45:07.290837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.865 [2024-09-29 16:45:07.295050] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.865 [2024-09-29 16:45:07.304169] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.865 [2024-09-29 16:45:07.304650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.865 [2024-09-29 16:45:07.304701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.865 [2024-09-29 16:45:07.304728] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.865 [2024-09-29 16:45:07.305017] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.865 [2024-09-29 16:45:07.305314] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.865 [2024-09-29 16:45:07.305345] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.865 [2024-09-29 16:45:07.305366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.865 [2024-09-29 16:45:07.309547] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.865 [2024-09-29 16:45:07.318665] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.865 [2024-09-29 16:45:07.319119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.865 [2024-09-29 16:45:07.319159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.865 [2024-09-29 16:45:07.319184] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.865 [2024-09-29 16:45:07.319472] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.865 [2024-09-29 16:45:07.319779] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.865 [2024-09-29 16:45:07.319810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.865 [2024-09-29 16:45:07.319832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.865 [2024-09-29 16:45:07.324003] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.865 [2024-09-29 16:45:07.333388] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.865 [2024-09-29 16:45:07.333863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.865 [2024-09-29 16:45:07.333905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.865 [2024-09-29 16:45:07.333931] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.865 [2024-09-29 16:45:07.334219] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.865 [2024-09-29 16:45:07.334508] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.865 [2024-09-29 16:45:07.334539] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.865 [2024-09-29 16:45:07.334561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.865 [2024-09-29 16:45:07.338750] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.865 [2024-09-29 16:45:07.348108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.865 [2024-09-29 16:45:07.348602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.865 [2024-09-29 16:45:07.348643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.865 [2024-09-29 16:45:07.348669] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.865 [2024-09-29 16:45:07.348972] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.865 [2024-09-29 16:45:07.349264] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.865 [2024-09-29 16:45:07.349295] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.865 [2024-09-29 16:45:07.349317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.865 [2024-09-29 16:45:07.353483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.865 [2024-09-29 16:45:07.362816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.865 [2024-09-29 16:45:07.363283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.865 [2024-09-29 16:45:07.363324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.865 [2024-09-29 16:45:07.363356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.865 [2024-09-29 16:45:07.363646] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.865 [2024-09-29 16:45:07.363947] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.865 [2024-09-29 16:45:07.363978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.865 [2024-09-29 16:45:07.364008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.865 [2024-09-29 16:45:07.368174] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.865 [2024-09-29 16:45:07.377465] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.865 [2024-09-29 16:45:07.377943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.865 [2024-09-29 16:45:07.377994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.865 [2024-09-29 16:45:07.378019] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.865 [2024-09-29 16:45:07.378305] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.865 [2024-09-29 16:45:07.378594] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.865 [2024-09-29 16:45:07.378624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.865 [2024-09-29 16:45:07.378646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.865 [2024-09-29 16:45:07.382827] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.865 [2024-09-29 16:45:07.392147] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.865 [2024-09-29 16:45:07.392633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.865 [2024-09-29 16:45:07.392683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.865 [2024-09-29 16:45:07.392712] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.865 [2024-09-29 16:45:07.393000] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.865 [2024-09-29 16:45:07.393290] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.865 [2024-09-29 16:45:07.393320] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.865 [2024-09-29 16:45:07.393341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.865 [2024-09-29 16:45:07.397505] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.865 [2024-09-29 16:45:07.406811] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:06.865 [2024-09-29 16:45:07.407290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:06.865 [2024-09-29 16:45:07.407331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:06.865 [2024-09-29 16:45:07.407357] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:06.865 [2024-09-29 16:45:07.407643] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:06.865 [2024-09-29 16:45:07.407962] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:06.865 [2024-09-29 16:45:07.407999] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:06.865 [2024-09-29 16:45:07.408022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:06.865 [2024-09-29 16:45:07.412195] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:06.865 [2024-09-29 16:45:07.421881] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:06.865 [2024-09-29 16:45:07.422507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:06.865 [2024-09-29 16:45:07.422570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:06.865 [2024-09-29 16:45:07.422617] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.175 [2024-09-29 16:45:07.423060] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.175 [2024-09-29 16:45:07.423474] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.175 [2024-09-29 16:45:07.423522] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.175 [2024-09-29 16:45:07.423561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.175 [2024-09-29 16:45:07.429683] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.175 [2024-09-29 16:45:07.437169] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.175 [2024-09-29 16:45:07.437652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.175 [2024-09-29 16:45:07.437708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.175 [2024-09-29 16:45:07.437737] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.175 [2024-09-29 16:45:07.438028] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.175 [2024-09-29 16:45:07.438320] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.175 [2024-09-29 16:45:07.438352] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.175 [2024-09-29 16:45:07.438374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.175 [2024-09-29 16:45:07.442597] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.175 [2024-09-29 16:45:07.451722] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.175 [2024-09-29 16:45:07.452222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.176 [2024-09-29 16:45:07.452264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.176 [2024-09-29 16:45:07.452290] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.176 [2024-09-29 16:45:07.452578] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.176 [2024-09-29 16:45:07.452887] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.176 [2024-09-29 16:45:07.452919] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.176 [2024-09-29 16:45:07.452941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.176 [2024-09-29 16:45:07.457114] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.176 [2024-09-29 16:45:07.466254] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.176 [2024-09-29 16:45:07.466747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.176 [2024-09-29 16:45:07.466790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.176 [2024-09-29 16:45:07.466816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.176 [2024-09-29 16:45:07.467104] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.176 [2024-09-29 16:45:07.467393] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.176 [2024-09-29 16:45:07.467424] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.176 [2024-09-29 16:45:07.467445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.176 [2024-09-29 16:45:07.471635] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.176 [2024-09-29 16:45:07.480812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.176 [2024-09-29 16:45:07.481286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.176 [2024-09-29 16:45:07.481328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.176 [2024-09-29 16:45:07.481353] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.176 [2024-09-29 16:45:07.481643] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.176 [2024-09-29 16:45:07.481955] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.176 [2024-09-29 16:45:07.481985] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.176 [2024-09-29 16:45:07.482007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.176 [2024-09-29 16:45:07.486244] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.176 [2024-09-29 16:45:07.495457] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.176 [2024-09-29 16:45:07.495903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.176 [2024-09-29 16:45:07.495945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.176 [2024-09-29 16:45:07.495975] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.176 [2024-09-29 16:45:07.496264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.176 [2024-09-29 16:45:07.496565] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.176 [2024-09-29 16:45:07.496595] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.176 [2024-09-29 16:45:07.496617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.176 [2024-09-29 16:45:07.500833] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.176 [2024-09-29 16:45:07.509973] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.176 [2024-09-29 16:45:07.510475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.176 [2024-09-29 16:45:07.510517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.176 [2024-09-29 16:45:07.510552] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.176 [2024-09-29 16:45:07.510862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.176 [2024-09-29 16:45:07.511154] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.176 [2024-09-29 16:45:07.511184] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.176 [2024-09-29 16:45:07.511206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.176 [2024-09-29 16:45:07.515413] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.176 [2024-09-29 16:45:07.524667] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.176 [2024-09-29 16:45:07.525123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.176 [2024-09-29 16:45:07.525165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.176 [2024-09-29 16:45:07.525190] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.176 [2024-09-29 16:45:07.525488] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.176 [2024-09-29 16:45:07.525801] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.176 [2024-09-29 16:45:07.525832] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.176 [2024-09-29 16:45:07.525854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.176 [2024-09-29 16:45:07.530081] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.176 [2024-09-29 16:45:07.539213] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.176 [2024-09-29 16:45:07.539685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.176 [2024-09-29 16:45:07.539740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.176 [2024-09-29 16:45:07.539767] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.176 [2024-09-29 16:45:07.540056] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.176 [2024-09-29 16:45:07.540348] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.176 [2024-09-29 16:45:07.540377] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.176 [2024-09-29 16:45:07.540399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.176 [2024-09-29 16:45:07.544606] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.176 [2024-09-29 16:45:07.553815] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.176 [2024-09-29 16:45:07.554269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.176 [2024-09-29 16:45:07.554311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.176 [2024-09-29 16:45:07.554337] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.176 [2024-09-29 16:45:07.554625] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.176 [2024-09-29 16:45:07.554936] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.176 [2024-09-29 16:45:07.554981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.176 [2024-09-29 16:45:07.555004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.176 [2024-09-29 16:45:07.559189] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.176 [2024-09-29 16:45:07.568221] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.176 [2024-09-29 16:45:07.568717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.176 [2024-09-29 16:45:07.568754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.176 [2024-09-29 16:45:07.568777] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.176 [2024-09-29 16:45:07.569066] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.176 [2024-09-29 16:45:07.569318] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.176 [2024-09-29 16:45:07.569343] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.176 [2024-09-29 16:45:07.569366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.176 [2024-09-29 16:45:07.573082] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.176 [2024-09-29 16:45:07.582362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.176 [2024-09-29 16:45:07.582792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.176 [2024-09-29 16:45:07.582830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.176 [2024-09-29 16:45:07.582853] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.176 [2024-09-29 16:45:07.583146] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.176 [2024-09-29 16:45:07.583386] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.176 [2024-09-29 16:45:07.583411] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.176 [2024-09-29 16:45:07.583428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.176 [2024-09-29 16:45:07.587124] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.176 [2024-09-29 16:45:07.596732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.177 [2024-09-29 16:45:07.597276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.177 [2024-09-29 16:45:07.597326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.177 [2024-09-29 16:45:07.597350] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.177 [2024-09-29 16:45:07.597651] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.177 [2024-09-29 16:45:07.597939] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.177 [2024-09-29 16:45:07.597989] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.177 [2024-09-29 16:45:07.598007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.177 [2024-09-29 16:45:07.602277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.177 [2024-09-29 16:45:07.611226] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.177 [2024-09-29 16:45:07.611749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.177 [2024-09-29 16:45:07.611787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.177 [2024-09-29 16:45:07.611810] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.177 [2024-09-29 16:45:07.612105] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.177 [2024-09-29 16:45:07.612416] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.177 [2024-09-29 16:45:07.612447] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.177 [2024-09-29 16:45:07.612468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.177 [2024-09-29 16:45:07.616756] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.177 [2024-09-29 16:45:07.626027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.177 [2024-09-29 16:45:07.626531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.177 [2024-09-29 16:45:07.626572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.177 [2024-09-29 16:45:07.626598] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.177 [2024-09-29 16:45:07.626910] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.177 [2024-09-29 16:45:07.627215] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.177 [2024-09-29 16:45:07.627246] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.177 [2024-09-29 16:45:07.627268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.177 [2024-09-29 16:45:07.631421] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.177 [2024-09-29 16:45:07.640627] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.177 [2024-09-29 16:45:07.641118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.177 [2024-09-29 16:45:07.641179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.177 [2024-09-29 16:45:07.641205] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.177 [2024-09-29 16:45:07.641493] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.177 [2024-09-29 16:45:07.641805] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.177 [2024-09-29 16:45:07.641832] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.177 [2024-09-29 16:45:07.641850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.177 [2024-09-29 16:45:07.645995] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.177 [2024-09-29 16:45:07.655043] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.177 [2024-09-29 16:45:07.655537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.177 [2024-09-29 16:45:07.655578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.177 [2024-09-29 16:45:07.655610] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.177 [2024-09-29 16:45:07.655937] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.177 [2024-09-29 16:45:07.656264] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.177 [2024-09-29 16:45:07.656295] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.177 [2024-09-29 16:45:07.656316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.177 [2024-09-29 16:45:07.660469] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.177 [2024-09-29 16:45:07.669588] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.177 [2024-09-29 16:45:07.670076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.177 [2024-09-29 16:45:07.670117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.177 [2024-09-29 16:45:07.670143] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.177 [2024-09-29 16:45:07.670429] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.177 [2024-09-29 16:45:07.670730] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.177 [2024-09-29 16:45:07.670760] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.177 [2024-09-29 16:45:07.670782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.177 [2024-09-29 16:45:07.674961] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.177 [2024-09-29 16:45:07.684311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.177 [2024-09-29 16:45:07.684756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.177 [2024-09-29 16:45:07.684798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.177 [2024-09-29 16:45:07.684824] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.177 [2024-09-29 16:45:07.685113] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.177 [2024-09-29 16:45:07.685402] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.177 [2024-09-29 16:45:07.685433] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.177 [2024-09-29 16:45:07.685454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.177 [2024-09-29 16:45:07.689625] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.177 [2024-09-29 16:45:07.698986] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.177 [2024-09-29 16:45:07.699486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.177 [2024-09-29 16:45:07.699545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.177 [2024-09-29 16:45:07.699571] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.177 [2024-09-29 16:45:07.699874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.177 [2024-09-29 16:45:07.700164] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.177 [2024-09-29 16:45:07.700200] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.177 [2024-09-29 16:45:07.700223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.177 [2024-09-29 16:45:07.704390] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.177 2567.40 IOPS, 10.03 MiB/s [2024-09-29 16:45:07.717389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.177 [2024-09-29 16:45:07.717865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.177 [2024-09-29 16:45:07.717910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.177 [2024-09-29 16:45:07.717938] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.177 [2024-09-29 16:45:07.718227] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.177 [2024-09-29 16:45:07.718518] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.177 [2024-09-29 16:45:07.718549] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.177 [2024-09-29 16:45:07.718571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.460 [2024-09-29 16:45:07.724353] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.460 [2024-09-29 16:45:07.731951] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.460 [2024-09-29 16:45:07.732494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.460 [2024-09-29 16:45:07.732557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.460 [2024-09-29 16:45:07.732584] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.460 [2024-09-29 16:45:07.732886] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.460 [2024-09-29 16:45:07.733176] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.460 [2024-09-29 16:45:07.733207] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.460 [2024-09-29 16:45:07.733229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.460 [2024-09-29 16:45:07.737448] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.460 [2024-09-29 16:45:07.748497] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.460 [2024-09-29 16:45:07.749112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.460 [2024-09-29 16:45:07.749171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.460 [2024-09-29 16:45:07.749219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.460 [2024-09-29 16:45:07.749660] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.460 [2024-09-29 16:45:07.750108] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.460 [2024-09-29 16:45:07.750153] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.460 [2024-09-29 16:45:07.750192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.460 [2024-09-29 16:45:07.754642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.460 [2024-09-29 16:45:07.763193] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.460 [2024-09-29 16:45:07.763685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.460 [2024-09-29 16:45:07.763741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.460 [2024-09-29 16:45:07.763770] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.460 [2024-09-29 16:45:07.764073] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.460 [2024-09-29 16:45:07.764373] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.460 [2024-09-29 16:45:07.764404] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.460 [2024-09-29 16:45:07.764425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.460 [2024-09-29 16:45:07.768699] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.460 [2024-09-29 16:45:07.777910] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.460 [2024-09-29 16:45:07.778352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.460 [2024-09-29 16:45:07.778393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.460 [2024-09-29 16:45:07.778420] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.460 [2024-09-29 16:45:07.778721] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.460 [2024-09-29 16:45:07.779011] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.460 [2024-09-29 16:45:07.779042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.460 [2024-09-29 16:45:07.779063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.460 [2024-09-29 16:45:07.783413] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.460 [2024-09-29 16:45:07.792685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.460 [2024-09-29 16:45:07.793206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.460 [2024-09-29 16:45:07.793247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.460 [2024-09-29 16:45:07.793273] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.460 [2024-09-29 16:45:07.793571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.460 [2024-09-29 16:45:07.793872] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.460 [2024-09-29 16:45:07.793903] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.460 [2024-09-29 16:45:07.793925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.460 [2024-09-29 16:45:07.798154] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.460 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3324600 Killed "${NVMF_APP[@]}" "$@" 00:37:07.460 16:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:37:07.460 16:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:07.461 16:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:37:07.461 16:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:07.461 16:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:07.461 [2024-09-29 16:45:07.807389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.461 [2024-09-29 16:45:07.807832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.461 [2024-09-29 16:45:07.807874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.461 [2024-09-29 16:45:07.807900] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.461 [2024-09-29 16:45:07.808199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.461 [2024-09-29 16:45:07.808499] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.461 [2024-09-29 16:45:07.808540] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.461 [2024-09-29 16:45:07.808562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:37:07.461 16:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # nvmfpid=3326220 00:37:07.461 16:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:07.461 16:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # waitforlisten 3326220 00:37:07.461 16:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 3326220 ']' 00:37:07.461 16:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:07.461 16:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:07.461 16:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:07.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:07.461 16:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:07.461 16:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:07.461 [2024-09-29 16:45:07.812863] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.461 [2024-09-29 16:45:07.822169] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.461 [2024-09-29 16:45:07.822691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.461 [2024-09-29 16:45:07.822736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.461 [2024-09-29 16:45:07.822763] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.461 [2024-09-29 16:45:07.823066] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.461 [2024-09-29 16:45:07.823361] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.461 [2024-09-29 16:45:07.823392] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.461 [2024-09-29 16:45:07.823414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.461 [2024-09-29 16:45:07.827753] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.461 [2024-09-29 16:45:07.836537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.461 [2024-09-29 16:45:07.836981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.461 [2024-09-29 16:45:07.837039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.461 [2024-09-29 16:45:07.837062] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.461 [2024-09-29 16:45:07.837350] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.461 [2024-09-29 16:45:07.837627] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.461 [2024-09-29 16:45:07.837686] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.461 [2024-09-29 16:45:07.837709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.461 [2024-09-29 16:45:07.841480] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.461 [2024-09-29 16:45:07.850697] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.461 [2024-09-29 16:45:07.851203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.461 [2024-09-29 16:45:07.851241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.461 [2024-09-29 16:45:07.851264] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.461 [2024-09-29 16:45:07.851553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.461 [2024-09-29 16:45:07.851848] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.461 [2024-09-29 16:45:07.851878] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.461 [2024-09-29 16:45:07.851897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.461 [2024-09-29 16:45:07.855709] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.461 [2024-09-29 16:45:07.864934] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.461 [2024-09-29 16:45:07.865384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.461 [2024-09-29 16:45:07.865421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.461 [2024-09-29 16:45:07.865445] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.461 [2024-09-29 16:45:07.865748] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.461 [2024-09-29 16:45:07.866033] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.461 [2024-09-29 16:45:07.866060] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.461 [2024-09-29 16:45:07.866079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.461 [2024-09-29 16:45:07.870152] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.461 [2024-09-29 16:45:07.878926] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.461 [2024-09-29 16:45:07.879411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.461 [2024-09-29 16:45:07.879448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.461 [2024-09-29 16:45:07.879488] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.461 [2024-09-29 16:45:07.879812] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.461 [2024-09-29 16:45:07.880083] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.461 [2024-09-29 16:45:07.880109] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.461 [2024-09-29 16:45:07.880127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.461 [2024-09-29 16:45:07.883727] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.461 [2024-09-29 16:45:07.893368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.461 [2024-09-29 16:45:07.893860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.461 [2024-09-29 16:45:07.893903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.461 [2024-09-29 16:45:07.893928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.461 [2024-09-29 16:45:07.894228] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.461 [2024-09-29 16:45:07.894472] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.461 [2024-09-29 16:45:07.894496] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.461 [2024-09-29 16:45:07.894514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.461 [2024-09-29 16:45:07.898240] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:07.461 [2024-09-29 16:45:07.899544] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:37:07.461 [2024-09-29 16:45:07.899690] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:07.461 [2024-09-29 16:45:07.907312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.461 [2024-09-29 16:45:07.907719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.461 [2024-09-29 16:45:07.907757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.461 [2024-09-29 16:45:07.907780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.461 [2024-09-29 16:45:07.908071] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.461 [2024-09-29 16:45:07.908314] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.461 [2024-09-29 16:45:07.908339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.461 [2024-09-29 16:45:07.908357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.461 [2024-09-29 16:45:07.912070] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.461 [2024-09-29 16:45:07.921409] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.461 [2024-09-29 16:45:07.921864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.461 [2024-09-29 16:45:07.921902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.461 [2024-09-29 16:45:07.921926] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.461 [2024-09-29 16:45:07.922207] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.461 [2024-09-29 16:45:07.922451] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.462 [2024-09-29 16:45:07.922477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.462 [2024-09-29 16:45:07.922495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.462 [2024-09-29 16:45:07.926182] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.462 [2024-09-29 16:45:07.935760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.462 [2024-09-29 16:45:07.936230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.462 [2024-09-29 16:45:07.936281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.462 [2024-09-29 16:45:07.936305] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.462 [2024-09-29 16:45:07.936594] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.462 [2024-09-29 16:45:07.936888] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.462 [2024-09-29 16:45:07.936917] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.462 [2024-09-29 16:45:07.936937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.462 [2024-09-29 16:45:07.940831] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.462 [2024-09-29 16:45:07.950045] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.462 [2024-09-29 16:45:07.950446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.462 [2024-09-29 16:45:07.950487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.462 [2024-09-29 16:45:07.950509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.462 [2024-09-29 16:45:07.950789] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.462 [2024-09-29 16:45:07.951088] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.462 [2024-09-29 16:45:07.951125] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.462 [2024-09-29 16:45:07.951159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.462 [2024-09-29 16:45:07.955055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.462 [2024-09-29 16:45:07.964387] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.462 [2024-09-29 16:45:07.964786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.462 [2024-09-29 16:45:07.964824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.462 [2024-09-29 16:45:07.964848] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.462 [2024-09-29 16:45:07.965128] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.462 [2024-09-29 16:45:07.965394] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.462 [2024-09-29 16:45:07.965420] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.462 [2024-09-29 16:45:07.965444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.462 [2024-09-29 16:45:07.969217] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.462 [2024-09-29 16:45:07.978736] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.462 [2024-09-29 16:45:07.979201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.462 [2024-09-29 16:45:07.979236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.462 [2024-09-29 16:45:07.979258] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.462 [2024-09-29 16:45:07.979549] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.462 [2024-09-29 16:45:07.979831] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.462 [2024-09-29 16:45:07.979860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.462 [2024-09-29 16:45:07.979880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.462 [2024-09-29 16:45:07.983681] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.462 [2024-09-29 16:45:07.992769] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.462 [2024-09-29 16:45:07.993198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.462 [2024-09-29 16:45:07.993232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.462 [2024-09-29 16:45:07.993269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.462 [2024-09-29 16:45:07.993546] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.462 [2024-09-29 16:45:07.993825] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.462 [2024-09-29 16:45:07.993854] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.462 [2024-09-29 16:45:07.993873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.462 [2024-09-29 16:45:07.997527] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.462 [2024-09-29 16:45:08.006793] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.462 [2024-09-29 16:45:08.007245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.462 [2024-09-29 16:45:08.007296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.462 [2024-09-29 16:45:08.007319] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.462 [2024-09-29 16:45:08.007605] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.462 [2024-09-29 16:45:08.007892] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.462 [2024-09-29 16:45:08.007921] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.462 [2024-09-29 16:45:08.007942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.462 [2024-09-29 16:45:08.011717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.462 [2024-09-29 16:45:08.021340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.722 [2024-09-29 16:45:08.021843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.722 [2024-09-29 16:45:08.021895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.722 [2024-09-29 16:45:08.021924] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.722 [2024-09-29 16:45:08.022313] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.722 [2024-09-29 16:45:08.022625] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.722 [2024-09-29 16:45:08.022679] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.722 [2024-09-29 16:45:08.022704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.722 [2024-09-29 16:45:08.026519] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.722 [2024-09-29 16:45:08.035338] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.722 [2024-09-29 16:45:08.035798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.722 [2024-09-29 16:45:08.035837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.722 [2024-09-29 16:45:08.035861] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.722 [2024-09-29 16:45:08.036152] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.722 [2024-09-29 16:45:08.036405] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.722 [2024-09-29 16:45:08.036431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.722 [2024-09-29 16:45:08.036449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.722 [2024-09-29 16:45:08.040144] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.722 [2024-09-29 16:45:08.049238] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.722 [2024-09-29 16:45:08.049703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.722 [2024-09-29 16:45:08.049742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.722 [2024-09-29 16:45:08.049766] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.722 [2024-09-29 16:45:08.050057] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.722 [2024-09-29 16:45:08.050305] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.722 [2024-09-29 16:45:08.050330] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.722 [2024-09-29 16:45:08.050349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.722 [2024-09-29 16:45:08.053942] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:07.722 [2024-09-29 16:45:08.054003] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.722 [2024-09-29 16:45:08.063325] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.722 [2024-09-29 16:45:08.063905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.722 [2024-09-29 16:45:08.063947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.722 [2024-09-29 16:45:08.063983] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.722 [2024-09-29 16:45:08.064292] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.722 [2024-09-29 16:45:08.064579] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.722 [2024-09-29 16:45:08.064607] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.722 [2024-09-29 16:45:08.064630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.722 [2024-09-29 16:45:08.068385] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.722 [2024-09-29 16:45:08.077443] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.722 [2024-09-29 16:45:08.078008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.722 [2024-09-29 16:45:08.078051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.722 [2024-09-29 16:45:08.078080] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.722 [2024-09-29 16:45:08.078378] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.722 [2024-09-29 16:45:08.078639] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.722 [2024-09-29 16:45:08.078667] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.722 [2024-09-29 16:45:08.078701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.722 [2024-09-29 16:45:08.082415] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.722 [2024-09-29 16:45:08.091811] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.722 [2024-09-29 16:45:08.092251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.722 [2024-09-29 16:45:08.092304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.722 [2024-09-29 16:45:08.092338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.722 [2024-09-29 16:45:08.092626] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.722 [2024-09-29 16:45:08.092923] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.722 [2024-09-29 16:45:08.092951] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.722 [2024-09-29 16:45:08.092985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.722 [2024-09-29 16:45:08.097316] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.722 [2024-09-29 16:45:08.106740] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.722 [2024-09-29 16:45:08.107219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.722 [2024-09-29 16:45:08.107256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.722 [2024-09-29 16:45:08.107279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.722 [2024-09-29 16:45:08.107584] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.722 [2024-09-29 16:45:08.107871] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.722 [2024-09-29 16:45:08.107898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.722 [2024-09-29 16:45:08.107923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.722 [2024-09-29 16:45:08.112115] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.722 [2024-09-29 16:45:08.121281] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.722 [2024-09-29 16:45:08.121798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.722 [2024-09-29 16:45:08.121840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.722 [2024-09-29 16:45:08.121867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.722 [2024-09-29 16:45:08.122167] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.722 [2024-09-29 16:45:08.122408] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.722 [2024-09-29 16:45:08.122434] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.722 [2024-09-29 16:45:08.122453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.722 [2024-09-29 16:45:08.126636] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.722 [2024-09-29 16:45:08.136019] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.722 [2024-09-29 16:45:08.136513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.722 [2024-09-29 16:45:08.136552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.722 [2024-09-29 16:45:08.136576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.722 [2024-09-29 16:45:08.136854] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.722 [2024-09-29 16:45:08.137162] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.722 [2024-09-29 16:45:08.137190] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.722 [2024-09-29 16:45:08.137227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.723 [2024-09-29 16:45:08.141384] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.723 [2024-09-29 16:45:08.150865] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.723 [2024-09-29 16:45:08.151381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.723 [2024-09-29 16:45:08.151418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.723 [2024-09-29 16:45:08.151442] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.723 [2024-09-29 16:45:08.151775] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.723 [2024-09-29 16:45:08.152077] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.723 [2024-09-29 16:45:08.152108] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.723 [2024-09-29 16:45:08.152130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.723 [2024-09-29 16:45:08.156404] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.723 [2024-09-29 16:45:08.165574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.723 [2024-09-29 16:45:08.166078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.723 [2024-09-29 16:45:08.166131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.723 [2024-09-29 16:45:08.166169] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.723 [2024-09-29 16:45:08.166470] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.723 [2024-09-29 16:45:08.166788] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.723 [2024-09-29 16:45:08.166815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.723 [2024-09-29 16:45:08.166834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.723 [2024-09-29 16:45:08.170993] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.723 [2024-09-29 16:45:08.180043] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.723 [2024-09-29 16:45:08.180523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.723 [2024-09-29 16:45:08.180564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.723 [2024-09-29 16:45:08.180590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.723 [2024-09-29 16:45:08.180905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.723 [2024-09-29 16:45:08.181211] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.723 [2024-09-29 16:45:08.181242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.723 [2024-09-29 16:45:08.181264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.723 [2024-09-29 16:45:08.185455] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.723 [2024-09-29 16:45:08.194685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.723 [2024-09-29 16:45:08.195404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.723 [2024-09-29 16:45:08.195449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.723 [2024-09-29 16:45:08.195479] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.723 [2024-09-29 16:45:08.195819] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.723 [2024-09-29 16:45:08.196126] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.723 [2024-09-29 16:45:08.196159] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.723 [2024-09-29 16:45:08.196187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.723 [2024-09-29 16:45:08.200407] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.723 [2024-09-29 16:45:08.209398] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.723 [2024-09-29 16:45:08.209940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.723 [2024-09-29 16:45:08.209995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.723 [2024-09-29 16:45:08.210022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.723 [2024-09-29 16:45:08.210324] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.723 [2024-09-29 16:45:08.210622] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.723 [2024-09-29 16:45:08.210653] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.723 [2024-09-29 16:45:08.210687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.723 [2024-09-29 16:45:08.214919] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.723 [2024-09-29 16:45:08.224023] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.723 [2024-09-29 16:45:08.224486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.723 [2024-09-29 16:45:08.224529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.723 [2024-09-29 16:45:08.224556] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.723 [2024-09-29 16:45:08.224903] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.723 [2024-09-29 16:45:08.225208] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.723 [2024-09-29 16:45:08.225239] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.723 [2024-09-29 16:45:08.225262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.723 [2024-09-29 16:45:08.229600] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.723 [2024-09-29 16:45:08.238666] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.723 [2024-09-29 16:45:08.239195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.723 [2024-09-29 16:45:08.239238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.723 [2024-09-29 16:45:08.239265] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.723 [2024-09-29 16:45:08.239557] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.723 [2024-09-29 16:45:08.239862] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.723 [2024-09-29 16:45:08.239894] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.723 [2024-09-29 16:45:08.239916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.723 [2024-09-29 16:45:08.244158] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.723 [2024-09-29 16:45:08.253280] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.723 [2024-09-29 16:45:08.253751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.723 [2024-09-29 16:45:08.253790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.723 [2024-09-29 16:45:08.253814] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.723 [2024-09-29 16:45:08.254115] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.723 [2024-09-29 16:45:08.254410] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.723 [2024-09-29 16:45:08.254447] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.723 [2024-09-29 16:45:08.254470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.723 [2024-09-29 16:45:08.258641] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.723 [2024-09-29 16:45:08.267942] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.723 [2024-09-29 16:45:08.268448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.723 [2024-09-29 16:45:08.268489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.723 [2024-09-29 16:45:08.268515] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.723 [2024-09-29 16:45:08.268838] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.723 [2024-09-29 16:45:08.269139] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.723 [2024-09-29 16:45:08.269171] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.723 [2024-09-29 16:45:08.269193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.723 [2024-09-29 16:45:08.273386] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.723 [2024-09-29 16:45:08.282887] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.723 [2024-09-29 16:45:08.283527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.723 [2024-09-29 16:45:08.283572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.723 [2024-09-29 16:45:08.283599] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.983 [2024-09-29 16:45:08.284045] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.983 [2024-09-29 16:45:08.284361] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.983 [2024-09-29 16:45:08.284394] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.983 [2024-09-29 16:45:08.284416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.983 [2024-09-29 16:45:08.288846] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.983 [2024-09-29 16:45:08.297535] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.983 [2024-09-29 16:45:08.298002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.983 [2024-09-29 16:45:08.298056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.983 [2024-09-29 16:45:08.298081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.983 [2024-09-29 16:45:08.298384] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.983 [2024-09-29 16:45:08.298691] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.983 [2024-09-29 16:45:08.298734] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.983 [2024-09-29 16:45:08.298752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.983 [2024-09-29 16:45:08.302889] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.983 [2024-09-29 16:45:08.312065] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.983 [2024-09-29 16:45:08.312573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.983 [2024-09-29 16:45:08.312611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.983 [2024-09-29 16:45:08.312635] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.983 [2024-09-29 16:45:08.312932] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.983 [2024-09-29 16:45:08.313046] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:07.983 [2024-09-29 16:45:08.313100] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:07.983 [2024-09-29 16:45:08.313126] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:07.983 [2024-09-29 16:45:08.313151] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:07.983 [2024-09-29 16:45:08.313170] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:07.983 [2024-09-29 16:45:08.313243] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.983 [2024-09-29 16:45:08.313273] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.983 [2024-09-29 16:45:08.313306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:37:07.983 [2024-09-29 16:45:08.313306] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:37:07.983 [2024-09-29 16:45:08.313432] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:07.983 [2024-09-29 16:45:08.313434] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:37:07.983 [2024-09-29 16:45:08.317154] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:07.983 [2024-09-29 16:45:08.326323] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.983 [2024-09-29 16:45:08.327003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.983 [2024-09-29 16:45:08.327053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.983 [2024-09-29 16:45:08.327083] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.983 [2024-09-29 16:45:08.327375] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.983 [2024-09-29 16:45:08.327642] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.983 [2024-09-29 16:45:08.327697] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.983 [2024-09-29 16:45:08.327734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.983 [2024-09-29 16:45:08.331601] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.983 [2024-09-29 16:45:08.340716] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.983 [2024-09-29 16:45:08.341166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.983 [2024-09-29 16:45:08.341206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.983 [2024-09-29 16:45:08.341232] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.984 [2024-09-29 16:45:08.341514] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.984 [2024-09-29 16:45:08.341787] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.984 [2024-09-29 16:45:08.341821] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.984 [2024-09-29 16:45:08.341843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.984 [2024-09-29 16:45:08.345657] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.984 [2024-09-29 16:45:08.354907] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.984 [2024-09-29 16:45:08.355330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.984 [2024-09-29 16:45:08.355367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.984 [2024-09-29 16:45:08.355391] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.984 [2024-09-29 16:45:08.355721] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.984 [2024-09-29 16:45:08.356003] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.984 [2024-09-29 16:45:08.356031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.984 [2024-09-29 16:45:08.356050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.984 [2024-09-29 16:45:08.359815] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.984 [2024-09-29 16:45:08.368928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.984 [2024-09-29 16:45:08.369337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.984 [2024-09-29 16:45:08.369374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.984 [2024-09-29 16:45:08.369398] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.984 [2024-09-29 16:45:08.369684] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.984 [2024-09-29 16:45:08.369949] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.984 [2024-09-29 16:45:08.369976] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.984 [2024-09-29 16:45:08.369999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.984 [2024-09-29 16:45:08.373846] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.984 [2024-09-29 16:45:08.383135] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.984 [2024-09-29 16:45:08.383552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.984 [2024-09-29 16:45:08.383589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.984 [2024-09-29 16:45:08.383612] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.984 [2024-09-29 16:45:08.383885] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.984 [2024-09-29 16:45:08.384160] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.984 [2024-09-29 16:45:08.384187] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.984 [2024-09-29 16:45:08.384206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.984 [2024-09-29 16:45:08.388066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.984 [2024-09-29 16:45:08.397433] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.984 [2024-09-29 16:45:08.398100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.984 [2024-09-29 16:45:08.398149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.984 [2024-09-29 16:45:08.398180] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.984 [2024-09-29 16:45:08.398477] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.984 [2024-09-29 16:45:08.398780] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.984 [2024-09-29 16:45:08.398810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.984 [2024-09-29 16:45:08.398837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.984 [2024-09-29 16:45:08.402903] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.984 [2024-09-29 16:45:08.412030] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.984 [2024-09-29 16:45:08.412667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.984 [2024-09-29 16:45:08.412723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.984 [2024-09-29 16:45:08.412753] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.984 [2024-09-29 16:45:08.413048] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.984 [2024-09-29 16:45:08.413318] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.984 [2024-09-29 16:45:08.413347] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.984 [2024-09-29 16:45:08.413371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.984 [2024-09-29 16:45:08.417426] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.984 [2024-09-29 16:45:08.426375] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.984 [2024-09-29 16:45:08.426923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.984 [2024-09-29 16:45:08.426978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.984 [2024-09-29 16:45:08.427005] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.984 [2024-09-29 16:45:08.427292] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.984 [2024-09-29 16:45:08.427555] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.984 [2024-09-29 16:45:08.427583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.984 [2024-09-29 16:45:08.427606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.984 [2024-09-29 16:45:08.431408] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.984 [2024-09-29 16:45:08.440736] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.984 [2024-09-29 16:45:08.441169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.984 [2024-09-29 16:45:08.441206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.984 [2024-09-29 16:45:08.441236] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.984 [2024-09-29 16:45:08.441514] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.984 [2024-09-29 16:45:08.441784] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.984 [2024-09-29 16:45:08.441811] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.984 [2024-09-29 16:45:08.441831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.984 [2024-09-29 16:45:08.445621] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.984 [2024-09-29 16:45:08.454860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.984 [2024-09-29 16:45:08.455281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.984 [2024-09-29 16:45:08.455318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.984 [2024-09-29 16:45:08.455342] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.984 [2024-09-29 16:45:08.455619] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.984 [2024-09-29 16:45:08.455905] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.984 [2024-09-29 16:45:08.455934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.984 [2024-09-29 16:45:08.455954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.984 [2024-09-29 16:45:08.459774] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.984 [2024-09-29 16:45:08.468994] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.984 [2024-09-29 16:45:08.469417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.984 [2024-09-29 16:45:08.469455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.984 [2024-09-29 16:45:08.469478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.984 [2024-09-29 16:45:08.469769] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.984 [2024-09-29 16:45:08.470038] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.984 [2024-09-29 16:45:08.470064] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.984 [2024-09-29 16:45:08.470084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.984 [2024-09-29 16:45:08.473895] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.984 [2024-09-29 16:45:08.483266] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.984 [2024-09-29 16:45:08.483705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.984 [2024-09-29 16:45:08.483753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.984 [2024-09-29 16:45:08.483777] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.984 [2024-09-29 16:45:08.484051] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.985 [2024-09-29 16:45:08.484305] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.985 [2024-09-29 16:45:08.484337] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.985 [2024-09-29 16:45:08.484357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.985 [2024-09-29 16:45:08.488147] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.985 [2024-09-29 16:45:08.497452] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.985 [2024-09-29 16:45:08.497857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.985 [2024-09-29 16:45:08.497893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.985 [2024-09-29 16:45:08.497917] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.985 [2024-09-29 16:45:08.498192] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.985 [2024-09-29 16:45:08.498446] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.985 [2024-09-29 16:45:08.498472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.985 [2024-09-29 16:45:08.498491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.985 [2024-09-29 16:45:08.502237] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.985 [2024-09-29 16:45:08.511566] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.985 [2024-09-29 16:45:08.512020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.985 [2024-09-29 16:45:08.512058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.985 [2024-09-29 16:45:08.512081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.985 [2024-09-29 16:45:08.512354] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.985 [2024-09-29 16:45:08.512608] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.985 [2024-09-29 16:45:08.512634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.985 [2024-09-29 16:45:08.512668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.985 [2024-09-29 16:45:08.516441] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.985 [2024-09-29 16:45:08.525707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.985 [2024-09-29 16:45:08.526094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.985 [2024-09-29 16:45:08.526132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.985 [2024-09-29 16:45:08.526155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.985 [2024-09-29 16:45:08.526428] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.985 [2024-09-29 16:45:08.526693] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.985 [2024-09-29 16:45:08.526727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.985 [2024-09-29 16:45:08.526746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:07.985 [2024-09-29 16:45:08.530483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:07.985 [2024-09-29 16:45:08.539919] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:07.985 [2024-09-29 16:45:08.540540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:07.985 [2024-09-29 16:45:08.540590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:07.985 [2024-09-29 16:45:08.540620] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:07.985 [2024-09-29 16:45:08.540911] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:07.985 [2024-09-29 16:45:08.541191] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:07.985 [2024-09-29 16:45:08.541231] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:07.985 [2024-09-29 16:45:08.541272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.245 [2024-09-29 16:45:08.545621] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.245 [2024-09-29 16:45:08.554348] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.245 [2024-09-29 16:45:08.554951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.245 [2024-09-29 16:45:08.555000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.245 [2024-09-29 16:45:08.555031] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:08.245 [2024-09-29 16:45:08.555330] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.245 [2024-09-29 16:45:08.555588] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.245 [2024-09-29 16:45:08.555616] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.245 [2024-09-29 16:45:08.555640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.245 [2024-09-29 16:45:08.559502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.245 [2024-09-29 16:45:08.568708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.245 [2024-09-29 16:45:08.569217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.245 [2024-09-29 16:45:08.569255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.245 [2024-09-29 16:45:08.569295] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:08.245 [2024-09-29 16:45:08.569585] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.245 [2024-09-29 16:45:08.569892] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.245 [2024-09-29 16:45:08.569922] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.245 [2024-09-29 16:45:08.569942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.245 [2024-09-29 16:45:08.573798] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.245 [2024-09-29 16:45:08.582864] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.245 [2024-09-29 16:45:08.583319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.245 [2024-09-29 16:45:08.583357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.245 [2024-09-29 16:45:08.583387] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:08.245 [2024-09-29 16:45:08.583669] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.245 [2024-09-29 16:45:08.583967] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.245 [2024-09-29 16:45:08.584017] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.245 [2024-09-29 16:45:08.584036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.245 [2024-09-29 16:45:08.587891] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.245 [2024-09-29 16:45:08.597215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.245 [2024-09-29 16:45:08.597623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.245 [2024-09-29 16:45:08.597660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.245 [2024-09-29 16:45:08.597695] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:08.245 [2024-09-29 16:45:08.597987] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.245 [2024-09-29 16:45:08.598244] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.245 [2024-09-29 16:45:08.598271] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.245 [2024-09-29 16:45:08.598290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.246 [2024-09-29 16:45:08.602146] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.246 [2024-09-29 16:45:08.611405] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.246 [2024-09-29 16:45:08.611812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.246 [2024-09-29 16:45:08.611849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.246 [2024-09-29 16:45:08.611873] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:08.246 [2024-09-29 16:45:08.612148] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.246 [2024-09-29 16:45:08.612412] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.246 [2024-09-29 16:45:08.612438] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.246 [2024-09-29 16:45:08.612457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.246 [2024-09-29 16:45:08.616229] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.246 [2024-09-29 16:45:08.625615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.246 [2024-09-29 16:45:08.626079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.246 [2024-09-29 16:45:08.626117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.246 [2024-09-29 16:45:08.626141] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:08.246 [2024-09-29 16:45:08.626413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.246 [2024-09-29 16:45:08.626685] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.246 [2024-09-29 16:45:08.626712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.246 [2024-09-29 16:45:08.626731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.246 [2024-09-29 16:45:08.630509] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.246 [2024-09-29 16:45:08.639842] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.246 [2024-09-29 16:45:08.640248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.246 [2024-09-29 16:45:08.640284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.246 [2024-09-29 16:45:08.640308] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:08.246 [2024-09-29 16:45:08.640577] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.246 [2024-09-29 16:45:08.640849] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.246 [2024-09-29 16:45:08.640877] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.246 [2024-09-29 16:45:08.640897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.246 [2024-09-29 16:45:08.644691] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.246 [2024-09-29 16:45:08.654155] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.246 [2024-09-29 16:45:08.654605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.246 [2024-09-29 16:45:08.654644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.246 [2024-09-29 16:45:08.654686] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:08.246 [2024-09-29 16:45:08.654953] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.246 [2024-09-29 16:45:08.655251] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.246 [2024-09-29 16:45:08.655278] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.246 [2024-09-29 16:45:08.655297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.246 [2024-09-29 16:45:08.659093] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.246 [2024-09-29 16:45:08.668379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.246 [2024-09-29 16:45:08.668885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.246 [2024-09-29 16:45:08.668926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.246 [2024-09-29 16:45:08.668950] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:08.246 [2024-09-29 16:45:08.669243] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.246 [2024-09-29 16:45:08.669515] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.246 [2024-09-29 16:45:08.669542] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.246 [2024-09-29 16:45:08.669563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.246 [2024-09-29 16:45:08.673512] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.246 [2024-09-29 16:45:08.682768] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.246 [2024-09-29 16:45:08.683197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.246 [2024-09-29 16:45:08.683234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.246 [2024-09-29 16:45:08.683268] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:08.246 [2024-09-29 16:45:08.683545] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.246 [2024-09-29 16:45:08.683822] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.246 [2024-09-29 16:45:08.683850] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.246 [2024-09-29 16:45:08.683870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.246 [2024-09-29 16:45:08.687755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.246 [2024-09-29 16:45:08.697067] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.246 [2024-09-29 16:45:08.697508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.246 [2024-09-29 16:45:08.697545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.246 [2024-09-29 16:45:08.697569] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:08.246 [2024-09-29 16:45:08.697854] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.246 [2024-09-29 16:45:08.698149] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.246 [2024-09-29 16:45:08.698176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.246 [2024-09-29 16:45:08.698196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.246 [2024-09-29 16:45:08.702089] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.246 [2024-09-29 16:45:08.711287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.246 [2024-09-29 16:45:08.711712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.246 [2024-09-29 16:45:08.711750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.246 [2024-09-29 16:45:08.711774] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:08.246 [2024-09-29 16:45:08.712062] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.246 [2024-09-29 16:45:08.712354] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.246 [2024-09-29 16:45:08.712382] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.246 [2024-09-29 16:45:08.712402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.246 2139.50 IOPS, 8.36 MiB/s [2024-09-29 16:45:08.717867] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.246 [2024-09-29 16:45:08.725583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.246 [2024-09-29 16:45:08.726015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.246 [2024-09-29 16:45:08.726057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.246 [2024-09-29 16:45:08.726081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:08.246 [2024-09-29 16:45:08.726354] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.246 [2024-09-29 16:45:08.726609] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.246 [2024-09-29 16:45:08.726636] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.246 [2024-09-29 16:45:08.726655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.246 [2024-09-29 16:45:08.730500] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.246 [2024-09-29 16:45:08.739898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.246 [2024-09-29 16:45:08.740373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.246 [2024-09-29 16:45:08.740410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.246 [2024-09-29 16:45:08.740433] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:08.246 [2024-09-29 16:45:08.740717] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.246 [2024-09-29 16:45:08.740982] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.246 [2024-09-29 16:45:08.741008] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.246 [2024-09-29 16:45:08.741028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.246 [2024-09-29 16:45:08.744796] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.247 [2024-09-29 16:45:08.754032] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.247 [2024-09-29 16:45:08.754465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.247 [2024-09-29 16:45:08.754503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.247 [2024-09-29 16:45:08.754525] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:08.247 [2024-09-29 16:45:08.754809] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.247 [2024-09-29 16:45:08.755063] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.247 [2024-09-29 16:45:08.755089] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.247 [2024-09-29 16:45:08.755109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.247 [2024-09-29 16:45:08.758858] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.247 [2024-09-29 16:45:08.768256] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.247 [2024-09-29 16:45:08.768667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.247 [2024-09-29 16:45:08.768712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.247 [2024-09-29 16:45:08.768735] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:08.247 [2024-09-29 16:45:08.769016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.247 [2024-09-29 16:45:08.769286] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.247 [2024-09-29 16:45:08.769318] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.247 [2024-09-29 16:45:08.769337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.247 [2024-09-29 16:45:08.773133] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.247 [2024-09-29 16:45:08.782383] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.247 [2024-09-29 16:45:08.782811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.247 [2024-09-29 16:45:08.782848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.247 [2024-09-29 16:45:08.782872] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:08.247 [2024-09-29 16:45:08.783145] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.247 [2024-09-29 16:45:08.783403] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.247 [2024-09-29 16:45:08.783429] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.247 [2024-09-29 16:45:08.783448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.247 [2024-09-29 16:45:08.787156] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.247 [2024-09-29 16:45:08.796512] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.247 [2024-09-29 16:45:08.796934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.247 [2024-09-29 16:45:08.796972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.247 [2024-09-29 16:45:08.796997] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:08.247 [2024-09-29 16:45:08.797270] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.247 [2024-09-29 16:45:08.797523] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.247 [2024-09-29 16:45:08.797550] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.247 [2024-09-29 16:45:08.797569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.247 [2024-09-29 16:45:08.801370] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.507 [2024-09-29 16:45:08.811075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.507 [2024-09-29 16:45:08.811560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.507 [2024-09-29 16:45:08.811600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.507 [2024-09-29 16:45:08.811625] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:08.507 [2024-09-29 16:45:08.811924] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.507 [2024-09-29 16:45:08.812261] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.507 [2024-09-29 16:45:08.812306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.507 [2024-09-29 16:45:08.812327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.507 [2024-09-29 16:45:08.816200] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.507 [2024-09-29 16:45:08.825299] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.507 [2024-09-29 16:45:08.825732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.507 [2024-09-29 16:45:08.825771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.507 [2024-09-29 16:45:08.825794] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:08.507 [2024-09-29 16:45:08.826073] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.507 [2024-09-29 16:45:08.826326] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.507 [2024-09-29 16:45:08.826353] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.507 [2024-09-29 16:45:08.826373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.507 [2024-09-29 16:45:08.830200] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.507 [2024-09-29 16:45:08.839342] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.507 [2024-09-29 16:45:08.839789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.507 [2024-09-29 16:45:08.839827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.507 [2024-09-29 16:45:08.839851] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:08.507 [2024-09-29 16:45:08.840125] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.507 [2024-09-29 16:45:08.840378] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.507 [2024-09-29 16:45:08.840405] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.507 [2024-09-29 16:45:08.840424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.507 [2024-09-29 16:45:08.844225] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.507 [2024-09-29 16:45:08.853585] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.507 [2024-09-29 16:45:08.854074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.507 [2024-09-29 16:45:08.854114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.507 [2024-09-29 16:45:08.854138] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:08.507 [2024-09-29 16:45:08.854414] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.507 [2024-09-29 16:45:08.854685] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.507 [2024-09-29 16:45:08.854713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.507 [2024-09-29 16:45:08.854732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.507 [2024-09-29 16:45:08.858461] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.507 [2024-09-29 16:45:08.867619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.507 [2024-09-29 16:45:08.868066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.507 [2024-09-29 16:45:08.868109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.507 [2024-09-29 16:45:08.868133] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:08.507 [2024-09-29 16:45:08.868409] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.507 [2024-09-29 16:45:08.868682] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.507 [2024-09-29 16:45:08.868710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.508 [2024-09-29 16:45:08.868729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.508 [2024-09-29 16:45:08.872451] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.508 [2024-09-29 16:45:08.881807] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.508 [2024-09-29 16:45:08.882228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.508 [2024-09-29 16:45:08.882265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.508 [2024-09-29 16:45:08.882288] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:08.508 [2024-09-29 16:45:08.882551] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.508 [2024-09-29 16:45:08.882835] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.508 [2024-09-29 16:45:08.882863] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.508 [2024-09-29 16:45:08.882882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.508 [2024-09-29 16:45:08.886947] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.508 16:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:08.508 16:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:37:08.508 16:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:37:08.508 16:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:08.508 16:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:08.508 [2024-09-29 16:45:08.896050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.508 [2024-09-29 16:45:08.896451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.508 [2024-09-29 16:45:08.896498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.508 [2024-09-29 16:45:08.896522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:08.508 [2024-09-29 16:45:08.896796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.508 [2024-09-29 16:45:08.897062] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.508 [2024-09-29 16:45:08.897089] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.508 [2024-09-29 16:45:08.897109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.508 [2024-09-29 16:45:08.901024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.508 [2024-09-29 16:45:08.910390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.508 [2024-09-29 16:45:08.910838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.508 [2024-09-29 16:45:08.910875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.508 [2024-09-29 16:45:08.910899] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:08.508 [2024-09-29 16:45:08.911173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.508 [2024-09-29 16:45:08.911430] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.508 [2024-09-29 16:45:08.911456] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.508 [2024-09-29 16:45:08.911475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.508 16:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:08.508 16:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:08.508 16:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.508 16:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:08.508 [2024-09-29 16:45:08.915340] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.508 [2024-09-29 16:45:08.915948] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:08.508 16:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.508 16:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:08.508 [2024-09-29 16:45:08.924695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.508 16:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.508 16:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:08.508 [2024-09-29 16:45:08.925096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.508 [2024-09-29 16:45:08.925134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.508 [2024-09-29 16:45:08.925157] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:08.508 [2024-09-29 16:45:08.925420] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.508 [2024-09-29 16:45:08.925696] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.508 [2024-09-29 16:45:08.925723] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.508 [2024-09-29 16:45:08.925743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.508 [2024-09-29 16:45:08.929639] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.508 [2024-09-29 16:45:08.939036] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.508 [2024-09-29 16:45:08.939670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.508 [2024-09-29 16:45:08.939735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.508 [2024-09-29 16:45:08.939765] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:08.508 [2024-09-29 16:45:08.940065] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.508 [2024-09-29 16:45:08.940336] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.508 [2024-09-29 16:45:08.940373] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.508 [2024-09-29 16:45:08.940398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.508 [2024-09-29 16:45:08.944405] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.508 [2024-09-29 16:45:08.953349] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.508 [2024-09-29 16:45:08.953867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.508 [2024-09-29 16:45:08.953910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.508 [2024-09-29 16:45:08.953937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:08.508 [2024-09-29 16:45:08.954219] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.508 [2024-09-29 16:45:08.954480] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.508 [2024-09-29 16:45:08.954507] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.508 [2024-09-29 16:45:08.954529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.508 [2024-09-29 16:45:08.958379] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.508 [2024-09-29 16:45:08.967501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.508 [2024-09-29 16:45:08.967978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.508 [2024-09-29 16:45:08.968015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.509 [2024-09-29 16:45:08.968038] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:08.509 [2024-09-29 16:45:08.968316] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.509 [2024-09-29 16:45:08.968574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.509 [2024-09-29 16:45:08.968601] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.509 [2024-09-29 16:45:08.968635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.509 [2024-09-29 16:45:08.972434] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.509 [2024-09-29 16:45:08.981682] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.509 [2024-09-29 16:45:08.982101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.509 [2024-09-29 16:45:08.982137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.509 [2024-09-29 16:45:08.982161] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:08.509 [2024-09-29 16:45:08.982437] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.509 [2024-09-29 16:45:08.982723] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.509 [2024-09-29 16:45:08.982751] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.509 [2024-09-29 16:45:08.982771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.509 [2024-09-29 16:45:08.986511] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.509 [2024-09-29 16:45:08.995733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.509 [2024-09-29 16:45:08.996147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.509 [2024-09-29 16:45:08.996184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.509 [2024-09-29 16:45:08.996207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:08.509 [2024-09-29 16:45:08.996484] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.509 [2024-09-29 16:45:08.996774] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.509 [2024-09-29 16:45:08.996803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.509 [2024-09-29 16:45:08.996823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.509 Malloc0 00:37:08.509 16:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.509 16:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:08.509 16:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.509 16:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:08.509 [2024-09-29 16:45:09.000638] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.509 16:45:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.509 16:45:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:08.509 16:45:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.509 16:45:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:08.509 [2024-09-29 16:45:09.010059] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.509 [2024-09-29 16:45:09.010489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.509 [2024-09-29 16:45:09.010526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.509 [2024-09-29 16:45:09.010549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:08.509 [2024-09-29 16:45:09.010823] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.509 [2024-09-29 16:45:09.011100] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.509 [2024-09-29 16:45:09.011127] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.509 [2024-09-29 16:45:09.011147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.509 [2024-09-29 16:45:09.015070] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.509 16:45:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.509 16:45:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:08.509 16:45:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.509 16:45:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:08.509 [2024-09-29 16:45:09.019905] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:08.509 16:45:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.509 16:45:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3325164 00:37:08.509 [2024-09-29 16:45:09.024306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.766 [2024-09-29 16:45:09.111013] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:37:17.243 2356.86 IOPS, 9.21 MiB/s 2832.88 IOPS, 11.07 MiB/s 3203.89 IOPS, 12.52 MiB/s 3512.50 IOPS, 13.72 MiB/s 3774.82 IOPS, 14.75 MiB/s 3984.17 IOPS, 15.56 MiB/s 4156.54 IOPS, 16.24 MiB/s 4299.93 IOPS, 16.80 MiB/s 4414.53 IOPS, 17.24 MiB/s 00:37:17.243 Latency(us) 00:37:17.243 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:17.243 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:17.243 Verification LBA range: start 0x0 length 0x4000 00:37:17.243 Nvme1n1 : 15.01 4418.77 17.26 9081.24 0.00 9452.92 1110.47 37671.06 00:37:17.243 =================================================================================================================== 00:37:17.243 Total : 4418.77 17.26 9081.24 0.00 9452.92 1110.47 37671.06 00:37:18.178 16:45:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:37:18.178 16:45:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:18.178 16:45:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.178 16:45:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:18.178 16:45:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.178 16:45:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:37:18.178 16:45:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:37:18.178 16:45:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # nvmfcleanup 00:37:18.178 16:45:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:37:18.178 16:45:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:18.178 16:45:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:37:18.178 16:45:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in 
{1..20} 00:37:18.178 16:45:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:18.178 rmmod nvme_tcp 00:37:18.436 rmmod nvme_fabrics 00:37:18.436 rmmod nvme_keyring 00:37:18.436 16:45:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:18.436 16:45:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:37:18.436 16:45:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:37:18.436 16:45:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@513 -- # '[' -n 3326220 ']' 00:37:18.436 16:45:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # killprocess 3326220 00:37:18.436 16:45:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 3326220 ']' 00:37:18.436 16:45:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 3326220 00:37:18.436 16:45:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:37:18.436 16:45:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:18.436 16:45:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3326220 00:37:18.436 16:45:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:18.436 16:45:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:18.436 16:45:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3326220' 00:37:18.436 killing process with pid 3326220 00:37:18.436 16:45:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 3326220 00:37:18.437 16:45:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 3326220 00:37:19.811 16:45:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:37:19.811 
16:45:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:37:19.811 16:45:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:37:19.811 16:45:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:37:19.811 16:45:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@787 -- # iptables-save 00:37:19.811 16:45:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:37:19.811 16:45:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@787 -- # iptables-restore 00:37:19.811 16:45:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:19.811 16:45:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:19.811 16:45:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:19.811 16:45:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:19.811 16:45:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:22.341 00:37:22.341 real 0m27.111s 00:37:22.341 user 1m14.550s 00:37:22.341 sys 0m4.554s 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:22.341 ************************************ 00:37:22.341 END TEST nvmf_bdevperf 00:37:22.341 ************************************ 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 
-le 1 ']' 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.341 ************************************ 00:37:22.341 START TEST nvmf_target_disconnect 00:37:22.341 ************************************ 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:37:22.341 * Looking for test storage... 00:37:22.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@338 -- # local 'op=<' 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:22.341 
16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:22.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:22.341 --rc genhtml_branch_coverage=1 00:37:22.341 --rc genhtml_function_coverage=1 00:37:22.341 --rc genhtml_legend=1 00:37:22.341 --rc geninfo_all_blocks=1 00:37:22.341 --rc geninfo_unexecuted_blocks=1 00:37:22.341 00:37:22.341 ' 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:22.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:22.341 --rc genhtml_branch_coverage=1 00:37:22.341 --rc genhtml_function_coverage=1 00:37:22.341 --rc genhtml_legend=1 00:37:22.341 --rc geninfo_all_blocks=1 00:37:22.341 --rc geninfo_unexecuted_blocks=1 00:37:22.341 00:37:22.341 ' 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:22.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:22.341 --rc genhtml_branch_coverage=1 00:37:22.341 --rc genhtml_function_coverage=1 00:37:22.341 --rc genhtml_legend=1 00:37:22.341 --rc geninfo_all_blocks=1 00:37:22.341 --rc geninfo_unexecuted_blocks=1 00:37:22.341 00:37:22.341 ' 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:22.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:22.341 --rc genhtml_branch_coverage=1 00:37:22.341 --rc genhtml_function_coverage=1 00:37:22.341 --rc genhtml_legend=1 00:37:22.341 --rc 
geninfo_all_blocks=1 00:37:22.341 --rc geninfo_unexecuted_blocks=1 00:37:22.341 00:37:22.341 ' 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:22.341 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:22.342 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:22.342 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:22.342 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:22.342 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:22.342 16:45:22 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:22.342 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:22.342 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:22.342 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:22.342 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:37:22.342 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:22.342 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:22.342 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:22.342 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.342 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.342 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.342 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:37:22.342 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.342 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:37:22.342 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:22.342 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:22.342 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:22.342 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:22.342 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:22.342 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:22.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:22.342 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:22.342 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:22.342 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:22.342 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:37:22.342 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:37:22.342 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:37:22.342 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:37:22.342 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:37:22.342 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:22.342 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # prepare_net_devs 00:37:22.342 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@434 -- # local -g is_hw=no 00:37:22.342 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # remove_spdk_ns 00:37:22.342 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:22.342 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:22.342 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:22.342 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:37:22.342 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:37:22.342 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:37:22.342 16:45:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:24.244 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:24.244 16:45:24 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:37:24.244 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:24.244 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:24.244 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:24.244 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:24.244 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:24.244 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:37:24.244 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:24.244 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:37:24.244 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:37:24.244 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:37:24.244 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:37:24.244 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:37:24.244 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:37:24.244 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:24.244 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:24.244 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:24.244 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:37:24.244 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:24.244 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:24.244 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:24.244 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:24.244 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:24.244 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:24.244 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:24.244 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:37:24.244 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:37:24.244 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:37:24.244 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:37:24.244 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:37:24.244 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:37:24.244 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:24.244 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:24.244 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:24.244 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 
-- # [[ ice == unknown ]] 00:37:24.244 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:24.245 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:24.245 16:45:24 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:24.245 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:24.245 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # is_hw=yes 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect 
-- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:24.245 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:24.245 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:37:24.245 00:37:24.245 --- 10.0.0.2 ping statistics --- 00:37:24.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:24.245 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:24.245 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:24.245 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:37:24.245 00:37:24.245 --- 10.0.0.1 ping statistics --- 00:37:24.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:24.245 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # return 0 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:24.245 ************************************ 00:37:24.245 START TEST nvmf_target_disconnect_tc1 00:37:24.245 ************************************ 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:37:24.245 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:24.246 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:24.246 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:24.246 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:24.246 16:45:24 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:24.246 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:24.246 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:24.246 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:24.246 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:37:24.246 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:24.504 [2024-09-29 16:45:24.946200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:24.504 [2024-09-29 16:45:24.946328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:37:24.504 [2024-09-29 16:45:24.946424] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:37:24.504 [2024-09-29 16:45:24.946461] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:24.504 [2024-09-29 16:45:24.946487] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:37:24.504 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:37:24.504 
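The `NOT`/`es` handling above runs `reconnect` expecting it to fail against a disconnected target, captures the exit status, and inverts it so that an expected failure counts as test success. A minimal sketch of that inversion pattern; the name `expect_failure` is illustrative, not the `autotest_common.sh` implementation:

```shell
#!/usr/bin/env bash
# Sketch of the expected-failure pattern traced above: run a command that
# should fail, capture its exit status, and invert it so the test passes
# only when the command actually failed.
expect_failure() {
    local es=0
    "$@" || es=$?
    # es == 0 here means the command unexpectedly succeeded
    (( es != 0 ))
}
```

This keeps `set -e` scripts usable for negative tests: the wrapped command's failure is absorbed by `|| es=$?`, and only an unexpected success propagates a nonzero status.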
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:37:24.504 Initializing NVMe Controllers 00:37:24.504 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:37:24.504 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:24.504 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:24.504 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:24.504 00:37:24.504 real 0m0.220s 00:37:24.504 user 0m0.097s 00:37:24.504 sys 0m0.123s 00:37:24.504 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:24.504 16:45:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:37:24.504 ************************************ 00:37:24.504 END TEST nvmf_target_disconnect_tc1 00:37:24.504 ************************************ 00:37:24.504 16:45:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:37:24.504 16:45:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:24.504 16:45:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:24.504 16:45:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:24.504 ************************************ 00:37:24.504 START TEST nvmf_target_disconnect_tc2 00:37:24.504 ************************************ 00:37:24.504 16:45:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:37:24.504 16:45:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:37:24.504 16:45:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:37:24.504 16:45:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:37:24.504 16:45:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:24.504 16:45:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:24.504 16:45:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # nvmfpid=3329855 00:37:24.504 16:45:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:37:24.505 16:45:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # waitforlisten 3329855 00:37:24.505 16:45:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3329855 ']' 00:37:24.505 16:45:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:24.505 16:45:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:24.505 16:45:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:24.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:24.505 16:45:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:24.505 16:45:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:24.762 [2024-09-29 16:45:25.136443] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:37:24.762 [2024-09-29 16:45:25.136597] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:24.762 [2024-09-29 16:45:25.277480] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:25.020 [2024-09-29 16:45:25.504945] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:25.020 [2024-09-29 16:45:25.505044] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:25.020 [2024-09-29 16:45:25.505066] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:25.020 [2024-09-29 16:45:25.505085] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:25.020 [2024-09-29 16:45:25.505100] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
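The `waitforlisten` step above blocks until the freshly started nvmf_tgt process is up and listening on the UNIX domain socket /var/tmp/spdk.sock. A minimal Python sketch of such a polling loop (the `wait_for_rpc_socket` name and the timings are illustrative only; the real `waitforlisten` in autotest_common.sh is a shell function):

```python
import os
import socket
import time

def wait_for_rpc_socket(path: str, timeout: float = 100.0, interval: float = 0.1) -> bool:
    """Poll until a UNIX-domain socket at `path` accepts connections, or time out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(path)
                return True  # the target is up and accepting RPC connections
            except OSError:
                pass  # socket file exists but nothing is accepting yet
            finally:
                s.close()
        time.sleep(interval)
    return False

# With no target running on this path, the wait simply times out:
print(wait_for_rpc_socket("/var/tmp/definitely-missing.sock", timeout=0.3))
```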
00:37:25.020 [2024-09-29 16:45:25.505379] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:37:25.020 [2024-09-29 16:45:25.505445] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:37:25.020 [2024-09-29 16:45:25.505507] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:37:25.020 [2024-09-29 16:45:25.505514] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:37:25.583 16:45:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:25.583 16:45:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:37:25.583 16:45:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:37:25.583 16:45:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:25.583 16:45:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:25.583 16:45:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:25.583 16:45:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:25.583 16:45:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.583 16:45:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:25.841 Malloc0 00:37:25.841 16:45:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.841 16:45:26 
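The four reactors above land on cores 4-7 because the target was started with `-m 0xF0`, i.e. bits 4 through 7 of the core mask set. A small sketch of how a hex core mask maps to core IDs (`cores_from_mask` is an illustrative helper, not SPDK code):

```python
def cores_from_mask(mask: int) -> list:
    """Return the CPU core IDs selected by an SPDK-style hex core mask."""
    cores = []
    bit = 0
    while mask:
        if mask & 1:
            cores.append(bit)
        mask >>= 1
        bit += 1
    return cores

print(cores_from_mask(0xF0))  # bits 4-7 set -> [4, 5, 6, 7]
```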
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:37:25.841 16:45:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.841 16:45:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:25.841 [2024-09-29 16:45:26.196730] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:25.841 16:45:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.841 16:45:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:25.841 16:45:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.841 16:45:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:25.841 16:45:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.841 16:45:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:25.841 16:45:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.841 16:45:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:25.841 16:45:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.841 16:45:26 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:25.841 16:45:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.841 16:45:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:25.841 [2024-09-29 16:45:26.226621] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:25.841 16:45:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.841 16:45:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:25.841 16:45:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.841 16:45:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:25.841 16:45:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.841 16:45:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3330009 00:37:25.841 16:45:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:25.841 16:45:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:37:27.747 16:45:28 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3329855 00:37:27.747 16:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:37:27.747 Read completed with error (sct=0, sc=8) 00:37:27.747 starting I/O failed 00:37:27.747 Write completed with error (sct=0, sc=8) 00:37:27.747 starting I/O failed 00:37:27.747 [... identical "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" pairs repeated for every outstanding I/O on the qpair ...] 00:37:27.747 [2024-09-29 16:45:28.264133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:37:27.747 [... same repeated Read/Write error completions ...] 00:37:27.747 [2024-09-29 16:45:28.264788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:27.747 [... same repeated Read/Write error completions ...] 00:37:27.747 [2024-09-29 16:45:28.265416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:27.748 [... same repeated Read/Write error completions ...] 00:37:27.748 [2024-09-29 16:45:28.266083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on
qpair id 1 00:37:27.748 [2024-09-29 16:45:28.266350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.748 [2024-09-29 16:45:28.266391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:27.748 qpair failed and we were unable to recover it. 00:37:27.748 [... the same "connect() failed, errno = 111" / "sock connection error of tqpair=... with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." triplet repeats for further reconnect attempts against tqpair=0x6150001ffe80, tqpair=0x6150001f2f00 and tqpair=0x61500021ff00, through 16:45:28.272927 ...]
00:37:27.749 [2024-09-29 16:45:28.273053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.749 [2024-09-29 16:45:28.273087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:27.749 qpair failed and we were unable to recover it. 00:37:27.749 [2024-09-29 16:45:28.273256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.749 [2024-09-29 16:45:28.273289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:27.749 qpair failed and we were unable to recover it. 00:37:27.749 [2024-09-29 16:45:28.273449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.749 [2024-09-29 16:45:28.273496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.749 qpair failed and we were unable to recover it. 00:37:27.749 [2024-09-29 16:45:28.273665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.749 [2024-09-29 16:45:28.273709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.749 qpair failed and we were unable to recover it. 00:37:27.749 [2024-09-29 16:45:28.273842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.749 [2024-09-29 16:45:28.273891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:27.749 qpair failed and we were unable to recover it. 
00:37:27.749 [2024-09-29 16:45:28.274046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.749 [2024-09-29 16:45:28.274080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:27.749 qpair failed and we were unable to recover it. 00:37:27.749 [2024-09-29 16:45:28.274203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.749 [2024-09-29 16:45:28.274235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:27.749 qpair failed and we were unable to recover it. 00:37:27.749 [2024-09-29 16:45:28.274378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.749 [2024-09-29 16:45:28.274410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:27.749 qpair failed and we were unable to recover it. 00:37:27.749 [2024-09-29 16:45:28.274521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.749 [2024-09-29 16:45:28.274554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:27.749 qpair failed and we were unable to recover it. 00:37:27.749 [2024-09-29 16:45:28.274729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.749 [2024-09-29 16:45:28.274762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:27.749 qpair failed and we were unable to recover it. 
00:37:27.749 [2024-09-29 16:45:28.274913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.749 [2024-09-29 16:45:28.274945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:27.749 qpair failed and we were unable to recover it. 00:37:27.749 [2024-09-29 16:45:28.275088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.749 [2024-09-29 16:45:28.275121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:27.749 qpair failed and we were unable to recover it. 00:37:27.749 [2024-09-29 16:45:28.275236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.749 [2024-09-29 16:45:28.275268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:27.749 qpair failed and we were unable to recover it. 00:37:27.749 [2024-09-29 16:45:28.275388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.749 [2024-09-29 16:45:28.275421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:27.749 qpair failed and we were unable to recover it. 00:37:27.749 [2024-09-29 16:45:28.275536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.749 [2024-09-29 16:45:28.275569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:27.749 qpair failed and we were unable to recover it. 
00:37:27.749 [2024-09-29 16:45:28.275711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.749 [2024-09-29 16:45:28.275744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:27.749 qpair failed and we were unable to recover it. 00:37:27.749 [2024-09-29 16:45:28.275874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.749 [2024-09-29 16:45:28.275911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.749 qpair failed and we were unable to recover it. 00:37:27.749 [2024-09-29 16:45:28.276084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.749 [2024-09-29 16:45:28.276118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.749 qpair failed and we were unable to recover it. 00:37:27.749 [2024-09-29 16:45:28.276288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.749 [2024-09-29 16:45:28.276321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.749 qpair failed and we were unable to recover it. 00:37:27.749 [2024-09-29 16:45:28.276443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.749 [2024-09-29 16:45:28.276477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.749 qpair failed and we were unable to recover it. 
00:37:27.749 [2024-09-29 16:45:28.276685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.749 [2024-09-29 16:45:28.276719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.749 qpair failed and we were unable to recover it. 00:37:27.749 [2024-09-29 16:45:28.276877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.749 [2024-09-29 16:45:28.276911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.749 qpair failed and we were unable to recover it. 00:37:27.749 [2024-09-29 16:45:28.277059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.749 [2024-09-29 16:45:28.277111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:27.749 qpair failed and we were unable to recover it. 00:37:27.749 [2024-09-29 16:45:28.277300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.749 [2024-09-29 16:45:28.277333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:27.749 qpair failed and we were unable to recover it. 00:37:27.749 [2024-09-29 16:45:28.277472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.749 [2024-09-29 16:45:28.277505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:27.749 qpair failed and we were unable to recover it. 
00:37:27.749 [2024-09-29 16:45:28.277658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.749 [2024-09-29 16:45:28.277701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:27.749 qpair failed and we were unable to recover it. 00:37:27.749 [2024-09-29 16:45:28.277882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.749 [2024-09-29 16:45:28.277930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:27.749 qpair failed and we were unable to recover it. 00:37:27.749 [2024-09-29 16:45:28.278089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.749 [2024-09-29 16:45:28.278127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:27.749 qpair failed and we were unable to recover it. 00:37:27.749 [2024-09-29 16:45:28.278257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.749 [2024-09-29 16:45:28.278293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:27.749 qpair failed and we were unable to recover it. 00:37:27.749 [2024-09-29 16:45:28.278438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.749 [2024-09-29 16:45:28.278479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:27.749 qpair failed and we were unable to recover it. 
00:37:27.749 [2024-09-29 16:45:28.278662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.749 [2024-09-29 16:45:28.278738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.749 qpair failed and we were unable to recover it. 00:37:27.749 [2024-09-29 16:45:28.278906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.749 [2024-09-29 16:45:28.278953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.749 qpair failed and we were unable to recover it. 00:37:27.749 [2024-09-29 16:45:28.279132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.750 [2024-09-29 16:45:28.279172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.750 qpair failed and we were unable to recover it. 00:37:27.750 [2024-09-29 16:45:28.279384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.750 [2024-09-29 16:45:28.279439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.750 qpair failed and we were unable to recover it. 00:37:27.750 [2024-09-29 16:45:28.279574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.750 [2024-09-29 16:45:28.279607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.750 qpair failed and we were unable to recover it. 
00:37:27.750 [2024-09-29 16:45:28.279736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.750 [2024-09-29 16:45:28.279770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.750 qpair failed and we were unable to recover it. 00:37:27.750 [2024-09-29 16:45:28.279915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.750 [2024-09-29 16:45:28.279948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.750 qpair failed and we were unable to recover it. 00:37:27.750 [2024-09-29 16:45:28.280083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.750 [2024-09-29 16:45:28.280116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.750 qpair failed and we were unable to recover it. 00:37:27.750 [2024-09-29 16:45:28.280305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.750 [2024-09-29 16:45:28.280342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.750 qpair failed and we were unable to recover it. 00:37:27.750 [2024-09-29 16:45:28.280542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.750 [2024-09-29 16:45:28.280575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.750 qpair failed and we were unable to recover it. 
00:37:27.750 [2024-09-29 16:45:28.280726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.750 [2024-09-29 16:45:28.280760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.750 qpair failed and we were unable to recover it. 00:37:27.750 [2024-09-29 16:45:28.280902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.750 [2024-09-29 16:45:28.280936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.750 qpair failed and we were unable to recover it. 00:37:27.750 [2024-09-29 16:45:28.281095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.750 [2024-09-29 16:45:28.281131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.750 qpair failed and we were unable to recover it. 00:37:27.750 [2024-09-29 16:45:28.281343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.750 [2024-09-29 16:45:28.281380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.750 qpair failed and we were unable to recover it. 00:37:27.750 [2024-09-29 16:45:28.281518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.750 [2024-09-29 16:45:28.281556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.750 qpair failed and we were unable to recover it. 
00:37:27.750 [2024-09-29 16:45:28.281724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.750 [2024-09-29 16:45:28.281757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.750 qpair failed and we were unable to recover it. 00:37:27.750 [2024-09-29 16:45:28.281923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.750 [2024-09-29 16:45:28.281972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.750 qpair failed and we were unable to recover it. 00:37:27.750 [2024-09-29 16:45:28.282119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.750 [2024-09-29 16:45:28.282173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.750 qpair failed and we were unable to recover it. 00:37:27.750 [2024-09-29 16:45:28.282317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.750 [2024-09-29 16:45:28.282351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.750 qpair failed and we were unable to recover it. 00:37:27.750 [2024-09-29 16:45:28.282489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.750 [2024-09-29 16:45:28.282522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.750 qpair failed and we were unable to recover it. 
00:37:27.750 [2024-09-29 16:45:28.282668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.750 [2024-09-29 16:45:28.282708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.750 qpair failed and we were unable to recover it. 00:37:27.750 [2024-09-29 16:45:28.282853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.750 [2024-09-29 16:45:28.282887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.750 qpair failed and we were unable to recover it. 00:37:27.750 [2024-09-29 16:45:28.283033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.750 [2024-09-29 16:45:28.283067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.750 qpair failed and we were unable to recover it. 00:37:27.750 [2024-09-29 16:45:28.283179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.750 [2024-09-29 16:45:28.283212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.750 qpair failed and we were unable to recover it. 00:37:27.750 [2024-09-29 16:45:28.283325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.750 [2024-09-29 16:45:28.283358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.750 qpair failed and we were unable to recover it. 
00:37:27.750 [2024-09-29 16:45:28.283476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.750 [2024-09-29 16:45:28.283509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.750 qpair failed and we were unable to recover it. 00:37:27.750 [2024-09-29 16:45:28.283657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.750 [2024-09-29 16:45:28.283713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:27.750 qpair failed and we were unable to recover it. 00:37:27.750 [2024-09-29 16:45:28.283870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.750 [2024-09-29 16:45:28.283906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:27.750 qpair failed and we were unable to recover it. 00:37:27.750 [2024-09-29 16:45:28.284110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.750 [2024-09-29 16:45:28.284148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:27.750 qpair failed and we were unable to recover it. 00:37:27.750 [2024-09-29 16:45:28.284340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.750 [2024-09-29 16:45:28.284376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:27.750 qpair failed and we were unable to recover it. 
00:37:27.750 [2024-09-29 16:45:28.284515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.750 [2024-09-29 16:45:28.284547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:27.750 qpair failed and we were unable to recover it. 00:37:27.750 [2024-09-29 16:45:28.284729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.750 [2024-09-29 16:45:28.284762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:27.750 qpair failed and we were unable to recover it. 00:37:27.750 [2024-09-29 16:45:28.284912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.750 [2024-09-29 16:45:28.284945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:27.750 qpair failed and we were unable to recover it. 00:37:27.750 [2024-09-29 16:45:28.285078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.750 [2024-09-29 16:45:28.285114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:27.750 qpair failed and we were unable to recover it. 00:37:27.750 [2024-09-29 16:45:28.285288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.750 [2024-09-29 16:45:28.285324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:27.750 qpair failed and we were unable to recover it. 
00:37:27.750 [2024-09-29 16:45:28.285466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.750 [2024-09-29 16:45:28.285501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.750 qpair failed and we were unable to recover it. 00:37:27.750 [2024-09-29 16:45:28.285678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.750 [2024-09-29 16:45:28.285712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.750 qpair failed and we were unable to recover it. 00:37:27.750 [2024-09-29 16:45:28.285855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.750 [2024-09-29 16:45:28.285889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.750 qpair failed and we were unable to recover it. 00:37:27.750 [2024-09-29 16:45:28.286059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.750 [2024-09-29 16:45:28.286092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.750 qpair failed and we were unable to recover it. 00:37:27.750 [2024-09-29 16:45:28.286233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.750 [2024-09-29 16:45:28.286271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.750 qpair failed and we were unable to recover it. 
00:37:27.750 [2024-09-29 16:45:28.286410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.750 [2024-09-29 16:45:28.286443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.751 qpair failed and we were unable to recover it. 00:37:27.751 [2024-09-29 16:45:28.286607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.751 [2024-09-29 16:45:28.286643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:27.751 qpair failed and we were unable to recover it. 00:37:27.751 [2024-09-29 16:45:28.286816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.751 [2024-09-29 16:45:28.286864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:27.751 qpair failed and we were unable to recover it. 00:37:27.751 [2024-09-29 16:45:28.287064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.751 [2024-09-29 16:45:28.287134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.751 qpair failed and we were unable to recover it. 00:37:27.751 [2024-09-29 16:45:28.287326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.751 [2024-09-29 16:45:28.287361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.751 qpair failed and we were unable to recover it. 
00:37:27.751 [2024-09-29 16:45:28.287509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.751 [2024-09-29 16:45:28.287543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.751 qpair failed and we were unable to recover it. 00:37:27.751 [2024-09-29 16:45:28.287721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.751 [2024-09-29 16:45:28.287756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.751 qpair failed and we were unable to recover it. 00:37:27.751 [2024-09-29 16:45:28.287897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.751 [2024-09-29 16:45:28.287930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.751 qpair failed and we were unable to recover it. 00:37:27.751 [2024-09-29 16:45:28.288073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.751 [2024-09-29 16:45:28.288106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.751 qpair failed and we were unable to recover it. 00:37:27.751 [2024-09-29 16:45:28.288223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.751 [2024-09-29 16:45:28.288257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.751 qpair failed and we were unable to recover it. 
00:37:27.751 [2024-09-29 16:45:28.288399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.751 [2024-09-29 16:45:28.288432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.751 qpair failed and we were unable to recover it. 00:37:27.751 [2024-09-29 16:45:28.288568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.751 [2024-09-29 16:45:28.288602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.751 qpair failed and we were unable to recover it. 00:37:27.751 [2024-09-29 16:45:28.288745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.751 [2024-09-29 16:45:28.288793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:27.751 qpair failed and we were unable to recover it. 00:37:27.751 [2024-09-29 16:45:28.288952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.751 [2024-09-29 16:45:28.288999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.751 qpair failed and we were unable to recover it. 00:37:27.751 [2024-09-29 16:45:28.289151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.751 [2024-09-29 16:45:28.289186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.751 qpair failed and we were unable to recover it. 
00:37:27.751 [2024-09-29 16:45:28.289296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.751 [2024-09-29 16:45:28.289329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.751 qpair failed and we were unable to recover it. 00:37:27.751 [2024-09-29 16:45:28.289468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.751 [2024-09-29 16:45:28.289501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.751 qpair failed and we were unable to recover it. 00:37:27.751 [2024-09-29 16:45:28.289667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.751 [2024-09-29 16:45:28.289720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:27.751 qpair failed and we were unable to recover it. 00:37:27.751 [2024-09-29 16:45:28.289892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.751 [2024-09-29 16:45:28.289932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:27.751 qpair failed and we were unable to recover it. 00:37:27.751 [2024-09-29 16:45:28.290093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.751 [2024-09-29 16:45:28.290130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:27.751 qpair failed and we were unable to recover it. 
00:37:27.751 [2024-09-29 16:45:28.290282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.751 [2024-09-29 16:45:28.290342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:27.751 qpair failed and we were unable to recover it. 00:37:27.751 [2024-09-29 16:45:28.290475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.751 [2024-09-29 16:45:28.290510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.751 qpair failed and we were unable to recover it. 00:37:27.751 [2024-09-29 16:45:28.290683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.751 [2024-09-29 16:45:28.290731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:27.751 qpair failed and we were unable to recover it. 00:37:27.751 [2024-09-29 16:45:28.290884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.751 [2024-09-29 16:45:28.290920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:27.751 qpair failed and we were unable to recover it. 00:37:27.751 [2024-09-29 16:45:28.291039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.751 [2024-09-29 16:45:28.291074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:27.751 qpair failed and we were unable to recover it. 
00:37:27.751 [2024-09-29 16:45:28.291184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.751 [2024-09-29 16:45:28.291218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:27.751 qpair failed and we were unable to recover it. 00:37:27.751 [2024-09-29 16:45:28.291393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.751 [2024-09-29 16:45:28.291427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:27.751 qpair failed and we were unable to recover it. 00:37:27.751 [2024-09-29 16:45:28.291558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.751 [2024-09-29 16:45:28.291596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:27.751 qpair failed and we were unable to recover it. 00:37:27.751 [2024-09-29 16:45:28.291734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.751 [2024-09-29 16:45:28.291768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:27.751 qpair failed and we were unable to recover it. 00:37:27.751 [2024-09-29 16:45:28.291911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.751 [2024-09-29 16:45:28.291946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:27.751 qpair failed and we were unable to recover it. 
00:37:27.751 [2024-09-29 16:45:28.292090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.751 [2024-09-29 16:45:28.292124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:27.751 qpair failed and we were unable to recover it. 00:37:27.751 [2024-09-29 16:45:28.292265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.751 [2024-09-29 16:45:28.292303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:27.751 qpair failed and we were unable to recover it. 00:37:27.751 [2024-09-29 16:45:28.292457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.751 [2024-09-29 16:45:28.292494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:27.751 qpair failed and we were unable to recover it. 00:37:27.752 [2024-09-29 16:45:28.292688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.752 [2024-09-29 16:45:28.292740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:27.752 qpair failed and we were unable to recover it. 00:37:27.752 [2024-09-29 16:45:28.292891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.752 [2024-09-29 16:45:28.292925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:27.752 qpair failed and we were unable to recover it. 
00:37:27.752 [2024-09-29 16:45:28.293034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.752 [2024-09-29 16:45:28.293067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:27.752 qpair failed and we were unable to recover it. 00:37:27.752 [2024-09-29 16:45:28.293178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.752 [2024-09-29 16:45:28.293213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:27.752 qpair failed and we were unable to recover it. 00:37:27.752 [2024-09-29 16:45:28.293382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.752 [2024-09-29 16:45:28.293435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.752 qpair failed and we were unable to recover it. 00:37:27.752 [2024-09-29 16:45:28.293589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.752 [2024-09-29 16:45:28.293623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.752 qpair failed and we were unable to recover it. 00:37:27.752 [2024-09-29 16:45:28.293790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.752 [2024-09-29 16:45:28.293843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:27.752 qpair failed and we were unable to recover it. 
00:37:27.752 [2024-09-29 16:45:28.293969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.752 [2024-09-29 16:45:28.294005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:27.752 qpair failed and we were unable to recover it. 00:37:27.752 [2024-09-29 16:45:28.294133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.752 [2024-09-29 16:45:28.294169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:27.752 qpair failed and we were unable to recover it. 00:37:27.752 [2024-09-29 16:45:28.294337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.752 [2024-09-29 16:45:28.294375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:27.752 qpair failed and we were unable to recover it. 00:37:27.752 [2024-09-29 16:45:28.294566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.752 [2024-09-29 16:45:28.294600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:27.752 qpair failed and we were unable to recover it. 00:37:27.752 [2024-09-29 16:45:28.294744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.752 [2024-09-29 16:45:28.294778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:27.752 qpair failed and we were unable to recover it. 
00:37:27.752 [2024-09-29 16:45:28.294914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.752 [2024-09-29 16:45:28.294951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:27.752 qpair failed and we were unable to recover it. 00:37:27.752 [2024-09-29 16:45:28.295103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.752 [2024-09-29 16:45:28.295141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:27.752 qpair failed and we were unable to recover it. 00:37:27.752 [2024-09-29 16:45:28.295299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.752 [2024-09-29 16:45:28.295336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:27.752 qpair failed and we were unable to recover it. 00:37:27.752 [2024-09-29 16:45:28.295510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.752 [2024-09-29 16:45:28.295546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.752 qpair failed and we were unable to recover it. 00:37:27.752 [2024-09-29 16:45:28.295657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.752 [2024-09-29 16:45:28.295700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.752 qpair failed and we were unable to recover it. 
00:37:27.752 [2024-09-29 16:45:28.295870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.752 [2024-09-29 16:45:28.295917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.752 qpair failed and we were unable to recover it. 00:37:27.752 [2024-09-29 16:45:28.296083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.752 [2024-09-29 16:45:28.296122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.752 qpair failed and we were unable to recover it. 00:37:27.752 [2024-09-29 16:45:28.296286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.752 [2024-09-29 16:45:28.296344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.752 qpair failed and we were unable to recover it. 00:37:27.752 [2024-09-29 16:45:28.296596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.752 [2024-09-29 16:45:28.296630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.752 qpair failed and we were unable to recover it. 00:37:27.752 [2024-09-29 16:45:28.296787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.752 [2024-09-29 16:45:28.296822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.752 qpair failed and we were unable to recover it. 
00:37:27.752 [2024-09-29 16:45:28.296932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.752 [2024-09-29 16:45:28.296984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.752 qpair failed and we were unable to recover it. 00:37:27.752 [2024-09-29 16:45:28.297111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.752 [2024-09-29 16:45:28.297162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.752 qpair failed and we were unable to recover it. 00:37:27.752 [2024-09-29 16:45:28.297317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.752 [2024-09-29 16:45:28.297354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.752 qpair failed and we were unable to recover it. 00:37:27.752 [2024-09-29 16:45:28.297521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.752 [2024-09-29 16:45:28.297554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.752 qpair failed and we were unable to recover it. 00:37:27.752 [2024-09-29 16:45:28.297704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.752 [2024-09-29 16:45:28.297738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.752 qpair failed and we were unable to recover it. 
00:37:27.752 [2024-09-29 16:45:28.297879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.752 [2024-09-29 16:45:28.297912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.752 qpair failed and we were unable to recover it. 00:37:27.752 [2024-09-29 16:45:28.298079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.752 [2024-09-29 16:45:28.298115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.752 qpair failed and we were unable to recover it. 00:37:27.752 [2024-09-29 16:45:28.298329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.752 [2024-09-29 16:45:28.298366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:27.752 qpair failed and we were unable to recover it. 00:37:27.752 [2024-09-29 16:45:28.298499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.752 [2024-09-29 16:45:28.298541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:27.752 qpair failed and we were unable to recover it. 00:37:27.752 [2024-09-29 16:45:28.298706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.752 [2024-09-29 16:45:28.298758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:27.752 qpair failed and we were unable to recover it. 
00:37:27.752 [2024-09-29 16:45:28.298926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.752 [2024-09-29 16:45:28.298974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.752 qpair failed and we were unable to recover it. 00:37:27.752 [2024-09-29 16:45:28.299184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.752 [2024-09-29 16:45:28.299248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.752 qpair failed and we were unable to recover it. 00:37:27.752 [2024-09-29 16:45:28.299358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.752 [2024-09-29 16:45:28.299393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.752 qpair failed and we were unable to recover it. 00:37:27.752 [2024-09-29 16:45:28.299532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.752 [2024-09-29 16:45:28.299565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.752 qpair failed and we were unable to recover it. 00:37:27.752 [2024-09-29 16:45:28.299734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.752 [2024-09-29 16:45:28.299767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.752 qpair failed and we were unable to recover it. 
00:37:27.752 [2024-09-29 16:45:28.299913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.752 [2024-09-29 16:45:28.299947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.752 qpair failed and we were unable to recover it. 00:37:27.752 [2024-09-29 16:45:28.300088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.753 [2024-09-29 16:45:28.300121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.753 qpair failed and we were unable to recover it. 00:37:27.753 [2024-09-29 16:45:28.300270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.753 [2024-09-29 16:45:28.300303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.753 qpair failed and we were unable to recover it. 00:37:27.753 [2024-09-29 16:45:28.300414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.753 [2024-09-29 16:45:28.300449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.753 qpair failed and we were unable to recover it. 00:37:27.753 [2024-09-29 16:45:28.300626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.753 [2024-09-29 16:45:28.300660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.753 qpair failed and we were unable to recover it. 
00:37:27.753 [2024-09-29 16:45:28.300791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.753 [2024-09-29 16:45:28.300825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.753 qpair failed and we were unable to recover it. 00:37:27.753 [2024-09-29 16:45:28.300967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.753 [2024-09-29 16:45:28.301000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.753 qpair failed and we were unable to recover it. 00:37:27.753 [2024-09-29 16:45:28.301151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.753 [2024-09-29 16:45:28.301184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.753 qpair failed and we were unable to recover it. 00:37:27.753 [2024-09-29 16:45:28.301301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.753 [2024-09-29 16:45:28.301335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.753 qpair failed and we were unable to recover it. 00:37:27.753 [2024-09-29 16:45:28.301490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.753 [2024-09-29 16:45:28.301529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.753 qpair failed and we were unable to recover it. 
00:37:27.753 [2024-09-29 16:45:28.301705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.753 [2024-09-29 16:45:28.301739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.753 qpair failed and we were unable to recover it. 00:37:27.753 [2024-09-29 16:45:28.301875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.753 [2024-09-29 16:45:28.301909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.753 qpair failed and we were unable to recover it. 00:37:27.753 [2024-09-29 16:45:28.302054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.753 [2024-09-29 16:45:28.302088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.753 qpair failed and we were unable to recover it. 00:37:27.753 [2024-09-29 16:45:28.302241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.753 [2024-09-29 16:45:28.302275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.753 qpair failed and we were unable to recover it. 00:37:27.753 [2024-09-29 16:45:28.302420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.753 [2024-09-29 16:45:28.302454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.753 qpair failed and we were unable to recover it. 
00:37:27.753 [2024-09-29 16:45:28.302600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.753 [2024-09-29 16:45:28.302633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.753 qpair failed and we were unable to recover it. 00:37:27.753 [2024-09-29 16:45:28.302806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.753 [2024-09-29 16:45:28.302875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:27.753 qpair failed and we were unable to recover it. 00:37:27.753 [2024-09-29 16:45:28.303079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.753 [2024-09-29 16:45:28.303115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:27.753 qpair failed and we were unable to recover it. 00:37:27.753 [2024-09-29 16:45:28.303257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.753 [2024-09-29 16:45:28.303291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:27.753 qpair failed and we were unable to recover it. 00:37:27.753 [2024-09-29 16:45:28.303423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.753 [2024-09-29 16:45:28.303458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:27.753 qpair failed and we were unable to recover it. 
00:37:27.753 [2024-09-29 16:45:28.303598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.753 [2024-09-29 16:45:28.303632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:27.753 qpair failed and we were unable to recover it. 00:37:27.753 [2024-09-29 16:45:28.303818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.753 [2024-09-29 16:45:28.303874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:27.753 qpair failed and we were unable to recover it. 00:37:27.753 [2024-09-29 16:45:28.304089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.753 [2024-09-29 16:45:28.304142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.753 qpair failed and we were unable to recover it. 00:37:27.753 [2024-09-29 16:45:28.304316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.753 [2024-09-29 16:45:28.304378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.753 qpair failed and we were unable to recover it. 00:37:27.753 [2024-09-29 16:45:28.304494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.753 [2024-09-29 16:45:28.304527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.753 qpair failed and we were unable to recover it. 
00:37:27.753 [2024-09-29 16:45:28.304635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.753 [2024-09-29 16:45:28.304669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.753 qpair failed and we were unable to recover it. 00:37:27.753 [2024-09-29 16:45:28.304831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.753 [2024-09-29 16:45:28.304884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.753 qpair failed and we were unable to recover it. 00:37:27.753 [2024-09-29 16:45:28.304996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.753 [2024-09-29 16:45:28.305030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.753 qpair failed and we were unable to recover it. 00:37:27.753 [2024-09-29 16:45:28.305199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.753 [2024-09-29 16:45:28.305231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.753 qpair failed and we were unable to recover it. 00:37:27.753 [2024-09-29 16:45:28.305352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.753 [2024-09-29 16:45:28.305386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:27.753 qpair failed and we were unable to recover it. 
00:37:27.753 [2024-09-29 16:45:28.305547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.753 [2024-09-29 16:45:28.305595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:27.753 qpair failed and we were unable to recover it. 00:37:27.753 [2024-09-29 16:45:28.305733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.753 [2024-09-29 16:45:28.305783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:27.753 qpair failed and we were unable to recover it. 00:37:27.753 [2024-09-29 16:45:28.306006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:27.753 [2024-09-29 16:45:28.306052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.040 qpair failed and we were unable to recover it. 00:37:28.040 [2024-09-29 16:45:28.306241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.040 [2024-09-29 16:45:28.306277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.040 qpair failed and we were unable to recover it. 00:37:28.040 [2024-09-29 16:45:28.306396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.040 [2024-09-29 16:45:28.306430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.040 qpair failed and we were unable to recover it. 
00:37:28.040 [2024-09-29 16:45:28.306594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.040 [2024-09-29 16:45:28.306628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.040 qpair failed and we were unable to recover it. 00:37:28.040 [2024-09-29 16:45:28.306786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.040 [2024-09-29 16:45:28.306819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.040 qpair failed and we were unable to recover it. 00:37:28.040 [2024-09-29 16:45:28.306962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.040 [2024-09-29 16:45:28.306996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.040 qpair failed and we were unable to recover it. 00:37:28.040 [2024-09-29 16:45:28.307140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.040 [2024-09-29 16:45:28.307174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.040 qpair failed and we were unable to recover it. 00:37:28.040 [2024-09-29 16:45:28.307313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.040 [2024-09-29 16:45:28.307346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.040 qpair failed and we were unable to recover it. 
00:37:28.040 [2024-09-29 16:45:28.307485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.040 [2024-09-29 16:45:28.307518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.040 qpair failed and we were unable to recover it. 00:37:28.040 [2024-09-29 16:45:28.307663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.040 [2024-09-29 16:45:28.307710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.040 qpair failed and we were unable to recover it. 00:37:28.040 [2024-09-29 16:45:28.307874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.040 [2024-09-29 16:45:28.307907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.040 qpair failed and we were unable to recover it. 00:37:28.040 [2024-09-29 16:45:28.308083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.040 [2024-09-29 16:45:28.308131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.040 qpair failed and we were unable to recover it. 00:37:28.040 [2024-09-29 16:45:28.308316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.041 [2024-09-29 16:45:28.308352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.041 qpair failed and we were unable to recover it. 
00:37:28.041 [2024-09-29 16:45:28.308496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.041 [2024-09-29 16:45:28.308530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.041 qpair failed and we were unable to recover it. 00:37:28.041 [2024-09-29 16:45:28.308636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.041 [2024-09-29 16:45:28.308670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.041 qpair failed and we were unable to recover it. 00:37:28.041 [2024-09-29 16:45:28.308836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.041 [2024-09-29 16:45:28.308883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.041 qpair failed and we were unable to recover it. 00:37:28.041 [2024-09-29 16:45:28.309043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.041 [2024-09-29 16:45:28.309078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.041 qpair failed and we were unable to recover it. 00:37:28.041 [2024-09-29 16:45:28.309236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.041 [2024-09-29 16:45:28.309294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.041 qpair failed and we were unable to recover it. 
00:37:28.041 [2024-09-29 16:45:28.309408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.041 [2024-09-29 16:45:28.309443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.041 qpair failed and we were unable to recover it. 00:37:28.041 [2024-09-29 16:45:28.309565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.041 [2024-09-29 16:45:28.309597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.041 qpair failed and we were unable to recover it. 00:37:28.041 [2024-09-29 16:45:28.309732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.041 [2024-09-29 16:45:28.309798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.041 qpair failed and we were unable to recover it. 00:37:28.041 [2024-09-29 16:45:28.310019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.041 [2024-09-29 16:45:28.310077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.041 qpair failed and we were unable to recover it. 00:37:28.041 [2024-09-29 16:45:28.310234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.041 [2024-09-29 16:45:28.310290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.041 qpair failed and we were unable to recover it. 
00:37:28.041 [2024-09-29 16:45:28.310465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.041 [2024-09-29 16:45:28.310499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.041 qpair failed and we were unable to recover it. 00:37:28.041 [2024-09-29 16:45:28.310665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.041 [2024-09-29 16:45:28.310705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.041 qpair failed and we were unable to recover it. 00:37:28.041 [2024-09-29 16:45:28.310847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.041 [2024-09-29 16:45:28.310881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.041 qpair failed and we were unable to recover it. 00:37:28.041 [2024-09-29 16:45:28.311015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.041 [2024-09-29 16:45:28.311080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.041 qpair failed and we were unable to recover it. 00:37:28.041 [2024-09-29 16:45:28.311317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.041 [2024-09-29 16:45:28.311380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.041 qpair failed and we were unable to recover it. 
00:37:28.041 [2024-09-29 16:45:28.311570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.041 [2024-09-29 16:45:28.311604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.041 qpair failed and we were unable to recover it. 00:37:28.041 [2024-09-29 16:45:28.311743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.041 [2024-09-29 16:45:28.311778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.041 qpair failed and we were unable to recover it. 00:37:28.041 [2024-09-29 16:45:28.311953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.041 [2024-09-29 16:45:28.311997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.041 qpair failed and we were unable to recover it. 00:37:28.041 [2024-09-29 16:45:28.312181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.041 [2024-09-29 16:45:28.312214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.041 qpair failed and we were unable to recover it. 00:37:28.041 [2024-09-29 16:45:28.312378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.041 [2024-09-29 16:45:28.312411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.041 qpair failed and we were unable to recover it. 
00:37:28.041 [2024-09-29 16:45:28.312532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.041 [2024-09-29 16:45:28.312565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.041 qpair failed and we were unable to recover it. 00:37:28.041 [2024-09-29 16:45:28.312753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.041 [2024-09-29 16:45:28.312787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.041 qpair failed and we were unable to recover it. 00:37:28.041 [2024-09-29 16:45:28.312895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.041 [2024-09-29 16:45:28.312928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.041 qpair failed and we were unable to recover it. 00:37:28.041 [2024-09-29 16:45:28.313096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.041 [2024-09-29 16:45:28.313133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.041 qpair failed and we were unable to recover it. 00:37:28.041 [2024-09-29 16:45:28.313285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.041 [2024-09-29 16:45:28.313342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.041 qpair failed and we were unable to recover it. 
00:37:28.041 [2024-09-29 16:45:28.313511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.041 [2024-09-29 16:45:28.313544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.041 qpair failed and we were unable to recover it. 00:37:28.041 [2024-09-29 16:45:28.313653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.041 [2024-09-29 16:45:28.313694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.041 qpair failed and we were unable to recover it. 00:37:28.041 [2024-09-29 16:45:28.313814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.041 [2024-09-29 16:45:28.313847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.041 qpair failed and we were unable to recover it. 00:37:28.041 [2024-09-29 16:45:28.313990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.041 [2024-09-29 16:45:28.314028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.041 qpair failed and we were unable to recover it. 00:37:28.041 [2024-09-29 16:45:28.314233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.041 [2024-09-29 16:45:28.314270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.041 qpair failed and we were unable to recover it. 
00:37:28.041 [2024-09-29 16:45:28.314450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.041 [2024-09-29 16:45:28.314502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.041 qpair failed and we were unable to recover it. 00:37:28.041 [2024-09-29 16:45:28.314683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.041 [2024-09-29 16:45:28.314750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.041 qpair failed and we were unable to recover it. 00:37:28.041 [2024-09-29 16:45:28.314904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.041 [2024-09-29 16:45:28.314972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.041 qpair failed and we were unable to recover it. 00:37:28.041 [2024-09-29 16:45:28.315097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.041 [2024-09-29 16:45:28.315135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.041 qpair failed and we were unable to recover it. 00:37:28.041 [2024-09-29 16:45:28.315333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.041 [2024-09-29 16:45:28.315368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.041 qpair failed and we were unable to recover it. 
00:37:28.041 [2024-09-29 16:45:28.315510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.041 [2024-09-29 16:45:28.315543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.041 qpair failed and we were unable to recover it. 00:37:28.041 [2024-09-29 16:45:28.315667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.041 [2024-09-29 16:45:28.315709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.041 qpair failed and we were unable to recover it. 00:37:28.041 [2024-09-29 16:45:28.315882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.042 [2024-09-29 16:45:28.315920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.042 qpair failed and we were unable to recover it. 00:37:28.042 [2024-09-29 16:45:28.316077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.042 [2024-09-29 16:45:28.316114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.042 qpair failed and we were unable to recover it. 00:37:28.042 [2024-09-29 16:45:28.316245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.042 [2024-09-29 16:45:28.316282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.042 qpair failed and we were unable to recover it. 
00:37:28.042 [2024-09-29 16:45:28.316519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.042 [2024-09-29 16:45:28.316591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.042 qpair failed and we were unable to recover it. 00:37:28.042 [2024-09-29 16:45:28.316811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.042 [2024-09-29 16:45:28.316859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.042 qpair failed and we were unable to recover it. 00:37:28.042 [2024-09-29 16:45:28.317090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.042 [2024-09-29 16:45:28.317130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.042 qpair failed and we were unable to recover it. 00:37:28.042 [2024-09-29 16:45:28.317290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.042 [2024-09-29 16:45:28.317329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.042 qpair failed and we were unable to recover it. 00:37:28.042 [2024-09-29 16:45:28.317485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.042 [2024-09-29 16:45:28.317552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.042 qpair failed and we were unable to recover it. 
00:37:28.042 [2024-09-29 16:45:28.317722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.042 [2024-09-29 16:45:28.317758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.042 qpair failed and we were unable to recover it. 00:37:28.042 [2024-09-29 16:45:28.317901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.042 [2024-09-29 16:45:28.317936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.042 qpair failed and we were unable to recover it. 00:37:28.042 [2024-09-29 16:45:28.318060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.042 [2024-09-29 16:45:28.318113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.042 qpair failed and we were unable to recover it. 00:37:28.042 [2024-09-29 16:45:28.318275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.042 [2024-09-29 16:45:28.318325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.042 qpair failed and we were unable to recover it. 00:37:28.042 [2024-09-29 16:45:28.318462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.042 [2024-09-29 16:45:28.318523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.042 qpair failed and we were unable to recover it. 
00:37:28.042 [2024-09-29 16:45:28.318693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.042 [2024-09-29 16:45:28.318741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.042 qpair failed and we were unable to recover it. 00:37:28.042 [2024-09-29 16:45:28.318921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.042 [2024-09-29 16:45:28.318958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.042 qpair failed and we were unable to recover it. 00:37:28.042 [2024-09-29 16:45:28.319138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.042 [2024-09-29 16:45:28.319173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.042 qpair failed and we were unable to recover it. 00:37:28.042 [2024-09-29 16:45:28.319320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.042 [2024-09-29 16:45:28.319354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.042 qpair failed and we were unable to recover it. 00:37:28.042 [2024-09-29 16:45:28.319475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.042 [2024-09-29 16:45:28.319510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.042 qpair failed and we were unable to recover it. 
00:37:28.042 [2024-09-29 16:45:28.319652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.042 [2024-09-29 16:45:28.319693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.042 qpair failed and we were unable to recover it. 00:37:28.042 [2024-09-29 16:45:28.319845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.042 [2024-09-29 16:45:28.319885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.042 qpair failed and we were unable to recover it. 00:37:28.042 [2024-09-29 16:45:28.320034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.042 [2024-09-29 16:45:28.320096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.042 qpair failed and we were unable to recover it. 00:37:28.042 [2024-09-29 16:45:28.320291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.042 [2024-09-29 16:45:28.320342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.042 qpair failed and we were unable to recover it. 00:37:28.042 [2024-09-29 16:45:28.320490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.042 [2024-09-29 16:45:28.320524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.042 qpair failed and we were unable to recover it. 
00:37:28.042 [2024-09-29 16:45:28.320751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.042 [2024-09-29 16:45:28.320787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.042 qpair failed and we were unable to recover it. 00:37:28.042 [2024-09-29 16:45:28.320930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.042 [2024-09-29 16:45:28.320977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.042 qpair failed and we were unable to recover it. 00:37:28.042 [2024-09-29 16:45:28.321157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.042 [2024-09-29 16:45:28.321221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.042 qpair failed and we were unable to recover it. 00:37:28.042 [2024-09-29 16:45:28.321374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.042 [2024-09-29 16:45:28.321410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.042 qpair failed and we were unable to recover it. 00:37:28.042 [2024-09-29 16:45:28.321526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.042 [2024-09-29 16:45:28.321560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.042 qpair failed and we were unable to recover it. 
00:37:28.042 [2024-09-29 16:45:28.321684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.042 [2024-09-29 16:45:28.321719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.042 qpair failed and we were unable to recover it. 00:37:28.042 [2024-09-29 16:45:28.321900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.042 [2024-09-29 16:45:28.321936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.042 qpair failed and we were unable to recover it. 00:37:28.042 [2024-09-29 16:45:28.322108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.042 [2024-09-29 16:45:28.322142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.042 qpair failed and we were unable to recover it. 00:37:28.042 [2024-09-29 16:45:28.322286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.042 [2024-09-29 16:45:28.322319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.042 qpair failed and we were unable to recover it. 00:37:28.042 [2024-09-29 16:45:28.322457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.042 [2024-09-29 16:45:28.322490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.042 qpair failed and we were unable to recover it. 
00:37:28.042 [2024-09-29 16:45:28.322609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.042 [2024-09-29 16:45:28.322643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.042 qpair failed and we were unable to recover it. 00:37:28.042 [2024-09-29 16:45:28.322782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.042 [2024-09-29 16:45:28.322831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.042 qpair failed and we were unable to recover it. 00:37:28.042 [2024-09-29 16:45:28.323004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.042 [2024-09-29 16:45:28.323040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.042 qpair failed and we were unable to recover it. 00:37:28.042 [2024-09-29 16:45:28.323261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.042 [2024-09-29 16:45:28.323314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.042 qpair failed and we were unable to recover it. 00:37:28.042 [2024-09-29 16:45:28.323519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.042 [2024-09-29 16:45:28.323554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.042 qpair failed and we were unable to recover it. 
00:37:28.042 [2024-09-29 16:45:28.323667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.042 [2024-09-29 16:45:28.323708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.043 qpair failed and we were unable to recover it. 00:37:28.043 [2024-09-29 16:45:28.323833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.043 [2024-09-29 16:45:28.323871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.043 qpair failed and we were unable to recover it. 00:37:28.043 [2024-09-29 16:45:28.324030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.043 [2024-09-29 16:45:28.324077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.043 qpair failed and we were unable to recover it. 00:37:28.043 [2024-09-29 16:45:28.324199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.043 [2024-09-29 16:45:28.324235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.043 qpair failed and we were unable to recover it. 00:37:28.043 [2024-09-29 16:45:28.324381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.043 [2024-09-29 16:45:28.324414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.043 qpair failed and we were unable to recover it. 
00:37:28.043 [2024-09-29 16:45:28.324561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.043 [2024-09-29 16:45:28.324594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.043 qpair failed and we were unable to recover it. 00:37:28.043 [2024-09-29 16:45:28.324731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.043 [2024-09-29 16:45:28.324765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.043 qpair failed and we were unable to recover it. 00:37:28.043 [2024-09-29 16:45:28.324904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.043 [2024-09-29 16:45:28.324940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.043 qpair failed and we were unable to recover it. 00:37:28.043 [2024-09-29 16:45:28.325111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.043 [2024-09-29 16:45:28.325165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.043 qpair failed and we were unable to recover it. 00:37:28.043 [2024-09-29 16:45:28.325367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.043 [2024-09-29 16:45:28.325440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.043 qpair failed and we were unable to recover it. 
00:37:28.043 [2024-09-29 16:45:28.325590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.043 [2024-09-29 16:45:28.325623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.043 qpair failed and we were unable to recover it. 00:37:28.043 [2024-09-29 16:45:28.325750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.043 [2024-09-29 16:45:28.325784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.043 qpair failed and we were unable to recover it. 00:37:28.043 [2024-09-29 16:45:28.325927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.043 [2024-09-29 16:45:28.325964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.043 qpair failed and we were unable to recover it. 00:37:28.043 [2024-09-29 16:45:28.326088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.043 [2024-09-29 16:45:28.326124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.043 qpair failed and we were unable to recover it. 00:37:28.043 [2024-09-29 16:45:28.326297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.043 [2024-09-29 16:45:28.326348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.043 qpair failed and we were unable to recover it. 
00:37:28.043 [2024-09-29 16:45:28.326502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.043 [2024-09-29 16:45:28.326535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.043 qpair failed and we were unable to recover it.
00:37:28.043 [2024-09-29 16:45:28.326658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.043 [2024-09-29 16:45:28.326700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.043 qpair failed and we were unable to recover it.
00:37:28.043 [2024-09-29 16:45:28.326807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.043 [2024-09-29 16:45:28.326840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.043 qpair failed and we were unable to recover it.
00:37:28.043 [2024-09-29 16:45:28.326972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.043 [2024-09-29 16:45:28.327004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.043 qpair failed and we were unable to recover it.
00:37:28.043 [2024-09-29 16:45:28.327130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.043 [2024-09-29 16:45:28.327167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.043 qpair failed and we were unable to recover it.
00:37:28.043 [2024-09-29 16:45:28.327350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.043 [2024-09-29 16:45:28.327388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.043 qpair failed and we were unable to recover it.
00:37:28.043 [2024-09-29 16:45:28.327551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.043 [2024-09-29 16:45:28.327584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.043 qpair failed and we were unable to recover it.
00:37:28.043 [2024-09-29 16:45:28.327715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.043 [2024-09-29 16:45:28.327749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.043 qpair failed and we were unable to recover it.
00:37:28.043 [2024-09-29 16:45:28.327864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.043 [2024-09-29 16:45:28.327897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.043 qpair failed and we were unable to recover it.
00:37:28.043 [2024-09-29 16:45:28.328042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.043 [2024-09-29 16:45:28.328094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.043 qpair failed and we were unable to recover it.
00:37:28.043 [2024-09-29 16:45:28.328295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.043 [2024-09-29 16:45:28.328331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.043 qpair failed and we were unable to recover it.
00:37:28.043 [2024-09-29 16:45:28.328482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.043 [2024-09-29 16:45:28.328518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.043 qpair failed and we were unable to recover it.
00:37:28.043 [2024-09-29 16:45:28.328685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.043 [2024-09-29 16:45:28.328719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.043 qpair failed and we were unable to recover it.
00:37:28.043 [2024-09-29 16:45:28.328865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.043 [2024-09-29 16:45:28.328897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.043 qpair failed and we were unable to recover it.
00:37:28.043 [2024-09-29 16:45:28.329015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.043 [2024-09-29 16:45:28.329048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.043 qpair failed and we were unable to recover it.
00:37:28.043 [2024-09-29 16:45:28.329237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.043 [2024-09-29 16:45:28.329273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.043 qpair failed and we were unable to recover it.
00:37:28.043 [2024-09-29 16:45:28.329400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.043 [2024-09-29 16:45:28.329437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.043 qpair failed and we were unable to recover it.
00:37:28.043 [2024-09-29 16:45:28.329613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.043 [2024-09-29 16:45:28.329661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.043 qpair failed and we were unable to recover it.
00:37:28.043 [2024-09-29 16:45:28.329834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.043 [2024-09-29 16:45:28.329871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.043 qpair failed and we were unable to recover it.
00:37:28.043 [2024-09-29 16:45:28.330006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.043 [2024-09-29 16:45:28.330045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.043 qpair failed and we were unable to recover it.
00:37:28.043 [2024-09-29 16:45:28.330222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.043 [2024-09-29 16:45:28.330274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.043 qpair failed and we were unable to recover it.
00:37:28.043 [2024-09-29 16:45:28.330464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.043 [2024-09-29 16:45:28.330517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.043 qpair failed and we were unable to recover it.
00:37:28.043 [2024-09-29 16:45:28.330685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.043 [2024-09-29 16:45:28.330734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.043 qpair failed and we were unable to recover it.
00:37:28.043 [2024-09-29 16:45:28.330868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.043 [2024-09-29 16:45:28.330902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.043 qpair failed and we were unable to recover it.
00:37:28.043 [2024-09-29 16:45:28.331079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.044 [2024-09-29 16:45:28.331112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.044 qpair failed and we were unable to recover it.
00:37:28.044 [2024-09-29 16:45:28.331278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.044 [2024-09-29 16:45:28.331310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.044 qpair failed and we were unable to recover it.
00:37:28.044 [2024-09-29 16:45:28.331427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.044 [2024-09-29 16:45:28.331461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.044 qpair failed and we were unable to recover it.
00:37:28.044 [2024-09-29 16:45:28.331613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.044 [2024-09-29 16:45:28.331649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.044 qpair failed and we were unable to recover it.
00:37:28.044 [2024-09-29 16:45:28.331802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.044 [2024-09-29 16:45:28.331837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.044 qpair failed and we were unable to recover it.
00:37:28.044 [2024-09-29 16:45:28.331999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.044 [2024-09-29 16:45:28.332050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.044 qpair failed and we were unable to recover it.
00:37:28.044 [2024-09-29 16:45:28.332224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.044 [2024-09-29 16:45:28.332257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.044 qpair failed and we were unable to recover it.
00:37:28.044 [2024-09-29 16:45:28.332410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.044 [2024-09-29 16:45:28.332444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.044 qpair failed and we were unable to recover it.
00:37:28.044 [2024-09-29 16:45:28.332601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.044 [2024-09-29 16:45:28.332649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.044 qpair failed and we were unable to recover it.
00:37:28.044 [2024-09-29 16:45:28.332833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.044 [2024-09-29 16:45:28.332880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.044 qpair failed and we were unable to recover it.
00:37:28.044 [2024-09-29 16:45:28.333067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.044 [2024-09-29 16:45:28.333126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.044 qpair failed and we were unable to recover it.
00:37:28.044 [2024-09-29 16:45:28.333363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.044 [2024-09-29 16:45:28.333427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.044 qpair failed and we were unable to recover it.
00:37:28.044 [2024-09-29 16:45:28.333613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.044 [2024-09-29 16:45:28.333650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.044 qpair failed and we were unable to recover it.
00:37:28.044 [2024-09-29 16:45:28.333813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.044 [2024-09-29 16:45:28.333847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.044 qpair failed and we were unable to recover it.
00:37:28.044 [2024-09-29 16:45:28.333980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.044 [2024-09-29 16:45:28.334028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.044 qpair failed and we were unable to recover it.
00:37:28.044 [2024-09-29 16:45:28.334155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.044 [2024-09-29 16:45:28.334190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.044 qpair failed and we were unable to recover it.
00:37:28.044 [2024-09-29 16:45:28.334305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.044 [2024-09-29 16:45:28.334339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.044 qpair failed and we were unable to recover it.
00:37:28.044 [2024-09-29 16:45:28.334463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.044 [2024-09-29 16:45:28.334499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.044 qpair failed and we were unable to recover it.
00:37:28.044 [2024-09-29 16:45:28.334632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.044 [2024-09-29 16:45:28.334687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.044 qpair failed and we were unable to recover it.
00:37:28.044 [2024-09-29 16:45:28.334831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.044 [2024-09-29 16:45:28.334879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.044 qpair failed and we were unable to recover it.
00:37:28.044 [2024-09-29 16:45:28.335031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.044 [2024-09-29 16:45:28.335066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.044 qpair failed and we were unable to recover it.
00:37:28.044 [2024-09-29 16:45:28.335241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.044 [2024-09-29 16:45:28.335274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.044 qpair failed and we were unable to recover it.
00:37:28.044 [2024-09-29 16:45:28.335410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.044 [2024-09-29 16:45:28.335444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.044 qpair failed and we were unable to recover it.
00:37:28.044 [2024-09-29 16:45:28.335553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.044 [2024-09-29 16:45:28.335587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.044 qpair failed and we were unable to recover it.
00:37:28.044 [2024-09-29 16:45:28.335750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.044 [2024-09-29 16:45:28.335787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.044 qpair failed and we were unable to recover it.
00:37:28.044 [2024-09-29 16:45:28.336008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.044 [2024-09-29 16:45:28.336061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.044 qpair failed and we were unable to recover it.
00:37:28.044 [2024-09-29 16:45:28.336274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.044 [2024-09-29 16:45:28.336329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.044 qpair failed and we were unable to recover it.
00:37:28.044 [2024-09-29 16:45:28.336527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.044 [2024-09-29 16:45:28.336567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.044 qpair failed and we were unable to recover it.
00:37:28.044 [2024-09-29 16:45:28.336731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.044 [2024-09-29 16:45:28.336767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.044 qpair failed and we were unable to recover it.
00:37:28.044 [2024-09-29 16:45:28.336922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.044 [2024-09-29 16:45:28.336956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.044 qpair failed and we were unable to recover it.
00:37:28.044 [2024-09-29 16:45:28.337096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.044 [2024-09-29 16:45:28.337130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.044 qpair failed and we were unable to recover it.
00:37:28.044 [2024-09-29 16:45:28.337262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.044 [2024-09-29 16:45:28.337321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.044 qpair failed and we were unable to recover it.
00:37:28.044 [2024-09-29 16:45:28.337491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.044 [2024-09-29 16:45:28.337524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.044 qpair failed and we were unable to recover it.
00:37:28.044 [2024-09-29 16:45:28.337642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.044 [2024-09-29 16:45:28.337684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.044 qpair failed and we were unable to recover it.
00:37:28.044 [2024-09-29 16:45:28.337834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.044 [2024-09-29 16:45:28.337869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.044 qpair failed and we were unable to recover it.
00:37:28.044 [2024-09-29 16:45:28.338008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.044 [2024-09-29 16:45:28.338041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.044 qpair failed and we were unable to recover it.
00:37:28.044 [2024-09-29 16:45:28.338181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.044 [2024-09-29 16:45:28.338214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.044 qpair failed and we were unable to recover it.
00:37:28.044 [2024-09-29 16:45:28.338362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.044 [2024-09-29 16:45:28.338397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.044 qpair failed and we were unable to recover it.
00:37:28.044 [2024-09-29 16:45:28.338504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.044 [2024-09-29 16:45:28.338537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.045 qpair failed and we were unable to recover it.
00:37:28.045 [2024-09-29 16:45:28.338701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.045 [2024-09-29 16:45:28.338748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.045 qpair failed and we were unable to recover it.
00:37:28.045 [2024-09-29 16:45:28.338879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.045 [2024-09-29 16:45:28.338914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.045 qpair failed and we were unable to recover it.
00:37:28.045 [2024-09-29 16:45:28.339072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.045 [2024-09-29 16:45:28.339124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.045 qpair failed and we were unable to recover it.
00:37:28.045 [2024-09-29 16:45:28.339257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.045 [2024-09-29 16:45:28.339307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.045 qpair failed and we were unable to recover it.
00:37:28.045 [2024-09-29 16:45:28.339415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.045 [2024-09-29 16:45:28.339448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.045 qpair failed and we were unable to recover it.
00:37:28.045 [2024-09-29 16:45:28.339622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.045 [2024-09-29 16:45:28.339656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.045 qpair failed and we were unable to recover it.
00:37:28.045 [2024-09-29 16:45:28.339785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.045 [2024-09-29 16:45:28.339817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.045 qpair failed and we were unable to recover it.
00:37:28.045 [2024-09-29 16:45:28.339957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.045 [2024-09-29 16:45:28.339989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.045 qpair failed and we were unable to recover it.
00:37:28.045 [2024-09-29 16:45:28.340096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.045 [2024-09-29 16:45:28.340129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.045 qpair failed and we were unable to recover it.
00:37:28.045 [2024-09-29 16:45:28.340247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.045 [2024-09-29 16:45:28.340280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.045 qpair failed and we were unable to recover it.
00:37:28.045 [2024-09-29 16:45:28.340399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.045 [2024-09-29 16:45:28.340434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.045 qpair failed and we were unable to recover it.
00:37:28.045 [2024-09-29 16:45:28.340600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.045 [2024-09-29 16:45:28.340637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.045 qpair failed and we were unable to recover it.
00:37:28.045 [2024-09-29 16:45:28.340756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.045 [2024-09-29 16:45:28.340789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.045 qpair failed and we were unable to recover it.
00:37:28.045 [2024-09-29 16:45:28.340932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.045 [2024-09-29 16:45:28.340971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.045 qpair failed and we were unable to recover it.
00:37:28.045 [2024-09-29 16:45:28.341114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.045 [2024-09-29 16:45:28.341147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.045 qpair failed and we were unable to recover it.
00:37:28.045 [2024-09-29 16:45:28.341254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.045 [2024-09-29 16:45:28.341288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.045 qpair failed and we were unable to recover it.
00:37:28.045 [2024-09-29 16:45:28.341404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.045 [2024-09-29 16:45:28.341438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.045 qpair failed and we were unable to recover it.
00:37:28.045 [2024-09-29 16:45:28.341564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.045 [2024-09-29 16:45:28.341597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.045 qpair failed and we were unable to recover it.
00:37:28.045 [2024-09-29 16:45:28.341775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.045 [2024-09-29 16:45:28.341823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.045 qpair failed and we were unable to recover it.
00:37:28.045 [2024-09-29 16:45:28.341944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.045 [2024-09-29 16:45:28.341982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.045 qpair failed and we were unable to recover it.
00:37:28.045 [2024-09-29 16:45:28.342097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.045 [2024-09-29 16:45:28.342131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.045 qpair failed and we were unable to recover it.
00:37:28.045 [2024-09-29 16:45:28.342273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.045 [2024-09-29 16:45:28.342307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.045 qpair failed and we were unable to recover it.
00:37:28.045 [2024-09-29 16:45:28.342454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.045 [2024-09-29 16:45:28.342488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.045 qpair failed and we were unable to recover it.
00:37:28.045 [2024-09-29 16:45:28.342604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.045 [2024-09-29 16:45:28.342639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.045 qpair failed and we were unable to recover it.
00:37:28.045 [2024-09-29 16:45:28.342820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.045 [2024-09-29 16:45:28.342855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.045 qpair failed and we were unable to recover it.
00:37:28.045 [2024-09-29 16:45:28.343003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.045 [2024-09-29 16:45:28.343070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.045 qpair failed and we were unable to recover it.
00:37:28.045 [2024-09-29 16:45:28.343209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.045 [2024-09-29 16:45:28.343249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.045 qpair failed and we were unable to recover it.
00:37:28.045 [2024-09-29 16:45:28.343370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.045 [2024-09-29 16:45:28.343408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.045 qpair failed and we were unable to recover it.
00:37:28.045 [2024-09-29 16:45:28.343563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.045 [2024-09-29 16:45:28.343597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.045 qpair failed and we were unable to recover it.
00:37:28.045 [2024-09-29 16:45:28.343768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.045 [2024-09-29 16:45:28.343815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.045 qpair failed and we were unable to recover it.
00:37:28.045 [2024-09-29 16:45:28.343942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.045 [2024-09-29 16:45:28.343978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.045 qpair failed and we were unable to recover it.
00:37:28.045 [2024-09-29 16:45:28.344147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.045 [2024-09-29 16:45:28.344180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.045 qpair failed and we were unable to recover it.
00:37:28.046 [2024-09-29 16:45:28.344373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.046 [2024-09-29 16:45:28.344430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.046 qpair failed and we were unable to recover it. 00:37:28.046 [2024-09-29 16:45:28.344583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.046 [2024-09-29 16:45:28.344635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.046 qpair failed and we were unable to recover it. 00:37:28.046 [2024-09-29 16:45:28.344802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.046 [2024-09-29 16:45:28.344850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.046 qpair failed and we were unable to recover it. 00:37:28.046 [2024-09-29 16:45:28.345073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.046 [2024-09-29 16:45:28.345144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.046 qpair failed and we were unable to recover it. 00:37:28.046 [2024-09-29 16:45:28.345264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.046 [2024-09-29 16:45:28.345316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.046 qpair failed and we were unable to recover it. 
00:37:28.046 [2024-09-29 16:45:28.345455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.046 [2024-09-29 16:45:28.345489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.046 qpair failed and we were unable to recover it. 00:37:28.046 [2024-09-29 16:45:28.345638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.046 [2024-09-29 16:45:28.345681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.046 qpair failed and we were unable to recover it. 00:37:28.046 [2024-09-29 16:45:28.345824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.046 [2024-09-29 16:45:28.345871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.046 qpair failed and we were unable to recover it. 00:37:28.046 [2024-09-29 16:45:28.346020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.046 [2024-09-29 16:45:28.346055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.046 qpair failed and we were unable to recover it. 00:37:28.046 [2024-09-29 16:45:28.346211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.046 [2024-09-29 16:45:28.346245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.046 qpair failed and we were unable to recover it. 
00:37:28.046 [2024-09-29 16:45:28.346429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.046 [2024-09-29 16:45:28.346464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.046 qpair failed and we were unable to recover it. 00:37:28.046 [2024-09-29 16:45:28.346620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.046 [2024-09-29 16:45:28.346693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.046 qpair failed and we were unable to recover it. 00:37:28.046 [2024-09-29 16:45:28.346849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.046 [2024-09-29 16:45:28.346886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.046 qpair failed and we were unable to recover it. 00:37:28.046 [2024-09-29 16:45:28.347057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.046 [2024-09-29 16:45:28.347122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.046 qpair failed and we were unable to recover it. 00:37:28.046 [2024-09-29 16:45:28.347296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.046 [2024-09-29 16:45:28.347355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.046 qpair failed and we were unable to recover it. 
00:37:28.046 [2024-09-29 16:45:28.347544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.046 [2024-09-29 16:45:28.347579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.046 qpair failed and we were unable to recover it. 00:37:28.046 [2024-09-29 16:45:28.347721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.046 [2024-09-29 16:45:28.347755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.046 qpair failed and we were unable to recover it. 00:37:28.046 [2024-09-29 16:45:28.347943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.046 [2024-09-29 16:45:28.347991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.046 qpair failed and we were unable to recover it. 00:37:28.046 [2024-09-29 16:45:28.348140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.046 [2024-09-29 16:45:28.348175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.046 qpair failed and we were unable to recover it. 00:37:28.046 [2024-09-29 16:45:28.348323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.046 [2024-09-29 16:45:28.348362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.046 qpair failed and we were unable to recover it. 
00:37:28.046 [2024-09-29 16:45:28.348511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.046 [2024-09-29 16:45:28.348545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.046 qpair failed and we were unable to recover it. 00:37:28.046 [2024-09-29 16:45:28.348711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.046 [2024-09-29 16:45:28.348744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.046 qpair failed and we were unable to recover it. 00:37:28.046 [2024-09-29 16:45:28.348882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.046 [2024-09-29 16:45:28.348930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.046 qpair failed and we were unable to recover it. 00:37:28.046 [2024-09-29 16:45:28.349091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.046 [2024-09-29 16:45:28.349127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.046 qpair failed and we were unable to recover it. 00:37:28.046 [2024-09-29 16:45:28.349287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.046 [2024-09-29 16:45:28.349372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.046 qpair failed and we were unable to recover it. 
00:37:28.046 [2024-09-29 16:45:28.349519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.046 [2024-09-29 16:45:28.349553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.046 qpair failed and we were unable to recover it. 00:37:28.046 [2024-09-29 16:45:28.349698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.046 [2024-09-29 16:45:28.349731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.046 qpair failed and we were unable to recover it. 00:37:28.046 [2024-09-29 16:45:28.349865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.046 [2024-09-29 16:45:28.349902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.046 qpair failed and we were unable to recover it. 00:37:28.046 [2024-09-29 16:45:28.350063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.046 [2024-09-29 16:45:28.350097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.046 qpair failed and we were unable to recover it. 00:37:28.046 [2024-09-29 16:45:28.350237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.046 [2024-09-29 16:45:28.350270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.046 qpair failed and we were unable to recover it. 
00:37:28.046 [2024-09-29 16:45:28.350413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.046 [2024-09-29 16:45:28.350447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.046 qpair failed and we were unable to recover it. 00:37:28.046 [2024-09-29 16:45:28.350569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.046 [2024-09-29 16:45:28.350603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.046 qpair failed and we were unable to recover it. 00:37:28.046 [2024-09-29 16:45:28.350775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.046 [2024-09-29 16:45:28.350809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.046 qpair failed and we were unable to recover it. 00:37:28.046 [2024-09-29 16:45:28.350918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.046 [2024-09-29 16:45:28.350951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.046 qpair failed and we were unable to recover it. 00:37:28.046 [2024-09-29 16:45:28.351115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.046 [2024-09-29 16:45:28.351151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.046 qpair failed and we were unable to recover it. 
00:37:28.046 [2024-09-29 16:45:28.351289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.046 [2024-09-29 16:45:28.351341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.046 qpair failed and we were unable to recover it. 00:37:28.046 [2024-09-29 16:45:28.351495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.046 [2024-09-29 16:45:28.351531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.046 qpair failed and we were unable to recover it. 00:37:28.046 [2024-09-29 16:45:28.351668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.046 [2024-09-29 16:45:28.351710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.046 qpair failed and we were unable to recover it. 00:37:28.046 [2024-09-29 16:45:28.351898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.047 [2024-09-29 16:45:28.351946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.047 qpair failed and we were unable to recover it. 00:37:28.047 [2024-09-29 16:45:28.352093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.047 [2024-09-29 16:45:28.352129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.047 qpair failed and we were unable to recover it. 
00:37:28.047 [2024-09-29 16:45:28.352286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.047 [2024-09-29 16:45:28.352322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.047 qpair failed and we were unable to recover it. 00:37:28.047 [2024-09-29 16:45:28.352475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.047 [2024-09-29 16:45:28.352530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.047 qpair failed and we were unable to recover it. 00:37:28.047 [2024-09-29 16:45:28.352697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.047 [2024-09-29 16:45:28.352732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.047 qpair failed and we were unable to recover it. 00:37:28.047 [2024-09-29 16:45:28.352860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.047 [2024-09-29 16:45:28.352914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.047 qpair failed and we were unable to recover it. 00:37:28.047 [2024-09-29 16:45:28.353066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.047 [2024-09-29 16:45:28.353128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.047 qpair failed and we were unable to recover it. 
00:37:28.047 [2024-09-29 16:45:28.353303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.047 [2024-09-29 16:45:28.353357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.047 qpair failed and we were unable to recover it. 00:37:28.047 [2024-09-29 16:45:28.353518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.047 [2024-09-29 16:45:28.353554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.047 qpair failed and we were unable to recover it. 00:37:28.047 [2024-09-29 16:45:28.353686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.047 [2024-09-29 16:45:28.353721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.047 qpair failed and we were unable to recover it. 00:37:28.047 [2024-09-29 16:45:28.353866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.047 [2024-09-29 16:45:28.353899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.047 qpair failed and we were unable to recover it. 00:37:28.047 [2024-09-29 16:45:28.354040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.047 [2024-09-29 16:45:28.354073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.047 qpair failed and we were unable to recover it. 
00:37:28.047 [2024-09-29 16:45:28.354219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.047 [2024-09-29 16:45:28.354269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.047 qpair failed and we were unable to recover it. 00:37:28.047 [2024-09-29 16:45:28.354410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.047 [2024-09-29 16:45:28.354442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.047 qpair failed and we were unable to recover it. 00:37:28.047 [2024-09-29 16:45:28.354575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.047 [2024-09-29 16:45:28.354612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.047 qpair failed and we were unable to recover it. 00:37:28.047 [2024-09-29 16:45:28.354761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.047 [2024-09-29 16:45:28.354797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.047 qpair failed and we were unable to recover it. 00:37:28.047 [2024-09-29 16:45:28.354957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.047 [2024-09-29 16:45:28.355009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.047 qpair failed and we were unable to recover it. 
00:37:28.047 [2024-09-29 16:45:28.355143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.047 [2024-09-29 16:45:28.355179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.047 qpair failed and we were unable to recover it. 00:37:28.047 [2024-09-29 16:45:28.355353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.047 [2024-09-29 16:45:28.355409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.047 qpair failed and we were unable to recover it. 00:37:28.047 [2024-09-29 16:45:28.355593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.047 [2024-09-29 16:45:28.355628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.047 qpair failed and we were unable to recover it. 00:37:28.047 [2024-09-29 16:45:28.355777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.047 [2024-09-29 16:45:28.355811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.047 qpair failed and we were unable to recover it. 00:37:28.047 [2024-09-29 16:45:28.355932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.047 [2024-09-29 16:45:28.355978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.047 qpair failed and we were unable to recover it. 
00:37:28.047 [2024-09-29 16:45:28.356122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.047 [2024-09-29 16:45:28.356159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.047 qpair failed and we were unable to recover it. 00:37:28.047 [2024-09-29 16:45:28.356322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.047 [2024-09-29 16:45:28.356374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.047 qpair failed and we were unable to recover it. 00:37:28.047 [2024-09-29 16:45:28.356532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.047 [2024-09-29 16:45:28.356565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.047 qpair failed and we were unable to recover it. 00:37:28.047 [2024-09-29 16:45:28.356732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.047 [2024-09-29 16:45:28.356767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.047 qpair failed and we were unable to recover it. 00:37:28.047 [2024-09-29 16:45:28.356910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.047 [2024-09-29 16:45:28.356942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.047 qpair failed and we were unable to recover it. 
00:37:28.047 [2024-09-29 16:45:28.357122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.047 [2024-09-29 16:45:28.357173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.047 qpair failed and we were unable to recover it. 00:37:28.047 [2024-09-29 16:45:28.357311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.047 [2024-09-29 16:45:28.357344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.047 qpair failed and we were unable to recover it. 00:37:28.047 [2024-09-29 16:45:28.357510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.047 [2024-09-29 16:45:28.357547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.047 qpair failed and we were unable to recover it. 00:37:28.047 [2024-09-29 16:45:28.357724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.047 [2024-09-29 16:45:28.357759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.047 qpair failed and we were unable to recover it. 00:37:28.047 [2024-09-29 16:45:28.357900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.047 [2024-09-29 16:45:28.357933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.047 qpair failed and we were unable to recover it. 
00:37:28.047 [2024-09-29 16:45:28.358129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.047 [2024-09-29 16:45:28.358164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.047 qpair failed and we were unable to recover it. 00:37:28.047 [2024-09-29 16:45:28.358302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.047 [2024-09-29 16:45:28.358335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.047 qpair failed and we were unable to recover it. 00:37:28.047 [2024-09-29 16:45:28.358458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.047 [2024-09-29 16:45:28.358491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.047 qpair failed and we were unable to recover it. 00:37:28.047 [2024-09-29 16:45:28.358703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.047 [2024-09-29 16:45:28.358736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.047 qpair failed and we were unable to recover it. 00:37:28.047 [2024-09-29 16:45:28.358892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.047 [2024-09-29 16:45:28.358925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.047 qpair failed and we were unable to recover it. 
00:37:28.047 [2024-09-29 16:45:28.359036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.047 [2024-09-29 16:45:28.359069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.047 qpair failed and we were unable to recover it. 00:37:28.047 [2024-09-29 16:45:28.359253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.047 [2024-09-29 16:45:28.359287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.047 qpair failed and we were unable to recover it. 00:37:28.048 [2024-09-29 16:45:28.359437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.048 [2024-09-29 16:45:28.359469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.048 qpair failed and we were unable to recover it. 00:37:28.048 [2024-09-29 16:45:28.359647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.048 [2024-09-29 16:45:28.359687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.048 qpair failed and we were unable to recover it. 00:37:28.048 [2024-09-29 16:45:28.359800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.048 [2024-09-29 16:45:28.359833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.048 qpair failed and we were unable to recover it. 
00:37:28.048 [2024-09-29 16:45:28.360005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.048 [2024-09-29 16:45:28.360038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.048 qpair failed and we were unable to recover it. 00:37:28.048 [2024-09-29 16:45:28.360172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.048 [2024-09-29 16:45:28.360208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.048 qpair failed and we were unable to recover it. 00:37:28.048 [2024-09-29 16:45:28.360384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.048 [2024-09-29 16:45:28.360433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.048 qpair failed and we were unable to recover it. 00:37:28.048 [2024-09-29 16:45:28.360577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.048 [2024-09-29 16:45:28.360610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.048 qpair failed and we were unable to recover it. 00:37:28.048 [2024-09-29 16:45:28.360797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.048 [2024-09-29 16:45:28.360830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.048 qpair failed and we were unable to recover it. 
00:37:28.048 [2024-09-29 16:45:28.360945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.048 [2024-09-29 16:45:28.360979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.048 qpair failed and we were unable to recover it. 00:37:28.048 [2024-09-29 16:45:28.361119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.048 [2024-09-29 16:45:28.361152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.048 qpair failed and we were unable to recover it. 00:37:28.048 [2024-09-29 16:45:28.361292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.048 [2024-09-29 16:45:28.361326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.048 qpair failed and we were unable to recover it. 00:37:28.048 [2024-09-29 16:45:28.361489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.048 [2024-09-29 16:45:28.361538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.048 qpair failed and we were unable to recover it. 00:37:28.048 [2024-09-29 16:45:28.361715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.048 [2024-09-29 16:45:28.361749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.048 qpair failed and we were unable to recover it. 
00:37:28.048 [2024-09-29 16:45:28.361867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.048 [2024-09-29 16:45:28.361900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.048 qpair failed and we were unable to recover it. 00:37:28.048 [2024-09-29 16:45:28.362081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.048 [2024-09-29 16:45:28.362114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.048 qpair failed and we were unable to recover it. 00:37:28.048 [2024-09-29 16:45:28.362266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.048 [2024-09-29 16:45:28.362298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.048 qpair failed and we were unable to recover it. 00:37:28.048 [2024-09-29 16:45:28.362468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.048 [2024-09-29 16:45:28.362505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.048 qpair failed and we were unable to recover it. 00:37:28.048 [2024-09-29 16:45:28.362661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.048 [2024-09-29 16:45:28.362722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.048 qpair failed and we were unable to recover it. 
00:37:28.048 [2024-09-29 16:45:28.362887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.048 [2024-09-29 16:45:28.362919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.048 qpair failed and we were unable to recover it. 00:37:28.048 [2024-09-29 16:45:28.363080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.048 [2024-09-29 16:45:28.363116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.048 qpair failed and we were unable to recover it. 00:37:28.048 [2024-09-29 16:45:28.363274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.048 [2024-09-29 16:45:28.363311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.048 qpair failed and we were unable to recover it. 00:37:28.048 [2024-09-29 16:45:28.363540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.048 [2024-09-29 16:45:28.363572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.048 qpair failed and we were unable to recover it. 00:37:28.048 [2024-09-29 16:45:28.363695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.048 [2024-09-29 16:45:28.363728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.048 qpair failed and we were unable to recover it. 
00:37:28.048 [2024-09-29 16:45:28.363877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.048 [2024-09-29 16:45:28.363910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.048 qpair failed and we were unable to recover it. 00:37:28.048 [2024-09-29 16:45:28.364020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.048 [2024-09-29 16:45:28.364053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.048 qpair failed and we were unable to recover it. 00:37:28.048 [2024-09-29 16:45:28.364164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.048 [2024-09-29 16:45:28.364198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.048 qpair failed and we were unable to recover it. 00:37:28.048 [2024-09-29 16:45:28.364364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.048 [2024-09-29 16:45:28.364397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.048 qpair failed and we were unable to recover it. 00:37:28.048 [2024-09-29 16:45:28.364605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.048 [2024-09-29 16:45:28.364639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.048 qpair failed and we were unable to recover it. 
00:37:28.048 [2024-09-29 16:45:28.364784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.048 [2024-09-29 16:45:28.364817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.048 qpair failed and we were unable to recover it. 00:37:28.048 [2024-09-29 16:45:28.364923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.048 [2024-09-29 16:45:28.364955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.048 qpair failed and we were unable to recover it. 00:37:28.048 [2024-09-29 16:45:28.365069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.048 [2024-09-29 16:45:28.365102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.048 qpair failed and we were unable to recover it. 00:37:28.048 [2024-09-29 16:45:28.365283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.048 [2024-09-29 16:45:28.365334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.048 qpair failed and we were unable to recover it. 00:37:28.048 [2024-09-29 16:45:28.365454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.048 [2024-09-29 16:45:28.365486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.048 qpair failed and we were unable to recover it. 
00:37:28.048 [2024-09-29 16:45:28.365629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.048 [2024-09-29 16:45:28.365662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.048 qpair failed and we were unable to recover it. 00:37:28.048 [2024-09-29 16:45:28.365810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.048 [2024-09-29 16:45:28.365843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.048 qpair failed and we were unable to recover it. 00:37:28.048 [2024-09-29 16:45:28.366013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.048 [2024-09-29 16:45:28.366061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.048 qpair failed and we were unable to recover it. 00:37:28.048 [2024-09-29 16:45:28.366207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.048 [2024-09-29 16:45:28.366241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.048 qpair failed and we were unable to recover it. 00:37:28.048 [2024-09-29 16:45:28.366353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.048 [2024-09-29 16:45:28.366385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.048 qpair failed and we were unable to recover it. 
00:37:28.049 [2024-09-29 16:45:28.366525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.049 [2024-09-29 16:45:28.366558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.049 qpair failed and we were unable to recover it. 00:37:28.049 [2024-09-29 16:45:28.366785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.049 [2024-09-29 16:45:28.366833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.049 qpair failed and we were unable to recover it. 00:37:28.049 [2024-09-29 16:45:28.366986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.049 [2024-09-29 16:45:28.367023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.049 qpair failed and we were unable to recover it. 00:37:28.049 [2024-09-29 16:45:28.367174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.049 [2024-09-29 16:45:28.367209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.049 qpair failed and we were unable to recover it. 00:37:28.049 [2024-09-29 16:45:28.367366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.049 [2024-09-29 16:45:28.367400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.049 qpair failed and we were unable to recover it. 
00:37:28.049 [2024-09-29 16:45:28.367542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.049 [2024-09-29 16:45:28.367577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.049 qpair failed and we were unable to recover it. 00:37:28.049 [2024-09-29 16:45:28.367744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.049 [2024-09-29 16:45:28.367779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.049 qpair failed and we were unable to recover it. 00:37:28.049 [2024-09-29 16:45:28.367902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.049 [2024-09-29 16:45:28.367936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.049 qpair failed and we were unable to recover it. 00:37:28.049 [2024-09-29 16:45:28.368083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.049 [2024-09-29 16:45:28.368116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.049 qpair failed and we were unable to recover it. 00:37:28.049 [2024-09-29 16:45:28.368221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.049 [2024-09-29 16:45:28.368254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.049 qpair failed and we were unable to recover it. 
00:37:28.049 [2024-09-29 16:45:28.368393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.049 [2024-09-29 16:45:28.368427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.049 qpair failed and we were unable to recover it. 00:37:28.049 [2024-09-29 16:45:28.368570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.049 [2024-09-29 16:45:28.368607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.049 qpair failed and we were unable to recover it. 00:37:28.049 [2024-09-29 16:45:28.368728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.049 [2024-09-29 16:45:28.368762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.049 qpair failed and we were unable to recover it. 00:37:28.049 [2024-09-29 16:45:28.368915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.049 [2024-09-29 16:45:28.368951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.049 qpair failed and we were unable to recover it. 00:37:28.049 [2024-09-29 16:45:28.369132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.049 [2024-09-29 16:45:28.369168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.049 qpair failed and we were unable to recover it. 
00:37:28.049 [2024-09-29 16:45:28.369328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.049 [2024-09-29 16:45:28.369365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.049 qpair failed and we were unable to recover it. 00:37:28.049 [2024-09-29 16:45:28.369506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.049 [2024-09-29 16:45:28.369542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.049 qpair failed and we were unable to recover it. 00:37:28.049 [2024-09-29 16:45:28.369688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.049 [2024-09-29 16:45:28.369722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.049 qpair failed and we were unable to recover it. 00:37:28.049 [2024-09-29 16:45:28.369864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.049 [2024-09-29 16:45:28.369897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.049 qpair failed and we were unable to recover it. 00:37:28.049 [2024-09-29 16:45:28.370061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.049 [2024-09-29 16:45:28.370113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.049 qpair failed and we were unable to recover it. 
00:37:28.049 [2024-09-29 16:45:28.370294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.049 [2024-09-29 16:45:28.370346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.049 qpair failed and we were unable to recover it. 00:37:28.049 [2024-09-29 16:45:28.370461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.049 [2024-09-29 16:45:28.370495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.049 qpair failed and we were unable to recover it. 00:37:28.049 [2024-09-29 16:45:28.370638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.049 [2024-09-29 16:45:28.370680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.049 qpair failed and we were unable to recover it. 00:37:28.049 [2024-09-29 16:45:28.370824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.049 [2024-09-29 16:45:28.370861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.049 qpair failed and we were unable to recover it. 00:37:28.049 [2024-09-29 16:45:28.371019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.049 [2024-09-29 16:45:28.371052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.049 qpair failed and we were unable to recover it. 
00:37:28.049 [2024-09-29 16:45:28.371167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.049 [2024-09-29 16:45:28.371201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.049 qpair failed and we were unable to recover it. 00:37:28.049 [2024-09-29 16:45:28.371333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.049 [2024-09-29 16:45:28.371369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.049 qpair failed and we were unable to recover it. 00:37:28.049 [2024-09-29 16:45:28.371516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.049 [2024-09-29 16:45:28.371552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.049 qpair failed and we were unable to recover it. 00:37:28.049 [2024-09-29 16:45:28.371723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.049 [2024-09-29 16:45:28.371758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.049 qpair failed and we were unable to recover it. 00:37:28.049 [2024-09-29 16:45:28.371885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.049 [2024-09-29 16:45:28.371924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.049 qpair failed and we were unable to recover it. 
00:37:28.049 [2024-09-29 16:45:28.372134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.049 [2024-09-29 16:45:28.372186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.049 qpair failed and we were unable to recover it. 00:37:28.049 [2024-09-29 16:45:28.372391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.049 [2024-09-29 16:45:28.372446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.049 qpair failed and we were unable to recover it. 00:37:28.049 [2024-09-29 16:45:28.372560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.049 [2024-09-29 16:45:28.372593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.049 qpair failed and we were unable to recover it. 00:37:28.049 [2024-09-29 16:45:28.372744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.049 [2024-09-29 16:45:28.372797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.049 qpair failed and we were unable to recover it. 00:37:28.050 [2024-09-29 16:45:28.372961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.050 [2024-09-29 16:45:28.372999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.050 qpair failed and we were unable to recover it. 
00:37:28.050 [2024-09-29 16:45:28.373138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.050 [2024-09-29 16:45:28.373176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.050 qpair failed and we were unable to recover it. 00:37:28.050 [2024-09-29 16:45:28.373338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.050 [2024-09-29 16:45:28.373372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.050 qpair failed and we were unable to recover it. 00:37:28.050 [2024-09-29 16:45:28.373549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.050 [2024-09-29 16:45:28.373582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.050 qpair failed and we were unable to recover it. 00:37:28.050 [2024-09-29 16:45:28.373738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.050 [2024-09-29 16:45:28.373771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.050 qpair failed and we were unable to recover it. 00:37:28.050 [2024-09-29 16:45:28.373908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.050 [2024-09-29 16:45:28.373942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.050 qpair failed and we were unable to recover it. 
00:37:28.050 [2024-09-29 16:45:28.374160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.050 [2024-09-29 16:45:28.374196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.050 qpair failed and we were unable to recover it. 00:37:28.050 [2024-09-29 16:45:28.374351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.050 [2024-09-29 16:45:28.374387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.050 qpair failed and we were unable to recover it. 00:37:28.050 [2024-09-29 16:45:28.374555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.050 [2024-09-29 16:45:28.374588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.050 qpair failed and we were unable to recover it. 00:37:28.050 [2024-09-29 16:45:28.374738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.050 [2024-09-29 16:45:28.374772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.050 qpair failed and we were unable to recover it. 00:37:28.050 [2024-09-29 16:45:28.374913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.050 [2024-09-29 16:45:28.374946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.050 qpair failed and we were unable to recover it. 
00:37:28.050 [2024-09-29 16:45:28.375084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.050 [2024-09-29 16:45:28.375121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.050 qpair failed and we were unable to recover it. 00:37:28.050 [2024-09-29 16:45:28.375259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.050 [2024-09-29 16:45:28.375310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.050 qpair failed and we were unable to recover it. 00:37:28.050 [2024-09-29 16:45:28.375495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.050 [2024-09-29 16:45:28.375531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.050 qpair failed and we were unable to recover it. 00:37:28.050 [2024-09-29 16:45:28.375657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.050 [2024-09-29 16:45:28.375730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.050 qpair failed and we were unable to recover it. 00:37:28.050 [2024-09-29 16:45:28.375871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.050 [2024-09-29 16:45:28.375907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.050 qpair failed and we were unable to recover it. 
00:37:28.050 [2024-09-29 16:45:28.376062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.050 [2024-09-29 16:45:28.376099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.050 qpair failed and we were unable to recover it. 00:37:28.050 [2024-09-29 16:45:28.376232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.050 [2024-09-29 16:45:28.376274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.050 qpair failed and we were unable to recover it. 00:37:28.050 [2024-09-29 16:45:28.376449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.050 [2024-09-29 16:45:28.376516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.050 qpair failed and we were unable to recover it. 00:37:28.050 [2024-09-29 16:45:28.376635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.050 [2024-09-29 16:45:28.376681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.050 qpair failed and we were unable to recover it. 00:37:28.050 [2024-09-29 16:45:28.376880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.050 [2024-09-29 16:45:28.376932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.050 qpair failed and we were unable to recover it. 
00:37:28.050 [2024-09-29 16:45:28.377080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.050 [2024-09-29 16:45:28.377114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.050 qpair failed and we were unable to recover it. 00:37:28.050 [2024-09-29 16:45:28.377259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.050 [2024-09-29 16:45:28.377291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.050 qpair failed and we were unable to recover it. 00:37:28.050 [2024-09-29 16:45:28.377400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.050 [2024-09-29 16:45:28.377439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.050 qpair failed and we were unable to recover it. 00:37:28.050 [2024-09-29 16:45:28.377556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.050 [2024-09-29 16:45:28.377589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.050 qpair failed and we were unable to recover it. 00:37:28.050 [2024-09-29 16:45:28.377722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.050 [2024-09-29 16:45:28.377755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.050 qpair failed and we were unable to recover it. 
00:37:28.050 [2024-09-29 16:45:28.377861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.050 [2024-09-29 16:45:28.377893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.050 qpair failed and we were unable to recover it. 00:37:28.050 [2024-09-29 16:45:28.378044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.050 [2024-09-29 16:45:28.378082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.050 qpair failed and we were unable to recover it. 00:37:28.050 [2024-09-29 16:45:28.378220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.050 [2024-09-29 16:45:28.378256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.050 qpair failed and we were unable to recover it. 00:37:28.050 [2024-09-29 16:45:28.378404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.050 [2024-09-29 16:45:28.378441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.050 qpair failed and we were unable to recover it. 00:37:28.050 [2024-09-29 16:45:28.378582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.050 [2024-09-29 16:45:28.378617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.050 qpair failed and we were unable to recover it. 
00:37:28.050 [2024-09-29 16:45:28.378807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.050 [2024-09-29 16:45:28.378841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.050 qpair failed and we were unable to recover it. 00:37:28.050 [2024-09-29 16:45:28.379055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.050 [2024-09-29 16:45:28.379089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.050 qpair failed and we were unable to recover it. 00:37:28.050 [2024-09-29 16:45:28.379230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.050 [2024-09-29 16:45:28.379264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.050 qpair failed and we were unable to recover it. 00:37:28.050 [2024-09-29 16:45:28.379397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.050 [2024-09-29 16:45:28.379432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.050 qpair failed and we were unable to recover it. 00:37:28.050 [2024-09-29 16:45:28.379556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.050 [2024-09-29 16:45:28.379589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.050 qpair failed and we were unable to recover it. 
00:37:28.050 [2024-09-29 16:45:28.379703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.050 [2024-09-29 16:45:28.379737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.050 qpair failed and we were unable to recover it. 00:37:28.050 [2024-09-29 16:45:28.379851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.050 [2024-09-29 16:45:28.379884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.050 qpair failed and we were unable to recover it. 00:37:28.050 [2024-09-29 16:45:28.380058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.051 [2024-09-29 16:45:28.380091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.051 qpair failed and we were unable to recover it. 00:37:28.051 [2024-09-29 16:45:28.380253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.051 [2024-09-29 16:45:28.380289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.051 qpair failed and we were unable to recover it. 00:37:28.051 [2024-09-29 16:45:28.380467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.051 [2024-09-29 16:45:28.380503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.051 qpair failed and we were unable to recover it. 
00:37:28.051 [2024-09-29 16:45:28.380622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.051 [2024-09-29 16:45:28.380682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.051 qpair failed and we were unable to recover it. 00:37:28.051 [2024-09-29 16:45:28.380854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.051 [2024-09-29 16:45:28.380889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.051 qpair failed and we were unable to recover it. 00:37:28.051 [2024-09-29 16:45:28.381049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.051 [2024-09-29 16:45:28.381100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.051 qpair failed and we were unable to recover it. 00:37:28.051 [2024-09-29 16:45:28.381287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.051 [2024-09-29 16:45:28.381321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.051 qpair failed and we were unable to recover it. 00:37:28.051 [2024-09-29 16:45:28.381436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.051 [2024-09-29 16:45:28.381469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.051 qpair failed and we were unable to recover it. 
00:37:28.051 [2024-09-29 16:45:28.381608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.051 [2024-09-29 16:45:28.381641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.051 qpair failed and we were unable to recover it. 00:37:28.051 [2024-09-29 16:45:28.381765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.051 [2024-09-29 16:45:28.381799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.051 qpair failed and we were unable to recover it. 00:37:28.051 [2024-09-29 16:45:28.381938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.051 [2024-09-29 16:45:28.381971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.051 qpair failed and we were unable to recover it. 00:37:28.051 [2024-09-29 16:45:28.382136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.051 [2024-09-29 16:45:28.382184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.051 qpair failed and we were unable to recover it. 00:37:28.051 [2024-09-29 16:45:28.382321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.051 [2024-09-29 16:45:28.382359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.051 qpair failed and we were unable to recover it. 
00:37:28.051 [2024-09-29 16:45:28.382525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.051 [2024-09-29 16:45:28.382560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.051 qpair failed and we were unable to recover it. 00:37:28.051 [2024-09-29 16:45:28.382729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.051 [2024-09-29 16:45:28.382781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.051 qpair failed and we were unable to recover it. 00:37:28.051 [2024-09-29 16:45:28.382941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.051 [2024-09-29 16:45:28.382973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.051 qpair failed and we were unable to recover it. 00:37:28.051 [2024-09-29 16:45:28.383085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.051 [2024-09-29 16:45:28.383118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.051 qpair failed and we were unable to recover it. 00:37:28.051 [2024-09-29 16:45:28.383239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.051 [2024-09-29 16:45:28.383274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.051 qpair failed and we were unable to recover it. 
00:37:28.051 [2024-09-29 16:45:28.383444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.051 [2024-09-29 16:45:28.383477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.051 qpair failed and we were unable to recover it. 00:37:28.051 [2024-09-29 16:45:28.383596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.051 [2024-09-29 16:45:28.383635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.051 qpair failed and we were unable to recover it. 00:37:28.051 [2024-09-29 16:45:28.383807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.051 [2024-09-29 16:45:28.383845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.051 qpair failed and we were unable to recover it. 00:37:28.051 [2024-09-29 16:45:28.383968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.051 [2024-09-29 16:45:28.384004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.051 qpair failed and we were unable to recover it. 00:37:28.051 [2024-09-29 16:45:28.384178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.051 [2024-09-29 16:45:28.384210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.051 qpair failed and we were unable to recover it. 
00:37:28.051 [2024-09-29 16:45:28.384363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.051 [2024-09-29 16:45:28.384396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.051 qpair failed and we were unable to recover it. 00:37:28.051 [2024-09-29 16:45:28.384529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.051 [2024-09-29 16:45:28.384578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.051 qpair failed and we were unable to recover it. 00:37:28.051 [2024-09-29 16:45:28.384759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.051 [2024-09-29 16:45:28.384793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.051 qpair failed and we were unable to recover it. 00:37:28.051 [2024-09-29 16:45:28.384950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.051 [2024-09-29 16:45:28.384987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.051 qpair failed and we were unable to recover it. 00:37:28.051 [2024-09-29 16:45:28.385153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.051 [2024-09-29 16:45:28.385186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.051 qpair failed and we were unable to recover it. 
00:37:28.051 [2024-09-29 16:45:28.385306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.051 [2024-09-29 16:45:28.385339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.051 qpair failed and we were unable to recover it. 00:37:28.051 [2024-09-29 16:45:28.385532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.051 [2024-09-29 16:45:28.385585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.051 qpair failed and we were unable to recover it. 00:37:28.051 [2024-09-29 16:45:28.385735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.051 [2024-09-29 16:45:28.385770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.051 qpair failed and we were unable to recover it. 00:37:28.051 [2024-09-29 16:45:28.385899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.051 [2024-09-29 16:45:28.385951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.051 qpair failed and we were unable to recover it. 00:37:28.051 [2024-09-29 16:45:28.386076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.051 [2024-09-29 16:45:28.386126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.051 qpair failed and we were unable to recover it. 
00:37:28.051 [2024-09-29 16:45:28.386263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.051 [2024-09-29 16:45:28.386313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.051 qpair failed and we were unable to recover it. 00:37:28.051 [2024-09-29 16:45:28.386464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.051 [2024-09-29 16:45:28.386497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.051 qpair failed and we were unable to recover it. 00:37:28.051 [2024-09-29 16:45:28.386666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.051 [2024-09-29 16:45:28.386709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.051 qpair failed and we were unable to recover it. 00:37:28.051 [2024-09-29 16:45:28.386856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.051 [2024-09-29 16:45:28.386889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.051 qpair failed and we were unable to recover it. 00:37:28.051 [2024-09-29 16:45:28.387026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.051 [2024-09-29 16:45:28.387060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.051 qpair failed and we were unable to recover it. 
00:37:28.051 [2024-09-29 16:45:28.387178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.051 [2024-09-29 16:45:28.387212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.051 qpair failed and we were unable to recover it. 00:37:28.052 [2024-09-29 16:45:28.387383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.052 [2024-09-29 16:45:28.387428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.052 qpair failed and we were unable to recover it. 00:37:28.052 [2024-09-29 16:45:28.387531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.052 [2024-09-29 16:45:28.387564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.052 qpair failed and we were unable to recover it. 00:37:28.052 [2024-09-29 16:45:28.387736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.052 [2024-09-29 16:45:28.387771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.052 qpair failed and we were unable to recover it. 00:37:28.052 [2024-09-29 16:45:28.387914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.052 [2024-09-29 16:45:28.387947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.052 qpair failed and we were unable to recover it. 
00:37:28.052 [2024-09-29 16:45:28.388064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.052 [2024-09-29 16:45:28.388096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.052 qpair failed and we were unable to recover it. 00:37:28.052 [2024-09-29 16:45:28.388234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.052 [2024-09-29 16:45:28.388267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.052 qpair failed and we were unable to recover it. 00:37:28.052 [2024-09-29 16:45:28.388434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.052 [2024-09-29 16:45:28.388467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.052 qpair failed and we were unable to recover it. 00:37:28.052 [2024-09-29 16:45:28.388647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.052 [2024-09-29 16:45:28.388687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.052 qpair failed and we were unable to recover it. 00:37:28.052 [2024-09-29 16:45:28.388803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.052 [2024-09-29 16:45:28.388836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.052 qpair failed and we were unable to recover it. 
00:37:28.052 [2024-09-29 16:45:28.388980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.052 [2024-09-29 16:45:28.389013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.052 qpair failed and we were unable to recover it. 00:37:28.052 [2024-09-29 16:45:28.389146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.052 [2024-09-29 16:45:28.389179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.052 qpair failed and we were unable to recover it. 00:37:28.052 [2024-09-29 16:45:28.389350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.052 [2024-09-29 16:45:28.389383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.052 qpair failed and we were unable to recover it. 00:37:28.052 [2024-09-29 16:45:28.389517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.052 [2024-09-29 16:45:28.389550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.052 qpair failed and we were unable to recover it. 00:37:28.052 [2024-09-29 16:45:28.389675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.052 [2024-09-29 16:45:28.389710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.052 qpair failed and we were unable to recover it. 
00:37:28.052 [2024-09-29 16:45:28.389843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.052 [2024-09-29 16:45:28.389890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.052 qpair failed and we were unable to recover it. 00:37:28.052 [2024-09-29 16:45:28.390037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.052 [2024-09-29 16:45:28.390072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.052 qpair failed and we were unable to recover it. 00:37:28.052 [2024-09-29 16:45:28.390214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.052 [2024-09-29 16:45:28.390248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.052 qpair failed and we were unable to recover it. 00:37:28.052 [2024-09-29 16:45:28.390380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.052 [2024-09-29 16:45:28.390415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.052 qpair failed and we were unable to recover it. 00:37:28.052 [2024-09-29 16:45:28.390556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.052 [2024-09-29 16:45:28.390590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.052 qpair failed and we were unable to recover it. 
00:37:28.052 [2024-09-29 16:45:28.390731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.052 [2024-09-29 16:45:28.390765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.052 qpair failed and we were unable to recover it. 00:37:28.052 [2024-09-29 16:45:28.390966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.052 [2024-09-29 16:45:28.391004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.052 qpair failed and we were unable to recover it. 00:37:28.052 [2024-09-29 16:45:28.391151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.052 [2024-09-29 16:45:28.391204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.052 qpair failed and we were unable to recover it. 00:37:28.052 [2024-09-29 16:45:28.391366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.052 [2024-09-29 16:45:28.391403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.052 qpair failed and we were unable to recover it. 00:37:28.052 [2024-09-29 16:45:28.391599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.052 [2024-09-29 16:45:28.391635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.052 qpair failed and we were unable to recover it. 
00:37:28.052 [2024-09-29 16:45:28.391815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.052 [2024-09-29 16:45:28.391868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.052 qpair failed and we were unable to recover it. 00:37:28.052 [2024-09-29 16:45:28.392029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.052 [2024-09-29 16:45:28.392085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.052 qpair failed and we were unable to recover it. 00:37:28.052 [2024-09-29 16:45:28.392220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.052 [2024-09-29 16:45:28.392271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.052 qpair failed and we were unable to recover it. 00:37:28.052 [2024-09-29 16:45:28.392412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.052 [2024-09-29 16:45:28.392444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.052 qpair failed and we were unable to recover it. 00:37:28.052 [2024-09-29 16:45:28.392564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.052 [2024-09-29 16:45:28.392598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.052 qpair failed and we were unable to recover it. 
00:37:28.052 [2024-09-29 16:45:28.392765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.052 [2024-09-29 16:45:28.392812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.052 qpair failed and we were unable to recover it. 00:37:28.052 [2024-09-29 16:45:28.392966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.052 [2024-09-29 16:45:28.393001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.052 qpair failed and we were unable to recover it. 00:37:28.052 [2024-09-29 16:45:28.393122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.052 [2024-09-29 16:45:28.393156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.052 qpair failed and we were unable to recover it. 00:37:28.052 [2024-09-29 16:45:28.393301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.052 [2024-09-29 16:45:28.393335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.052 qpair failed and we were unable to recover it. 00:37:28.052 [2024-09-29 16:45:28.393448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.052 [2024-09-29 16:45:28.393480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.052 qpair failed and we were unable to recover it. 
00:37:28.052 [2024-09-29 16:45:28.393631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.052 [2024-09-29 16:45:28.393665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.052 qpair failed and we were unable to recover it. 00:37:28.052 [2024-09-29 16:45:28.393887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.052 [2024-09-29 16:45:28.393941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.052 qpair failed and we were unable to recover it. 00:37:28.052 [2024-09-29 16:45:28.394070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.052 [2024-09-29 16:45:28.394121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.052 qpair failed and we were unable to recover it. 00:37:28.052 [2024-09-29 16:45:28.394308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.052 [2024-09-29 16:45:28.394361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.052 qpair failed and we were unable to recover it. 00:37:28.052 [2024-09-29 16:45:28.394503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.053 [2024-09-29 16:45:28.394536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.053 qpair failed and we were unable to recover it. 
00:37:28.053 [2024-09-29 16:45:28.394677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.053 [2024-09-29 16:45:28.394711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.053 qpair failed and we were unable to recover it. 00:37:28.053 [2024-09-29 16:45:28.394853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.053 [2024-09-29 16:45:28.394886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.053 qpair failed and we were unable to recover it. 00:37:28.053 [2024-09-29 16:45:28.395037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.053 [2024-09-29 16:45:28.395072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.053 qpair failed and we were unable to recover it. 00:37:28.053 [2024-09-29 16:45:28.395209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.053 [2024-09-29 16:45:28.395242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.053 qpair failed and we were unable to recover it. 00:37:28.053 [2024-09-29 16:45:28.395407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.053 [2024-09-29 16:45:28.395440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.053 qpair failed and we were unable to recover it. 
00:37:28.053 [2024-09-29 16:45:28.395553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.053 [2024-09-29 16:45:28.395586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.053 qpair failed and we were unable to recover it. 00:37:28.053 [2024-09-29 16:45:28.395716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.053 [2024-09-29 16:45:28.395751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.053 qpair failed and we were unable to recover it. 00:37:28.053 [2024-09-29 16:45:28.395918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.053 [2024-09-29 16:45:28.395951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.053 qpair failed and we were unable to recover it. 00:37:28.053 [2024-09-29 16:45:28.396134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.053 [2024-09-29 16:45:28.396169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.053 qpair failed and we were unable to recover it. 00:37:28.053 [2024-09-29 16:45:28.396332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.053 [2024-09-29 16:45:28.396368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.053 qpair failed and we were unable to recover it. 
00:37:28.053 [2024-09-29 16:45:28.396503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.053 [2024-09-29 16:45:28.396540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.053 qpair failed and we were unable to recover it. 00:37:28.053 [2024-09-29 16:45:28.396730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.053 [2024-09-29 16:45:28.396777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.053 qpair failed and we were unable to recover it. 00:37:28.053 [2024-09-29 16:45:28.396923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.053 [2024-09-29 16:45:28.396963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.053 qpair failed and we were unable to recover it. 00:37:28.053 [2024-09-29 16:45:28.397107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.053 [2024-09-29 16:45:28.397145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.053 qpair failed and we were unable to recover it. 00:37:28.053 [2024-09-29 16:45:28.397289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.053 [2024-09-29 16:45:28.397325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.053 qpair failed and we were unable to recover it. 
00:37:28.053 [2024-09-29 16:45:28.397462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.053 [2024-09-29 16:45:28.397496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.053 qpair failed and we were unable to recover it. 00:37:28.053 [2024-09-29 16:45:28.397641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.053 [2024-09-29 16:45:28.397682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.053 qpair failed and we were unable to recover it. 00:37:28.053 [2024-09-29 16:45:28.397832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.053 [2024-09-29 16:45:28.397866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.053 qpair failed and we were unable to recover it. 00:37:28.053 [2024-09-29 16:45:28.398021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.053 [2024-09-29 16:45:28.398059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.053 qpair failed and we were unable to recover it. 00:37:28.053 [2024-09-29 16:45:28.398203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.053 [2024-09-29 16:45:28.398238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.053 qpair failed and we were unable to recover it. 
00:37:28.053 [2024-09-29 16:45:28.398386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.053 [2024-09-29 16:45:28.398420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.053 qpair failed and we were unable to recover it.
00:37:28.053 [2024-09-29 16:45:28.398540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.053 [2024-09-29 16:45:28.398579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.053 qpair failed and we were unable to recover it.
00:37:28.053 [2024-09-29 16:45:28.398721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.053 [2024-09-29 16:45:28.398756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.053 qpair failed and we were unable to recover it.
00:37:28.053 [2024-09-29 16:45:28.398867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.053 [2024-09-29 16:45:28.398901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.053 qpair failed and we were unable to recover it.
00:37:28.053 [2024-09-29 16:45:28.399052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.053 [2024-09-29 16:45:28.399086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.053 qpair failed and we were unable to recover it.
00:37:28.053 [2024-09-29 16:45:28.399222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.053 [2024-09-29 16:45:28.399256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.053 qpair failed and we were unable to recover it.
00:37:28.053 [2024-09-29 16:45:28.399392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.053 [2024-09-29 16:45:28.399439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.053 qpair failed and we were unable to recover it.
00:37:28.053 [2024-09-29 16:45:28.399593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.053 [2024-09-29 16:45:28.399628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.053 qpair failed and we were unable to recover it.
00:37:28.053 [2024-09-29 16:45:28.399815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.053 [2024-09-29 16:45:28.399849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.053 qpair failed and we were unable to recover it.
00:37:28.053 [2024-09-29 16:45:28.400080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.053 [2024-09-29 16:45:28.400139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.053 qpair failed and we were unable to recover it.
00:37:28.053 [2024-09-29 16:45:28.400297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.053 [2024-09-29 16:45:28.400334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.053 qpair failed and we were unable to recover it.
00:37:28.053 [2024-09-29 16:45:28.400464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.053 [2024-09-29 16:45:28.400500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.053 qpair failed and we were unable to recover it.
00:37:28.053 [2024-09-29 16:45:28.400660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.053 [2024-09-29 16:45:28.400704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.053 qpair failed and we were unable to recover it.
00:37:28.053 [2024-09-29 16:45:28.400821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.053 [2024-09-29 16:45:28.400855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.053 qpair failed and we were unable to recover it.
00:37:28.053 [2024-09-29 16:45:28.400976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.053 [2024-09-29 16:45:28.401010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.053 qpair failed and we were unable to recover it.
00:37:28.053 [2024-09-29 16:45:28.401203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.053 [2024-09-29 16:45:28.401256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.053 qpair failed and we were unable to recover it.
00:37:28.053 [2024-09-29 16:45:28.401379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.053 [2024-09-29 16:45:28.401414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.053 qpair failed and we were unable to recover it.
00:37:28.053 [2024-09-29 16:45:28.401541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.053 [2024-09-29 16:45:28.401589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.053 qpair failed and we were unable to recover it.
00:37:28.054 [2024-09-29 16:45:28.401751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.054 [2024-09-29 16:45:28.401798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.054 qpair failed and we were unable to recover it.
00:37:28.054 [2024-09-29 16:45:28.401938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.054 [2024-09-29 16:45:28.401975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.054 qpair failed and we were unable to recover it.
00:37:28.054 [2024-09-29 16:45:28.402135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.054 [2024-09-29 16:45:28.402188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.054 qpair failed and we were unable to recover it.
00:37:28.054 [2024-09-29 16:45:28.402317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.054 [2024-09-29 16:45:28.402354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.054 qpair failed and we were unable to recover it.
00:37:28.054 [2024-09-29 16:45:28.402501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.054 [2024-09-29 16:45:28.402538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.054 qpair failed and we were unable to recover it.
00:37:28.054 [2024-09-29 16:45:28.402750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.054 [2024-09-29 16:45:28.402784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.054 qpair failed and we were unable to recover it.
00:37:28.054 [2024-09-29 16:45:28.402923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.054 [2024-09-29 16:45:28.402976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.054 qpair failed and we were unable to recover it.
00:37:28.054 [2024-09-29 16:45:28.403124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.054 [2024-09-29 16:45:28.403160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.054 qpair failed and we were unable to recover it.
00:37:28.054 [2024-09-29 16:45:28.403378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.054 [2024-09-29 16:45:28.403416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.054 qpair failed and we were unable to recover it.
00:37:28.054 [2024-09-29 16:45:28.403573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.054 [2024-09-29 16:45:28.403610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.054 qpair failed and we were unable to recover it.
00:37:28.054 [2024-09-29 16:45:28.403763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.054 [2024-09-29 16:45:28.403797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.054 qpair failed and we were unable to recover it.
00:37:28.054 [2024-09-29 16:45:28.403914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.054 [2024-09-29 16:45:28.403947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.054 qpair failed and we were unable to recover it.
00:37:28.054 [2024-09-29 16:45:28.404072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.054 [2024-09-29 16:45:28.404122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.054 qpair failed and we were unable to recover it.
00:37:28.054 [2024-09-29 16:45:28.404286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.054 [2024-09-29 16:45:28.404322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.054 qpair failed and we were unable to recover it.
00:37:28.054 [2024-09-29 16:45:28.404455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.054 [2024-09-29 16:45:28.404507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.054 qpair failed and we were unable to recover it.
00:37:28.054 [2024-09-29 16:45:28.404661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.054 [2024-09-29 16:45:28.404727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.054 qpair failed and we were unable to recover it.
00:37:28.054 [2024-09-29 16:45:28.404877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.054 [2024-09-29 16:45:28.404910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.054 qpair failed and we were unable to recover it.
00:37:28.054 [2024-09-29 16:45:28.405052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.054 [2024-09-29 16:45:28.405089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.054 qpair failed and we were unable to recover it.
00:37:28.054 [2024-09-29 16:45:28.405223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.054 [2024-09-29 16:45:28.405273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.054 qpair failed and we were unable to recover it.
00:37:28.054 [2024-09-29 16:45:28.405427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.054 [2024-09-29 16:45:28.405465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.054 qpair failed and we were unable to recover it.
00:37:28.054 [2024-09-29 16:45:28.405627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.054 [2024-09-29 16:45:28.405660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.054 qpair failed and we were unable to recover it.
00:37:28.054 [2024-09-29 16:45:28.405787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.054 [2024-09-29 16:45:28.405821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.054 qpair failed and we were unable to recover it.
00:37:28.054 [2024-09-29 16:45:28.405936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.054 [2024-09-29 16:45:28.405988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.054 qpair failed and we were unable to recover it.
00:37:28.054 [2024-09-29 16:45:28.406206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.054 [2024-09-29 16:45:28.406250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.054 qpair failed and we were unable to recover it.
00:37:28.054 [2024-09-29 16:45:28.406417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.054 [2024-09-29 16:45:28.406468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.054 qpair failed and we were unable to recover it.
00:37:28.054 [2024-09-29 16:45:28.406602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.054 [2024-09-29 16:45:28.406638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.054 qpair failed and we were unable to recover it.
00:37:28.054 [2024-09-29 16:45:28.406811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.054 [2024-09-29 16:45:28.406845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.054 qpair failed and we were unable to recover it.
00:37:28.054 [2024-09-29 16:45:28.406970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.054 [2024-09-29 16:45:28.407018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.054 qpair failed and we were unable to recover it.
00:37:28.054 [2024-09-29 16:45:28.407195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.054 [2024-09-29 16:45:28.407250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.054 qpair failed and we were unable to recover it.
00:37:28.054 [2024-09-29 16:45:28.407462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.054 [2024-09-29 16:45:28.407501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.054 qpair failed and we were unable to recover it.
00:37:28.054 [2024-09-29 16:45:28.407688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.054 [2024-09-29 16:45:28.407740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.054 qpair failed and we were unable to recover it.
00:37:28.054 [2024-09-29 16:45:28.407883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.054 [2024-09-29 16:45:28.407916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.054 qpair failed and we were unable to recover it.
00:37:28.054 [2024-09-29 16:45:28.408093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.054 [2024-09-29 16:45:28.408127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.054 qpair failed and we were unable to recover it.
00:37:28.054 [2024-09-29 16:45:28.408298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.054 [2024-09-29 16:45:28.408342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.054 qpair failed and we were unable to recover it.
00:37:28.054 [2024-09-29 16:45:28.408495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.055 [2024-09-29 16:45:28.408532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.055 qpair failed and we were unable to recover it.
00:37:28.055 [2024-09-29 16:45:28.408667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.055 [2024-09-29 16:45:28.408713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.055 qpair failed and we were unable to recover it.
00:37:28.055 [2024-09-29 16:45:28.408834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.055 [2024-09-29 16:45:28.408869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.055 qpair failed and we were unable to recover it.
00:37:28.055 [2024-09-29 16:45:28.409011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.055 [2024-09-29 16:45:28.409063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.055 qpair failed and we were unable to recover it.
00:37:28.055 [2024-09-29 16:45:28.409217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.055 [2024-09-29 16:45:28.409254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.055 qpair failed and we were unable to recover it.
00:37:28.055 [2024-09-29 16:45:28.409406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.055 [2024-09-29 16:45:28.409443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.055 qpair failed and we were unable to recover it.
00:37:28.055 [2024-09-29 16:45:28.409585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.055 [2024-09-29 16:45:28.409618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.055 qpair failed and we were unable to recover it.
00:37:28.055 [2024-09-29 16:45:28.409764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.055 [2024-09-29 16:45:28.409797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.055 qpair failed and we were unable to recover it.
00:37:28.055 [2024-09-29 16:45:28.409968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.055 [2024-09-29 16:45:28.410003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.055 qpair failed and we were unable to recover it.
00:37:28.055 [2024-09-29 16:45:28.410124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.055 [2024-09-29 16:45:28.410159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.055 qpair failed and we were unable to recover it.
00:37:28.055 [2024-09-29 16:45:28.410306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.055 [2024-09-29 16:45:28.410339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.055 qpair failed and we were unable to recover it.
00:37:28.055 [2024-09-29 16:45:28.410506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.055 [2024-09-29 16:45:28.410543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.055 qpair failed and we were unable to recover it.
00:37:28.055 [2024-09-29 16:45:28.410696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.055 [2024-09-29 16:45:28.410747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.055 qpair failed and we were unable to recover it.
00:37:28.055 [2024-09-29 16:45:28.410913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.055 [2024-09-29 16:45:28.410960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.055 qpair failed and we were unable to recover it.
00:37:28.055 [2024-09-29 16:45:28.411171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.055 [2024-09-29 16:45:28.411232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.055 qpair failed and we were unable to recover it.
00:37:28.055 [2024-09-29 16:45:28.411388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.055 [2024-09-29 16:45:28.411425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.055 qpair failed and we were unable to recover it.
00:37:28.055 [2024-09-29 16:45:28.411556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.055 [2024-09-29 16:45:28.411592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.055 qpair failed and we were unable to recover it.
00:37:28.055 [2024-09-29 16:45:28.411718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.055 [2024-09-29 16:45:28.411752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.055 qpair failed and we were unable to recover it.
00:37:28.055 [2024-09-29 16:45:28.411896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.055 [2024-09-29 16:45:28.411929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.055 qpair failed and we were unable to recover it.
00:37:28.055 [2024-09-29 16:45:28.412087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.055 [2024-09-29 16:45:28.412136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.055 qpair failed and we were unable to recover it.
00:37:28.055 [2024-09-29 16:45:28.412289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.055 [2024-09-29 16:45:28.412339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.055 qpair failed and we were unable to recover it.
00:37:28.055 [2024-09-29 16:45:28.412503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.055 [2024-09-29 16:45:28.412539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.055 qpair failed and we were unable to recover it.
00:37:28.055 [2024-09-29 16:45:28.412689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.055 [2024-09-29 16:45:28.412742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.055 qpair failed and we were unable to recover it.
00:37:28.055 [2024-09-29 16:45:28.412885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.055 [2024-09-29 16:45:28.412918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.055 qpair failed and we were unable to recover it.
00:37:28.055 [2024-09-29 16:45:28.413076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.055 [2024-09-29 16:45:28.413113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.055 qpair failed and we were unable to recover it.
00:37:28.055 [2024-09-29 16:45:28.413248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.055 [2024-09-29 16:45:28.413285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.055 qpair failed and we were unable to recover it.
00:37:28.055 [2024-09-29 16:45:28.413437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.055 [2024-09-29 16:45:28.413473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.055 qpair failed and we were unable to recover it.
00:37:28.055 [2024-09-29 16:45:28.413628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.055 [2024-09-29 16:45:28.413664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.055 qpair failed and we were unable to recover it.
00:37:28.055 [2024-09-29 16:45:28.413829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.055 [2024-09-29 16:45:28.413862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.055 qpair failed and we were unable to recover it.
00:37:28.055 [2024-09-29 16:45:28.414059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.055 [2024-09-29 16:45:28.414112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.055 qpair failed and we were unable to recover it.
00:37:28.055 [2024-09-29 16:45:28.414280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.055 [2024-09-29 16:45:28.414315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.055 qpair failed and we were unable to recover it.
00:37:28.055 [2024-09-29 16:45:28.414489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.055 [2024-09-29 16:45:28.414523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.055 qpair failed and we were unable to recover it.
00:37:28.055 [2024-09-29 16:45:28.414694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.055 [2024-09-29 16:45:28.414728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.055 qpair failed and we were unable to recover it.
00:37:28.055 [2024-09-29 16:45:28.414850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.055 [2024-09-29 16:45:28.414882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.055 qpair failed and we were unable to recover it.
00:37:28.055 [2024-09-29 16:45:28.414998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.055 [2024-09-29 16:45:28.415035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.055 qpair failed and we were unable to recover it.
00:37:28.055 [2024-09-29 16:45:28.415189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.055 [2024-09-29 16:45:28.415222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.055 qpair failed and we were unable to recover it.
00:37:28.055 [2024-09-29 16:45:28.415368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.055 [2024-09-29 16:45:28.415400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.055 qpair failed and we were unable to recover it.
00:37:28.055 [2024-09-29 16:45:28.415570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.055 [2024-09-29 16:45:28.415618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.055 qpair failed and we were unable to recover it.
00:37:28.055 [2024-09-29 16:45:28.415821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.056 [2024-09-29 16:45:28.415859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.056 qpair failed and we were unable to recover it.
00:37:28.056 [2024-09-29 16:45:28.416029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.056 [2024-09-29 16:45:28.416096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.056 qpair failed and we were unable to recover it.
00:37:28.056 [2024-09-29 16:45:28.416371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.056 [2024-09-29 16:45:28.416433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.056 qpair failed and we were unable to recover it.
00:37:28.056 [2024-09-29 16:45:28.416591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.056 [2024-09-29 16:45:28.416629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.056 qpair failed and we were unable to recover it.
00:37:28.056 [2024-09-29 16:45:28.416775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.056 [2024-09-29 16:45:28.416810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.056 qpair failed and we were unable to recover it.
00:37:28.056 [2024-09-29 16:45:28.416967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.056 [2024-09-29 16:45:28.417002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.056 qpair failed and we were unable to recover it. 00:37:28.056 [2024-09-29 16:45:28.417150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.056 [2024-09-29 16:45:28.417183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.056 qpair failed and we were unable to recover it. 00:37:28.056 [2024-09-29 16:45:28.417341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.056 [2024-09-29 16:45:28.417415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.056 qpair failed and we were unable to recover it. 00:37:28.056 [2024-09-29 16:45:28.417559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.056 [2024-09-29 16:45:28.417598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.056 qpair failed and we were unable to recover it. 00:37:28.056 [2024-09-29 16:45:28.417794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.056 [2024-09-29 16:45:28.417827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.056 qpair failed and we were unable to recover it. 
00:37:28.056 [2024-09-29 16:45:28.418021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.056 [2024-09-29 16:45:28.418059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.056 qpair failed and we were unable to recover it. 00:37:28.056 [2024-09-29 16:45:28.418289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.056 [2024-09-29 16:45:28.418322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.056 qpair failed and we were unable to recover it. 00:37:28.056 [2024-09-29 16:45:28.418460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.056 [2024-09-29 16:45:28.418493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.056 qpair failed and we were unable to recover it. 00:37:28.056 [2024-09-29 16:45:28.418606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.056 [2024-09-29 16:45:28.418638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.056 qpair failed and we were unable to recover it. 00:37:28.056 [2024-09-29 16:45:28.418791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.056 [2024-09-29 16:45:28.418824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.056 qpair failed and we were unable to recover it. 
00:37:28.056 [2024-09-29 16:45:28.418957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.056 [2024-09-29 16:45:28.418993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.056 qpair failed and we were unable to recover it. 00:37:28.056 [2024-09-29 16:45:28.419206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.056 [2024-09-29 16:45:28.419262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.056 qpair failed and we were unable to recover it. 00:37:28.056 [2024-09-29 16:45:28.419401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.056 [2024-09-29 16:45:28.419433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.056 qpair failed and we were unable to recover it. 00:37:28.056 [2024-09-29 16:45:28.419576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.056 [2024-09-29 16:45:28.419609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.056 qpair failed and we were unable to recover it. 00:37:28.056 [2024-09-29 16:45:28.419720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.056 [2024-09-29 16:45:28.419755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.056 qpair failed and we were unable to recover it. 
00:37:28.056 [2024-09-29 16:45:28.419871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.056 [2024-09-29 16:45:28.419904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.056 qpair failed and we were unable to recover it. 00:37:28.056 [2024-09-29 16:45:28.420020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.056 [2024-09-29 16:45:28.420054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.056 qpair failed and we were unable to recover it. 00:37:28.056 [2024-09-29 16:45:28.420173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.056 [2024-09-29 16:45:28.420207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.056 qpair failed and we were unable to recover it. 00:37:28.056 [2024-09-29 16:45:28.420376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.056 [2024-09-29 16:45:28.420413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.056 qpair failed and we were unable to recover it. 00:37:28.056 [2024-09-29 16:45:28.420542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.056 [2024-09-29 16:45:28.420579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.056 qpair failed and we were unable to recover it. 
00:37:28.056 [2024-09-29 16:45:28.420722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.056 [2024-09-29 16:45:28.420755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.056 qpair failed and we were unable to recover it. 00:37:28.056 [2024-09-29 16:45:28.420932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.056 [2024-09-29 16:45:28.420965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.056 qpair failed and we were unable to recover it. 00:37:28.056 [2024-09-29 16:45:28.421076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.056 [2024-09-29 16:45:28.421115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.056 qpair failed and we were unable to recover it. 00:37:28.056 [2024-09-29 16:45:28.421288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.056 [2024-09-29 16:45:28.421325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.056 qpair failed and we were unable to recover it. 00:37:28.056 [2024-09-29 16:45:28.421547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.056 [2024-09-29 16:45:28.421580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.056 qpair failed and we were unable to recover it. 
00:37:28.056 [2024-09-29 16:45:28.421702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.056 [2024-09-29 16:45:28.421736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.056 qpair failed and we were unable to recover it. 00:37:28.056 [2024-09-29 16:45:28.421853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.056 [2024-09-29 16:45:28.421891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.056 qpair failed and we were unable to recover it. 00:37:28.056 [2024-09-29 16:45:28.422042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.056 [2024-09-29 16:45:28.422093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.056 qpair failed and we were unable to recover it. 00:37:28.056 [2024-09-29 16:45:28.422248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.056 [2024-09-29 16:45:28.422284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.056 qpair failed and we were unable to recover it. 00:37:28.056 [2024-09-29 16:45:28.422437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.056 [2024-09-29 16:45:28.422473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.056 qpair failed and we were unable to recover it. 
00:37:28.056 [2024-09-29 16:45:28.422605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.056 [2024-09-29 16:45:28.422637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.056 qpair failed and we were unable to recover it. 00:37:28.056 [2024-09-29 16:45:28.422766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.056 [2024-09-29 16:45:28.422800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.056 qpair failed and we were unable to recover it. 00:37:28.056 [2024-09-29 16:45:28.422924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.056 [2024-09-29 16:45:28.422957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.056 qpair failed and we were unable to recover it. 00:37:28.056 [2024-09-29 16:45:28.423093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.056 [2024-09-29 16:45:28.423145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.057 qpair failed and we were unable to recover it. 00:37:28.057 [2024-09-29 16:45:28.423261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.057 [2024-09-29 16:45:28.423294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.057 qpair failed and we were unable to recover it. 
00:37:28.057 [2024-09-29 16:45:28.423480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.057 [2024-09-29 16:45:28.423518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.057 qpair failed and we were unable to recover it. 00:37:28.057 [2024-09-29 16:45:28.423658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.057 [2024-09-29 16:45:28.423712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.057 qpair failed and we were unable to recover it. 00:37:28.057 [2024-09-29 16:45:28.423877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.057 [2024-09-29 16:45:28.423933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.057 qpair failed and we were unable to recover it. 00:37:28.057 [2024-09-29 16:45:28.424073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.057 [2024-09-29 16:45:28.424112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.057 qpair failed and we were unable to recover it. 00:37:28.057 [2024-09-29 16:45:28.424326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.057 [2024-09-29 16:45:28.424365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.057 qpair failed and we were unable to recover it. 
00:37:28.057 [2024-09-29 16:45:28.424550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.057 [2024-09-29 16:45:28.424588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.057 qpair failed and we were unable to recover it. 00:37:28.057 [2024-09-29 16:45:28.424717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.057 [2024-09-29 16:45:28.424778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.057 qpair failed and we were unable to recover it. 00:37:28.057 [2024-09-29 16:45:28.424906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.057 [2024-09-29 16:45:28.424940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.057 qpair failed and we were unable to recover it. 00:37:28.057 [2024-09-29 16:45:28.425186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.057 [2024-09-29 16:45:28.425219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.057 qpair failed and we were unable to recover it. 00:37:28.057 [2024-09-29 16:45:28.425367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.057 [2024-09-29 16:45:28.425400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.057 qpair failed and we were unable to recover it. 
00:37:28.057 [2024-09-29 16:45:28.425575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.057 [2024-09-29 16:45:28.425611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.057 qpair failed and we were unable to recover it. 00:37:28.057 [2024-09-29 16:45:28.425775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.057 [2024-09-29 16:45:28.425823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.057 qpair failed and we were unable to recover it. 00:37:28.057 [2024-09-29 16:45:28.425972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.057 [2024-09-29 16:45:28.426006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.057 qpair failed and we were unable to recover it. 00:37:28.057 [2024-09-29 16:45:28.426209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.057 [2024-09-29 16:45:28.426272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.057 qpair failed and we were unable to recover it. 00:37:28.057 [2024-09-29 16:45:28.426515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.057 [2024-09-29 16:45:28.426571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.057 qpair failed and we were unable to recover it. 
00:37:28.057 [2024-09-29 16:45:28.426718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.057 [2024-09-29 16:45:28.426754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.057 qpair failed and we were unable to recover it. 00:37:28.057 [2024-09-29 16:45:28.426890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.057 [2024-09-29 16:45:28.426946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.057 qpair failed and we were unable to recover it. 00:37:28.057 [2024-09-29 16:45:28.427084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.057 [2024-09-29 16:45:28.427134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.057 qpair failed and we were unable to recover it. 00:37:28.057 [2024-09-29 16:45:28.427284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.057 [2024-09-29 16:45:28.427318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.057 qpair failed and we were unable to recover it. 00:37:28.057 [2024-09-29 16:45:28.427436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.057 [2024-09-29 16:45:28.427469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.057 qpair failed and we were unable to recover it. 
00:37:28.057 [2024-09-29 16:45:28.427652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.057 [2024-09-29 16:45:28.427725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.057 qpair failed and we were unable to recover it. 00:37:28.057 [2024-09-29 16:45:28.427914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.057 [2024-09-29 16:45:28.427949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.057 qpair failed and we were unable to recover it. 00:37:28.057 [2024-09-29 16:45:28.428118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.057 [2024-09-29 16:45:28.428171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.057 qpair failed and we were unable to recover it. 00:37:28.057 [2024-09-29 16:45:28.428377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.057 [2024-09-29 16:45:28.428415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.057 qpair failed and we were unable to recover it. 00:37:28.057 [2024-09-29 16:45:28.428576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.057 [2024-09-29 16:45:28.428609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.057 qpair failed and we were unable to recover it. 
00:37:28.057 [2024-09-29 16:45:28.428732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.057 [2024-09-29 16:45:28.428768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.057 qpair failed and we were unable to recover it. 00:37:28.057 [2024-09-29 16:45:28.428884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.057 [2024-09-29 16:45:28.428919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.057 qpair failed and we were unable to recover it. 00:37:28.057 [2024-09-29 16:45:28.429070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.057 [2024-09-29 16:45:28.429122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.057 qpair failed and we were unable to recover it. 00:37:28.057 [2024-09-29 16:45:28.429372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.057 [2024-09-29 16:45:28.429420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.057 qpair failed and we were unable to recover it. 00:37:28.057 [2024-09-29 16:45:28.429547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.057 [2024-09-29 16:45:28.429583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.057 qpair failed and we were unable to recover it. 
00:37:28.057 [2024-09-29 16:45:28.429740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.057 [2024-09-29 16:45:28.429776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.057 qpair failed and we were unable to recover it. 00:37:28.057 [2024-09-29 16:45:28.429968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.057 [2024-09-29 16:45:28.430010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.057 qpair failed and we were unable to recover it. 00:37:28.057 [2024-09-29 16:45:28.430164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.057 [2024-09-29 16:45:28.430200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.057 qpair failed and we were unable to recover it. 00:37:28.057 [2024-09-29 16:45:28.430359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.057 [2024-09-29 16:45:28.430397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.057 qpair failed and we were unable to recover it. 00:37:28.057 [2024-09-29 16:45:28.430571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.057 [2024-09-29 16:45:28.430619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.057 qpair failed and we were unable to recover it. 
00:37:28.057 [2024-09-29 16:45:28.430768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.057 [2024-09-29 16:45:28.430815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.057 qpair failed and we were unable to recover it. 00:37:28.057 [2024-09-29 16:45:28.430943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.057 [2024-09-29 16:45:28.430999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.057 qpair failed and we were unable to recover it. 00:37:28.058 [2024-09-29 16:45:28.431150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.058 [2024-09-29 16:45:28.431187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.058 qpair failed and we were unable to recover it. 00:37:28.058 [2024-09-29 16:45:28.431345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.058 [2024-09-29 16:45:28.431382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.058 qpair failed and we were unable to recover it. 00:37:28.058 [2024-09-29 16:45:28.431559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.058 [2024-09-29 16:45:28.431596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.058 qpair failed and we were unable to recover it. 
00:37:28.058 [2024-09-29 16:45:28.431779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.058 [2024-09-29 16:45:28.431814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.058 qpair failed and we were unable to recover it. 00:37:28.058 [2024-09-29 16:45:28.432018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.058 [2024-09-29 16:45:28.432071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.058 qpair failed and we were unable to recover it. 00:37:28.058 [2024-09-29 16:45:28.432209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.058 [2024-09-29 16:45:28.432261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.058 qpair failed and we were unable to recover it. 00:37:28.058 [2024-09-29 16:45:28.432437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.058 [2024-09-29 16:45:28.432472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.058 qpair failed and we were unable to recover it. 00:37:28.058 [2024-09-29 16:45:28.432613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.058 [2024-09-29 16:45:28.432646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.058 qpair failed and we were unable to recover it. 
00:37:28.058 [2024-09-29 16:45:28.432801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.058 [2024-09-29 16:45:28.432834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.058 qpair failed and we were unable to recover it. 00:37:28.058 [2024-09-29 16:45:28.433008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.058 [2024-09-29 16:45:28.433056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.058 qpair failed and we were unable to recover it. 00:37:28.058 [2024-09-29 16:45:28.433183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.058 [2024-09-29 16:45:28.433218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.058 qpair failed and we were unable to recover it. 00:37:28.058 [2024-09-29 16:45:28.433358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.058 [2024-09-29 16:45:28.433392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.058 qpair failed and we were unable to recover it. 00:37:28.058 [2024-09-29 16:45:28.433507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.058 [2024-09-29 16:45:28.433541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.058 qpair failed and we were unable to recover it. 
00:37:28.058 [2024-09-29 16:45:28.433725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.058 [2024-09-29 16:45:28.433774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.058 qpair failed and we were unable to recover it.
00:37:28.058 [2024-09-29 16:45:28.433927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.058 [2024-09-29 16:45:28.433981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.058 qpair failed and we were unable to recover it.
00:37:28.058 [2024-09-29 16:45:28.434126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.058 [2024-09-29 16:45:28.434217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.058 qpair failed and we were unable to recover it.
00:37:28.058 [2024-09-29 16:45:28.434392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.058 [2024-09-29 16:45:28.434452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.058 qpair failed and we were unable to recover it.
00:37:28.058 [2024-09-29 16:45:28.434580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.058 [2024-09-29 16:45:28.434617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.058 qpair failed and we were unable to recover it.
00:37:28.058 [2024-09-29 16:45:28.434800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.058 [2024-09-29 16:45:28.434848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.058 qpair failed and we were unable to recover it.
00:37:28.058 [2024-09-29 16:45:28.435027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.058 [2024-09-29 16:45:28.435081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.058 qpair failed and we were unable to recover it.
00:37:28.058 [2024-09-29 16:45:28.435228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.058 [2024-09-29 16:45:28.435261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.058 qpair failed and we were unable to recover it.
00:37:28.058 [2024-09-29 16:45:28.435435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.058 [2024-09-29 16:45:28.435474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.058 qpair failed and we were unable to recover it.
00:37:28.058 [2024-09-29 16:45:28.435603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.058 [2024-09-29 16:45:28.435638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.058 qpair failed and we were unable to recover it.
00:37:28.058 [2024-09-29 16:45:28.435765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.058 [2024-09-29 16:45:28.435800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.058 qpair failed and we were unable to recover it.
00:37:28.058 [2024-09-29 16:45:28.435949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.058 [2024-09-29 16:45:28.435983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.058 qpair failed and we were unable to recover it.
00:37:28.058 [2024-09-29 16:45:28.436207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.058 [2024-09-29 16:45:28.436244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.058 qpair failed and we were unable to recover it.
00:37:28.058 [2024-09-29 16:45:28.436372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.058 [2024-09-29 16:45:28.436409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.058 qpair failed and we were unable to recover it.
00:37:28.058 [2024-09-29 16:45:28.436587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.058 [2024-09-29 16:45:28.436622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.058 qpair failed and we were unable to recover it.
00:37:28.058 [2024-09-29 16:45:28.436781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.058 [2024-09-29 16:45:28.436815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.058 qpair failed and we were unable to recover it.
00:37:28.058 [2024-09-29 16:45:28.436961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.058 [2024-09-29 16:45:28.436995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.058 qpair failed and we were unable to recover it.
00:37:28.058 [2024-09-29 16:45:28.437136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.058 [2024-09-29 16:45:28.437169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.058 qpair failed and we were unable to recover it.
00:37:28.058 [2024-09-29 16:45:28.437417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.058 [2024-09-29 16:45:28.437454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.058 qpair failed and we were unable to recover it.
00:37:28.058 [2024-09-29 16:45:28.437624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.058 [2024-09-29 16:45:28.437658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.058 qpair failed and we were unable to recover it.
00:37:28.058 [2024-09-29 16:45:28.437808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.058 [2024-09-29 16:45:28.437856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.059 qpair failed and we were unable to recover it.
00:37:28.059 [2024-09-29 16:45:28.438035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.059 [2024-09-29 16:45:28.438080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.059 qpair failed and we were unable to recover it.
00:37:28.059 [2024-09-29 16:45:28.438208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.059 [2024-09-29 16:45:28.438246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.059 qpair failed and we were unable to recover it.
00:37:28.059 [2024-09-29 16:45:28.438376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.059 [2024-09-29 16:45:28.438412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.059 qpair failed and we were unable to recover it.
00:37:28.059 [2024-09-29 16:45:28.438612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.059 [2024-09-29 16:45:28.438659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.059 qpair failed and we were unable to recover it.
00:37:28.059 [2024-09-29 16:45:28.438819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.059 [2024-09-29 16:45:28.438856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.059 qpair failed and we were unable to recover it.
00:37:28.059 [2024-09-29 16:45:28.438987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.059 [2024-09-29 16:45:28.439022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.059 qpair failed and we were unable to recover it.
00:37:28.059 [2024-09-29 16:45:28.439160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.059 [2024-09-29 16:45:28.439193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.059 qpair failed and we were unable to recover it.
00:37:28.059 [2024-09-29 16:45:28.439331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.059 [2024-09-29 16:45:28.439364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.059 qpair failed and we were unable to recover it.
00:37:28.059 [2024-09-29 16:45:28.439495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.059 [2024-09-29 16:45:28.439528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.059 qpair failed and we were unable to recover it.
00:37:28.059 [2024-09-29 16:45:28.439711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.059 [2024-09-29 16:45:28.439746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.059 qpair failed and we were unable to recover it.
00:37:28.059 [2024-09-29 16:45:28.439860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.059 [2024-09-29 16:45:28.439894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.059 qpair failed and we were unable to recover it.
00:37:28.059 [2024-09-29 16:45:28.440061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.059 [2024-09-29 16:45:28.440116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.059 qpair failed and we were unable to recover it.
00:37:28.059 [2024-09-29 16:45:28.440322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.059 [2024-09-29 16:45:28.440380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.059 qpair failed and we were unable to recover it.
00:37:28.059 [2024-09-29 16:45:28.440531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.059 [2024-09-29 16:45:28.440567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.059 qpair failed and we were unable to recover it.
00:37:28.059 [2024-09-29 16:45:28.440721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.059 [2024-09-29 16:45:28.440756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.059 qpair failed and we were unable to recover it.
00:37:28.059 [2024-09-29 16:45:28.440891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.059 [2024-09-29 16:45:28.440947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.059 qpair failed and we were unable to recover it.
00:37:28.059 [2024-09-29 16:45:28.441116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.059 [2024-09-29 16:45:28.441169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.059 qpair failed and we were unable to recover it.
00:37:28.059 [2024-09-29 16:45:28.441410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.059 [2024-09-29 16:45:28.441468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.059 qpair failed and we were unable to recover it.
00:37:28.059 [2024-09-29 16:45:28.441605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.059 [2024-09-29 16:45:28.441639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.059 qpair failed and we were unable to recover it.
00:37:28.059 [2024-09-29 16:45:28.441787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.059 [2024-09-29 16:45:28.441821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.059 qpair failed and we were unable to recover it.
00:37:28.059 [2024-09-29 16:45:28.441953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.059 [2024-09-29 16:45:28.442004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.059 qpair failed and we were unable to recover it.
00:37:28.059 [2024-09-29 16:45:28.442167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.059 [2024-09-29 16:45:28.442217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.059 qpair failed and we were unable to recover it.
00:37:28.059 [2024-09-29 16:45:28.442339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.059 [2024-09-29 16:45:28.442373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.059 qpair failed and we were unable to recover it.
00:37:28.059 [2024-09-29 16:45:28.442517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.059 [2024-09-29 16:45:28.442550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.059 qpair failed and we were unable to recover it.
00:37:28.059 [2024-09-29 16:45:28.442664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.059 [2024-09-29 16:45:28.442704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.059 qpair failed and we were unable to recover it.
00:37:28.059 [2024-09-29 16:45:28.442914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.059 [2024-09-29 16:45:28.442949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.059 qpair failed and we were unable to recover it.
00:37:28.059 [2024-09-29 16:45:28.443108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.059 [2024-09-29 16:45:28.443161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.059 qpair failed and we were unable to recover it.
00:37:28.059 [2024-09-29 16:45:28.443286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.059 [2024-09-29 16:45:28.443320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.059 qpair failed and we were unable to recover it.
00:37:28.059 [2024-09-29 16:45:28.443462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.059 [2024-09-29 16:45:28.443495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.059 qpair failed and we were unable to recover it.
00:37:28.059 [2024-09-29 16:45:28.443639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.059 [2024-09-29 16:45:28.443681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.059 qpair failed and we were unable to recover it.
00:37:28.059 [2024-09-29 16:45:28.443800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.059 [2024-09-29 16:45:28.443833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.059 qpair failed and we were unable to recover it.
00:37:28.059 [2024-09-29 16:45:28.443963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.059 [2024-09-29 16:45:28.444013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.059 qpair failed and we were unable to recover it.
00:37:28.059 [2024-09-29 16:45:28.444141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.059 [2024-09-29 16:45:28.444195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.059 qpair failed and we were unable to recover it.
00:37:28.059 [2024-09-29 16:45:28.444342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.059 [2024-09-29 16:45:28.444375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.059 qpair failed and we were unable to recover it.
00:37:28.059 [2024-09-29 16:45:28.444514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.059 [2024-09-29 16:45:28.444547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.059 qpair failed and we were unable to recover it.
00:37:28.059 [2024-09-29 16:45:28.444720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.059 [2024-09-29 16:45:28.444774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.059 qpair failed and we were unable to recover it.
00:37:28.059 [2024-09-29 16:45:28.444930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.059 [2024-09-29 16:45:28.444965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.059 qpair failed and we were unable to recover it.
00:37:28.059 [2024-09-29 16:45:28.445113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.060 [2024-09-29 16:45:28.445151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.060 qpair failed and we were unable to recover it.
00:37:28.060 [2024-09-29 16:45:28.445308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.060 [2024-09-29 16:45:28.445360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.060 qpair failed and we were unable to recover it.
00:37:28.060 [2024-09-29 16:45:28.445540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.060 [2024-09-29 16:45:28.445573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.060 qpair failed and we were unable to recover it.
00:37:28.060 [2024-09-29 16:45:28.445689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.060 [2024-09-29 16:45:28.445750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.060 qpair failed and we were unable to recover it.
00:37:28.060 [2024-09-29 16:45:28.445937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.060 [2024-09-29 16:45:28.445991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.060 qpair failed and we were unable to recover it.
00:37:28.060 [2024-09-29 16:45:28.446135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.060 [2024-09-29 16:45:28.446169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.060 qpair failed and we were unable to recover it.
00:37:28.060 [2024-09-29 16:45:28.446291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.060 [2024-09-29 16:45:28.446325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.060 qpair failed and we were unable to recover it.
00:37:28.060 [2024-09-29 16:45:28.446489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.060 [2024-09-29 16:45:28.446523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.060 qpair failed and we were unable to recover it.
00:37:28.060 [2024-09-29 16:45:28.446662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.060 [2024-09-29 16:45:28.446702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.060 qpair failed and we were unable to recover it.
00:37:28.060 [2024-09-29 16:45:28.446860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.060 [2024-09-29 16:45:28.446912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.060 qpair failed and we were unable to recover it.
00:37:28.060 [2024-09-29 16:45:28.447136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.060 [2024-09-29 16:45:28.447173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.060 qpair failed and we were unable to recover it.
00:37:28.060 [2024-09-29 16:45:28.447359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.060 [2024-09-29 16:45:28.447394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.060 qpair failed and we were unable to recover it.
00:37:28.060 [2024-09-29 16:45:28.447516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.060 [2024-09-29 16:45:28.447550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.060 qpair failed and we were unable to recover it.
00:37:28.060 [2024-09-29 16:45:28.447691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.060 [2024-09-29 16:45:28.447725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.060 qpair failed and we were unable to recover it.
00:37:28.060 [2024-09-29 16:45:28.447876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.060 [2024-09-29 16:45:28.447910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.060 qpair failed and we were unable to recover it.
00:37:28.060 [2024-09-29 16:45:28.448049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.060 [2024-09-29 16:45:28.448083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.060 qpair failed and we were unable to recover it.
00:37:28.060 [2024-09-29 16:45:28.448216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.060 [2024-09-29 16:45:28.448250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.060 qpair failed and we were unable to recover it.
00:37:28.060 [2024-09-29 16:45:28.448396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.060 [2024-09-29 16:45:28.448431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.060 qpair failed and we were unable to recover it.
00:37:28.060 [2024-09-29 16:45:28.448579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.060 [2024-09-29 16:45:28.448613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.060 qpair failed and we were unable to recover it.
00:37:28.060 [2024-09-29 16:45:28.448764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.060 [2024-09-29 16:45:28.448798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.060 qpair failed and we were unable to recover it.
00:37:28.060 [2024-09-29 16:45:28.448965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.060 [2024-09-29 16:45:28.449012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.060 qpair failed and we were unable to recover it.
00:37:28.060 [2024-09-29 16:45:28.449186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.060 [2024-09-29 16:45:28.449222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.060 qpair failed and we were unable to recover it.
00:37:28.060 [2024-09-29 16:45:28.449374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.060 [2024-09-29 16:45:28.449408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.060 qpair failed and we were unable to recover it.
00:37:28.060 [2024-09-29 16:45:28.449575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.060 [2024-09-29 16:45:28.449609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.060 qpair failed and we were unable to recover it.
00:37:28.060 [2024-09-29 16:45:28.449785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.060 [2024-09-29 16:45:28.449823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.060 qpair failed and we were unable to recover it.
00:37:28.060 [2024-09-29 16:45:28.449979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.060 [2024-09-29 16:45:28.450016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.060 qpair failed and we were unable to recover it.
00:37:28.060 [2024-09-29 16:45:28.450181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.060 [2024-09-29 16:45:28.450214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.060 qpair failed and we were unable to recover it.
00:37:28.060 [2024-09-29 16:45:28.450380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.060 [2024-09-29 16:45:28.450417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.060 qpair failed and we were unable to recover it.
00:37:28.060 [2024-09-29 16:45:28.450536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.060 [2024-09-29 16:45:28.450586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.060 qpair failed and we were unable to recover it.
00:37:28.060 [2024-09-29 16:45:28.450725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.060 [2024-09-29 16:45:28.450759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.060 qpair failed and we were unable to recover it.
00:37:28.060 [2024-09-29 16:45:28.450936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.060 [2024-09-29 16:45:28.450989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.060 qpair failed and we were unable to recover it.
00:37:28.060 [2024-09-29 16:45:28.451256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.060 [2024-09-29 16:45:28.451317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.060 qpair failed and we were unable to recover it.
00:37:28.060 [2024-09-29 16:45:28.451590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.060 [2024-09-29 16:45:28.451654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.060 qpair failed and we were unable to recover it.
00:37:28.060 [2024-09-29 16:45:28.451827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.060 [2024-09-29 16:45:28.451863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.060 qpair failed and we were unable to recover it.
00:37:28.060 [2024-09-29 16:45:28.452004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.060 [2024-09-29 16:45:28.452037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.060 qpair failed and we were unable to recover it.
00:37:28.060 [2024-09-29 16:45:28.452152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.060 [2024-09-29 16:45:28.452188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.060 qpair failed and we were unable to recover it.
00:37:28.060 [2024-09-29 16:45:28.452358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.060 [2024-09-29 16:45:28.452390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.060 qpair failed and we were unable to recover it.
00:37:28.060 [2024-09-29 16:45:28.452512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.060 [2024-09-29 16:45:28.452545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.060 qpair failed and we were unable to recover it.
00:37:28.060 [2024-09-29 16:45:28.452690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.060 [2024-09-29 16:45:28.452724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.060 qpair failed and we were unable to recover it.
00:37:28.061 [2024-09-29 16:45:28.452893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.061 [2024-09-29 16:45:28.452932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.061 qpair failed and we were unable to recover it.
00:37:28.061 [2024-09-29 16:45:28.453084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.061 [2024-09-29 16:45:28.453122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.061 qpair failed and we were unable to recover it.
00:37:28.061 [2024-09-29 16:45:28.453330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.061 [2024-09-29 16:45:28.453389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.061 qpair failed and we were unable to recover it.
00:37:28.061 [2024-09-29 16:45:28.453511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.061 [2024-09-29 16:45:28.453547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.061 qpair failed and we were unable to recover it.
00:37:28.061 [2024-09-29 16:45:28.453693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.061 [2024-09-29 16:45:28.453750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.061 qpair failed and we were unable to recover it.
00:37:28.061 [2024-09-29 16:45:28.453871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.061 [2024-09-29 16:45:28.453903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.061 qpair failed and we were unable to recover it.
00:37:28.061 [2024-09-29 16:45:28.454071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.061 [2024-09-29 16:45:28.454107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.061 qpair failed and we were unable to recover it.
00:37:28.061 [2024-09-29 16:45:28.454325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.061 [2024-09-29 16:45:28.454384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.061 qpair failed and we were unable to recover it.
00:37:28.061 [2024-09-29 16:45:28.454578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.061 [2024-09-29 16:45:28.454617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.061 qpair failed and we were unable to recover it.
00:37:28.061 [2024-09-29 16:45:28.454814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.061 [2024-09-29 16:45:28.454850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.061 qpair failed and we were unable to recover it.
00:37:28.061 [2024-09-29 16:45:28.455019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.061 [2024-09-29 16:45:28.455087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.061 qpair failed and we were unable to recover it.
00:37:28.061 [2024-09-29 16:45:28.455336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.061 [2024-09-29 16:45:28.455394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.061 qpair failed and we were unable to recover it.
00:37:28.061 [2024-09-29 16:45:28.455550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.061 [2024-09-29 16:45:28.455586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.061 qpair failed and we were unable to recover it.
00:37:28.061 [2024-09-29 16:45:28.455716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.061 [2024-09-29 16:45:28.455750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.061 qpair failed and we were unable to recover it.
00:37:28.061 [2024-09-29 16:45:28.455864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.061 [2024-09-29 16:45:28.455897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.061 qpair failed and we were unable to recover it.
00:37:28.061 [2024-09-29 16:45:28.456123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.061 [2024-09-29 16:45:28.456181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.061 qpair failed and we were unable to recover it.
00:37:28.061 [2024-09-29 16:45:28.456432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.061 [2024-09-29 16:45:28.456465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.061 qpair failed and we were unable to recover it.
00:37:28.061 [2024-09-29 16:45:28.456615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.061 [2024-09-29 16:45:28.456648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.061 qpair failed and we were unable to recover it.
00:37:28.061 [2024-09-29 16:45:28.456833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.061 [2024-09-29 16:45:28.456880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.061 qpair failed and we were unable to recover it. 00:37:28.061 [2024-09-29 16:45:28.457045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.061 [2024-09-29 16:45:28.457093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.061 qpair failed and we were unable to recover it. 00:37:28.061 [2024-09-29 16:45:28.457329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.061 [2024-09-29 16:45:28.457388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.061 qpair failed and we were unable to recover it. 00:37:28.061 [2024-09-29 16:45:28.457530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.061 [2024-09-29 16:45:28.457566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.061 qpair failed and we were unable to recover it. 00:37:28.061 [2024-09-29 16:45:28.457701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.061 [2024-09-29 16:45:28.457735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.061 qpair failed and we were unable to recover it. 
00:37:28.061 [2024-09-29 16:45:28.457913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.061 [2024-09-29 16:45:28.457946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.061 qpair failed and we were unable to recover it. 00:37:28.061 [2024-09-29 16:45:28.458120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.061 [2024-09-29 16:45:28.458153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.061 qpair failed and we were unable to recover it. 00:37:28.061 [2024-09-29 16:45:28.458267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.061 [2024-09-29 16:45:28.458301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.061 qpair failed and we were unable to recover it. 00:37:28.061 [2024-09-29 16:45:28.458446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.061 [2024-09-29 16:45:28.458478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.061 qpair failed and we were unable to recover it. 00:37:28.061 [2024-09-29 16:45:28.458649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.061 [2024-09-29 16:45:28.458691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.061 qpair failed and we were unable to recover it. 
00:37:28.061 [2024-09-29 16:45:28.458854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.061 [2024-09-29 16:45:28.458903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.061 qpair failed and we were unable to recover it. 00:37:28.061 [2024-09-29 16:45:28.459096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.061 [2024-09-29 16:45:28.459143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.061 qpair failed and we were unable to recover it. 00:37:28.061 [2024-09-29 16:45:28.459266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.061 [2024-09-29 16:45:28.459320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.061 qpair failed and we were unable to recover it. 00:37:28.061 [2024-09-29 16:45:28.459504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.061 [2024-09-29 16:45:28.459541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.061 qpair failed and we were unable to recover it. 00:37:28.061 [2024-09-29 16:45:28.459676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.061 [2024-09-29 16:45:28.459730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.061 qpair failed and we were unable to recover it. 
00:37:28.061 [2024-09-29 16:45:28.459892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.061 [2024-09-29 16:45:28.459938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.061 qpair failed and we were unable to recover it. 00:37:28.061 [2024-09-29 16:45:28.460092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.061 [2024-09-29 16:45:28.460127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.061 qpair failed and we were unable to recover it. 00:37:28.061 [2024-09-29 16:45:28.460273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.061 [2024-09-29 16:45:28.460307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.061 qpair failed and we were unable to recover it. 00:37:28.061 [2024-09-29 16:45:28.460550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.061 [2024-09-29 16:45:28.460587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.061 qpair failed and we were unable to recover it. 00:37:28.061 [2024-09-29 16:45:28.460754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.061 [2024-09-29 16:45:28.460788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.061 qpair failed and we were unable to recover it. 
00:37:28.061 [2024-09-29 16:45:28.460919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.062 [2024-09-29 16:45:28.460966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.062 qpair failed and we were unable to recover it. 00:37:28.062 [2024-09-29 16:45:28.461137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.062 [2024-09-29 16:45:28.461190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.062 qpair failed and we were unable to recover it. 00:37:28.062 [2024-09-29 16:45:28.461350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.062 [2024-09-29 16:45:28.461402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.062 qpair failed and we were unable to recover it. 00:37:28.062 [2024-09-29 16:45:28.461551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.062 [2024-09-29 16:45:28.461586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.062 qpair failed and we were unable to recover it. 00:37:28.062 [2024-09-29 16:45:28.461754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.062 [2024-09-29 16:45:28.461788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.062 qpair failed and we were unable to recover it. 
00:37:28.062 [2024-09-29 16:45:28.461920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.062 [2024-09-29 16:45:28.461971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.062 qpair failed and we were unable to recover it. 00:37:28.062 [2024-09-29 16:45:28.462148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.062 [2024-09-29 16:45:28.462181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.062 qpair failed and we were unable to recover it. 00:37:28.062 [2024-09-29 16:45:28.462337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.062 [2024-09-29 16:45:28.462370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.062 qpair failed and we were unable to recover it. 00:37:28.062 [2024-09-29 16:45:28.462472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.062 [2024-09-29 16:45:28.462505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.062 qpair failed and we were unable to recover it. 00:37:28.062 [2024-09-29 16:45:28.462606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.062 [2024-09-29 16:45:28.462640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.062 qpair failed and we were unable to recover it. 
00:37:28.062 [2024-09-29 16:45:28.462768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.062 [2024-09-29 16:45:28.462806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.062 qpair failed and we were unable to recover it. 00:37:28.062 [2024-09-29 16:45:28.462932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.062 [2024-09-29 16:45:28.462980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.062 qpair failed and we were unable to recover it. 00:37:28.062 [2024-09-29 16:45:28.463220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.062 [2024-09-29 16:45:28.463294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.062 qpair failed and we were unable to recover it. 00:37:28.062 [2024-09-29 16:45:28.463472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.062 [2024-09-29 16:45:28.463507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.062 qpair failed and we were unable to recover it. 00:37:28.062 [2024-09-29 16:45:28.463634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.062 [2024-09-29 16:45:28.463668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.062 qpair failed and we were unable to recover it. 
00:37:28.062 [2024-09-29 16:45:28.463809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.062 [2024-09-29 16:45:28.463846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.062 qpair failed and we were unable to recover it. 00:37:28.062 [2024-09-29 16:45:28.464018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.062 [2024-09-29 16:45:28.464091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.062 qpair failed and we were unable to recover it. 00:37:28.062 [2024-09-29 16:45:28.464266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.062 [2024-09-29 16:45:28.464330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.062 qpair failed and we were unable to recover it. 00:37:28.062 [2024-09-29 16:45:28.464554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.062 [2024-09-29 16:45:28.464591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.062 qpair failed and we were unable to recover it. 00:37:28.062 [2024-09-29 16:45:28.464739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.062 [2024-09-29 16:45:28.464775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.062 qpair failed and we were unable to recover it. 
00:37:28.062 [2024-09-29 16:45:28.464978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.062 [2024-09-29 16:45:28.465031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.062 qpair failed and we were unable to recover it. 00:37:28.062 [2024-09-29 16:45:28.465193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.062 [2024-09-29 16:45:28.465243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.062 qpair failed and we were unable to recover it. 00:37:28.062 [2024-09-29 16:45:28.465397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.062 [2024-09-29 16:45:28.465448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.062 qpair failed and we were unable to recover it. 00:37:28.062 [2024-09-29 16:45:28.465591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.062 [2024-09-29 16:45:28.465624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.062 qpair failed and we were unable to recover it. 00:37:28.062 [2024-09-29 16:45:28.465805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.062 [2024-09-29 16:45:28.465839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.062 qpair failed and we were unable to recover it. 
00:37:28.062 [2024-09-29 16:45:28.465958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.062 [2024-09-29 16:45:28.465992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.062 qpair failed and we were unable to recover it. 00:37:28.062 [2024-09-29 16:45:28.466134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.062 [2024-09-29 16:45:28.466167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.062 qpair failed and we were unable to recover it. 00:37:28.062 [2024-09-29 16:45:28.466313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.062 [2024-09-29 16:45:28.466346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.062 qpair failed and we were unable to recover it. 00:37:28.062 [2024-09-29 16:45:28.466514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.062 [2024-09-29 16:45:28.466548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.062 qpair failed and we were unable to recover it. 00:37:28.062 [2024-09-29 16:45:28.466689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.062 [2024-09-29 16:45:28.466723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.062 qpair failed and we were unable to recover it. 
00:37:28.062 [2024-09-29 16:45:28.466882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.062 [2024-09-29 16:45:28.466918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.062 qpair failed and we were unable to recover it. 00:37:28.062 [2024-09-29 16:45:28.467143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.062 [2024-09-29 16:45:28.467201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.062 qpair failed and we were unable to recover it. 00:37:28.062 [2024-09-29 16:45:28.467349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.062 [2024-09-29 16:45:28.467383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.062 qpair failed and we were unable to recover it. 00:37:28.062 [2024-09-29 16:45:28.467515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.062 [2024-09-29 16:45:28.467553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.062 qpair failed and we were unable to recover it. 00:37:28.062 [2024-09-29 16:45:28.467682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.062 [2024-09-29 16:45:28.467718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.062 qpair failed and we were unable to recover it. 
00:37:28.062 [2024-09-29 16:45:28.467862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.062 [2024-09-29 16:45:28.467895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.062 qpair failed and we were unable to recover it. 00:37:28.062 [2024-09-29 16:45:28.468052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.062 [2024-09-29 16:45:28.468107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.062 qpair failed and we were unable to recover it. 00:37:28.062 [2024-09-29 16:45:28.468269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.062 [2024-09-29 16:45:28.468321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.062 qpair failed and we were unable to recover it. 00:37:28.062 [2024-09-29 16:45:28.468440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.063 [2024-09-29 16:45:28.468474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.063 qpair failed and we were unable to recover it. 00:37:28.063 [2024-09-29 16:45:28.468612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.063 [2024-09-29 16:45:28.468644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.063 qpair failed and we were unable to recover it. 
00:37:28.063 [2024-09-29 16:45:28.468786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.063 [2024-09-29 16:45:28.468819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.063 qpair failed and we were unable to recover it. 00:37:28.063 [2024-09-29 16:45:28.468990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.063 [2024-09-29 16:45:28.469025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.063 qpair failed and we were unable to recover it. 00:37:28.063 [2024-09-29 16:45:28.469167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.063 [2024-09-29 16:45:28.469200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.063 qpair failed and we were unable to recover it. 00:37:28.063 [2024-09-29 16:45:28.469342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.063 [2024-09-29 16:45:28.469375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.063 qpair failed and we were unable to recover it. 00:37:28.063 [2024-09-29 16:45:28.469542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.063 [2024-09-29 16:45:28.469591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.063 qpair failed and we were unable to recover it. 
00:37:28.063 [2024-09-29 16:45:28.469743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.063 [2024-09-29 16:45:28.469780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.063 qpair failed and we were unable to recover it. 00:37:28.063 [2024-09-29 16:45:28.469923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.063 [2024-09-29 16:45:28.469957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.063 qpair failed and we were unable to recover it. 00:37:28.063 [2024-09-29 16:45:28.470108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.063 [2024-09-29 16:45:28.470148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.063 qpair failed and we were unable to recover it. 00:37:28.063 [2024-09-29 16:45:28.470315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.063 [2024-09-29 16:45:28.470353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.063 qpair failed and we were unable to recover it. 00:37:28.063 [2024-09-29 16:45:28.470508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.063 [2024-09-29 16:45:28.470545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.063 qpair failed and we were unable to recover it. 
00:37:28.063 [2024-09-29 16:45:28.470690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.063 [2024-09-29 16:45:28.470726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.063 qpair failed and we were unable to recover it. 00:37:28.063 [2024-09-29 16:45:28.470898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.063 [2024-09-29 16:45:28.470949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.063 qpair failed and we were unable to recover it. 00:37:28.063 [2024-09-29 16:45:28.471139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.063 [2024-09-29 16:45:28.471192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.063 qpair failed and we were unable to recover it. 00:37:28.063 [2024-09-29 16:45:28.471378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.063 [2024-09-29 16:45:28.471438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.063 qpair failed and we were unable to recover it. 00:37:28.063 [2024-09-29 16:45:28.471579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.063 [2024-09-29 16:45:28.471611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.063 qpair failed and we were unable to recover it. 
00:37:28.063 [2024-09-29 16:45:28.471781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.063 [2024-09-29 16:45:28.471834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.063 qpair failed and we were unable to recover it.
[... the same three-line error pattern (posix.c:1055 connect() failed, errno = 111 → nvme_tcp.c:2399 sock connection error → "qpair failed and we were unable to recover it.") repeats continuously for tqpair=0x615000210000 and tqpair=0x61500021ff00, all against addr=10.0.0.2, port=4420, from 16:45:28.471 through 16:45:28.493 ...]
00:37:28.066 [2024-09-29 16:45:28.494196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.066 [2024-09-29 16:45:28.494256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.066 qpair failed and we were unable to recover it. 00:37:28.066 [2024-09-29 16:45:28.494416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.066 [2024-09-29 16:45:28.494468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.066 qpair failed and we were unable to recover it. 00:37:28.066 [2024-09-29 16:45:28.494576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.066 [2024-09-29 16:45:28.494609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.066 qpair failed and we were unable to recover it. 00:37:28.066 [2024-09-29 16:45:28.494799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.066 [2024-09-29 16:45:28.494851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.066 qpair failed and we were unable to recover it. 00:37:28.066 [2024-09-29 16:45:28.495017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.066 [2024-09-29 16:45:28.495067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.066 qpair failed and we were unable to recover it. 
00:37:28.066 [2024-09-29 16:45:28.495193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.066 [2024-09-29 16:45:28.495246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.066 qpair failed and we were unable to recover it. 00:37:28.066 [2024-09-29 16:45:28.495388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.066 [2024-09-29 16:45:28.495422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.066 qpair failed and we were unable to recover it. 00:37:28.066 [2024-09-29 16:45:28.495589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.066 [2024-09-29 16:45:28.495623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.066 qpair failed and we were unable to recover it. 00:37:28.066 [2024-09-29 16:45:28.495742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.066 [2024-09-29 16:45:28.495776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.066 qpair failed and we were unable to recover it. 00:37:28.066 [2024-09-29 16:45:28.495937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.066 [2024-09-29 16:45:28.495990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.066 qpair failed and we were unable to recover it. 
00:37:28.066 [2024-09-29 16:45:28.496190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.066 [2024-09-29 16:45:28.496242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.066 qpair failed and we were unable to recover it. 00:37:28.066 [2024-09-29 16:45:28.496365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.066 [2024-09-29 16:45:28.496398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.066 qpair failed and we were unable to recover it. 00:37:28.066 [2024-09-29 16:45:28.496521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.066 [2024-09-29 16:45:28.496553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.066 qpair failed and we were unable to recover it. 00:37:28.066 [2024-09-29 16:45:28.496663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.066 [2024-09-29 16:45:28.496711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.066 qpair failed and we were unable to recover it. 00:37:28.066 [2024-09-29 16:45:28.496877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.066 [2024-09-29 16:45:28.496909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.066 qpair failed and we were unable to recover it. 
00:37:28.066 [2024-09-29 16:45:28.497027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.066 [2024-09-29 16:45:28.497061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.066 qpair failed and we were unable to recover it. 00:37:28.066 [2024-09-29 16:45:28.497201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.066 [2024-09-29 16:45:28.497235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.066 qpair failed and we were unable to recover it. 00:37:28.066 [2024-09-29 16:45:28.497370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.066 [2024-09-29 16:45:28.497403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.067 qpair failed and we were unable to recover it. 00:37:28.067 [2024-09-29 16:45:28.497509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.067 [2024-09-29 16:45:28.497541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.067 qpair failed and we were unable to recover it. 00:37:28.067 [2024-09-29 16:45:28.497707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.067 [2024-09-29 16:45:28.497742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.067 qpair failed and we were unable to recover it. 
00:37:28.067 [2024-09-29 16:45:28.497889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.067 [2024-09-29 16:45:28.497922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.067 qpair failed and we were unable to recover it. 00:37:28.067 [2024-09-29 16:45:28.498065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.067 [2024-09-29 16:45:28.498098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.067 qpair failed and we were unable to recover it. 00:37:28.067 [2024-09-29 16:45:28.498237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.067 [2024-09-29 16:45:28.498270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.067 qpair failed and we were unable to recover it. 00:37:28.067 [2024-09-29 16:45:28.498382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.067 [2024-09-29 16:45:28.498421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.067 qpair failed and we were unable to recover it. 00:37:28.067 [2024-09-29 16:45:28.498607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.067 [2024-09-29 16:45:28.498655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.067 qpair failed and we were unable to recover it. 
00:37:28.067 [2024-09-29 16:45:28.498841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.067 [2024-09-29 16:45:28.498882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.067 qpair failed and we were unable to recover it. 00:37:28.067 [2024-09-29 16:45:28.499050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.067 [2024-09-29 16:45:28.499088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.067 qpair failed and we were unable to recover it. 00:37:28.067 [2024-09-29 16:45:28.499304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.067 [2024-09-29 16:45:28.499362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.067 qpair failed and we were unable to recover it. 00:37:28.067 [2024-09-29 16:45:28.499542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.067 [2024-09-29 16:45:28.499580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.067 qpair failed and we were unable to recover it. 00:37:28.067 [2024-09-29 16:45:28.499745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.067 [2024-09-29 16:45:28.499797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.067 qpair failed and we were unable to recover it. 
00:37:28.067 [2024-09-29 16:45:28.499966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.067 [2024-09-29 16:45:28.500000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.067 qpair failed and we were unable to recover it. 00:37:28.067 [2024-09-29 16:45:28.500181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.067 [2024-09-29 16:45:28.500219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.067 qpair failed and we were unable to recover it. 00:37:28.067 [2024-09-29 16:45:28.500375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.067 [2024-09-29 16:45:28.500412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.067 qpair failed and we were unable to recover it. 00:37:28.067 [2024-09-29 16:45:28.500577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.067 [2024-09-29 16:45:28.500612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.067 qpair failed and we were unable to recover it. 00:37:28.067 [2024-09-29 16:45:28.500771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.067 [2024-09-29 16:45:28.500806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.067 qpair failed and we were unable to recover it. 
00:37:28.067 [2024-09-29 16:45:28.501001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.067 [2024-09-29 16:45:28.501038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.067 qpair failed and we were unable to recover it. 00:37:28.067 [2024-09-29 16:45:28.501252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.067 [2024-09-29 16:45:28.501290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.067 qpair failed and we were unable to recover it. 00:37:28.067 [2024-09-29 16:45:28.501456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.067 [2024-09-29 16:45:28.501495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.067 qpair failed and we were unable to recover it. 00:37:28.067 [2024-09-29 16:45:28.501638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.067 [2024-09-29 16:45:28.501683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.067 qpair failed and we were unable to recover it. 00:37:28.067 [2024-09-29 16:45:28.501842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.067 [2024-09-29 16:45:28.501878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.067 qpair failed and we were unable to recover it. 
00:37:28.067 [2024-09-29 16:45:28.502007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.067 [2024-09-29 16:45:28.502044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.067 qpair failed and we were unable to recover it. 00:37:28.067 [2024-09-29 16:45:28.502218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.067 [2024-09-29 16:45:28.502270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.067 qpair failed and we were unable to recover it. 00:37:28.067 [2024-09-29 16:45:28.502430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.067 [2024-09-29 16:45:28.502482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.067 qpair failed and we were unable to recover it. 00:37:28.067 [2024-09-29 16:45:28.502631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.067 [2024-09-29 16:45:28.502665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.067 qpair failed and we were unable to recover it. 00:37:28.067 [2024-09-29 16:45:28.502826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.067 [2024-09-29 16:45:28.502864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.067 qpair failed and we were unable to recover it. 
00:37:28.067 [2024-09-29 16:45:28.503016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.067 [2024-09-29 16:45:28.503087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.067 qpair failed and we were unable to recover it. 00:37:28.067 [2024-09-29 16:45:28.503226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.067 [2024-09-29 16:45:28.503259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.067 qpair failed and we were unable to recover it. 00:37:28.067 [2024-09-29 16:45:28.503410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.067 [2024-09-29 16:45:28.503444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.067 qpair failed and we were unable to recover it. 00:37:28.067 [2024-09-29 16:45:28.503568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.067 [2024-09-29 16:45:28.503604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.067 qpair failed and we were unable to recover it. 00:37:28.067 [2024-09-29 16:45:28.503760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.067 [2024-09-29 16:45:28.503795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.067 qpair failed and we were unable to recover it. 
00:37:28.067 [2024-09-29 16:45:28.503908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.067 [2024-09-29 16:45:28.503943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.067 qpair failed and we were unable to recover it. 00:37:28.067 [2024-09-29 16:45:28.504082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.067 [2024-09-29 16:45:28.504116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.068 qpair failed and we were unable to recover it. 00:37:28.068 [2024-09-29 16:45:28.504258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.068 [2024-09-29 16:45:28.504292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.068 qpair failed and we were unable to recover it. 00:37:28.068 [2024-09-29 16:45:28.504432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.068 [2024-09-29 16:45:28.504466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.068 qpair failed and we were unable to recover it. 00:37:28.068 [2024-09-29 16:45:28.504593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.068 [2024-09-29 16:45:28.504639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.068 qpair failed and we were unable to recover it. 
00:37:28.068 [2024-09-29 16:45:28.504823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.068 [2024-09-29 16:45:28.504858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.068 qpair failed and we were unable to recover it. 00:37:28.068 [2024-09-29 16:45:28.505011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.068 [2024-09-29 16:45:28.505063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.068 qpair failed and we were unable to recover it. 00:37:28.068 [2024-09-29 16:45:28.505212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.068 [2024-09-29 16:45:28.505271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.068 qpair failed and we were unable to recover it. 00:37:28.068 [2024-09-29 16:45:28.505431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.068 [2024-09-29 16:45:28.505482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.068 qpair failed and we were unable to recover it. 00:37:28.068 [2024-09-29 16:45:28.505646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.068 [2024-09-29 16:45:28.505689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.068 qpair failed and we were unable to recover it. 
00:37:28.068 [2024-09-29 16:45:28.505834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.068 [2024-09-29 16:45:28.505867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.068 qpair failed and we were unable to recover it. 00:37:28.068 [2024-09-29 16:45:28.505993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.068 [2024-09-29 16:45:28.506027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.068 qpair failed and we were unable to recover it. 00:37:28.068 [2024-09-29 16:45:28.506140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.068 [2024-09-29 16:45:28.506173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.068 qpair failed and we were unable to recover it. 00:37:28.068 [2024-09-29 16:45:28.506291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.068 [2024-09-29 16:45:28.506330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.068 qpair failed and we were unable to recover it. 00:37:28.068 [2024-09-29 16:45:28.506499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.068 [2024-09-29 16:45:28.506534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.068 qpair failed and we were unable to recover it. 
00:37:28.068 [2024-09-29 16:45:28.506648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.068 [2024-09-29 16:45:28.506690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.068 qpair failed and we were unable to recover it. 00:37:28.068 [2024-09-29 16:45:28.506809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.068 [2024-09-29 16:45:28.506842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.068 qpair failed and we were unable to recover it. 00:37:28.068 [2024-09-29 16:45:28.506985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.068 [2024-09-29 16:45:28.507018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.068 qpair failed and we were unable to recover it. 00:37:28.068 [2024-09-29 16:45:28.507159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.068 [2024-09-29 16:45:28.507194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.068 qpair failed and we were unable to recover it. 00:37:28.068 [2024-09-29 16:45:28.507317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.068 [2024-09-29 16:45:28.507352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.068 qpair failed and we were unable to recover it. 
00:37:28.068 [2024-09-29 16:45:28.507500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.068 [2024-09-29 16:45:28.507534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.068 qpair failed and we were unable to recover it. 00:37:28.068 [2024-09-29 16:45:28.507708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.068 [2024-09-29 16:45:28.507741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.068 qpair failed and we were unable to recover it. 00:37:28.068 [2024-09-29 16:45:28.507875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.068 [2024-09-29 16:45:28.507909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.068 qpair failed and we were unable to recover it. 00:37:28.068 [2024-09-29 16:45:28.508046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.068 [2024-09-29 16:45:28.508079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.068 qpair failed and we were unable to recover it. 00:37:28.068 [2024-09-29 16:45:28.508213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.068 [2024-09-29 16:45:28.508246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.068 qpair failed and we were unable to recover it. 
00:37:28.068 [2024-09-29 16:45:28.508422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.068 [2024-09-29 16:45:28.508457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.068 qpair failed and we were unable to recover it. 00:37:28.068 [2024-09-29 16:45:28.508593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.068 [2024-09-29 16:45:28.508627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.068 qpair failed and we were unable to recover it. 00:37:28.068 [2024-09-29 16:45:28.508786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.068 [2024-09-29 16:45:28.508821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.068 qpair failed and we were unable to recover it. 00:37:28.068 [2024-09-29 16:45:28.508958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.068 [2024-09-29 16:45:28.508991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.068 qpair failed and we were unable to recover it. 00:37:28.068 [2024-09-29 16:45:28.509169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.068 [2024-09-29 16:45:28.509203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.068 qpair failed and we were unable to recover it. 
00:37:28.068 [2024-09-29 16:45:28.509315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.068 [2024-09-29 16:45:28.509350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.068 qpair failed and we were unable to recover it. 00:37:28.068 [2024-09-29 16:45:28.509517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.068 [2024-09-29 16:45:28.509553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.068 qpair failed and we were unable to recover it. 00:37:28.068 [2024-09-29 16:45:28.509722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.068 [2024-09-29 16:45:28.509757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.068 qpair failed and we were unable to recover it. 00:37:28.068 [2024-09-29 16:45:28.509889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.068 [2024-09-29 16:45:28.509940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.068 qpair failed and we were unable to recover it. 00:37:28.068 [2024-09-29 16:45:28.510058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.068 [2024-09-29 16:45:28.510092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.068 qpair failed and we were unable to recover it. 
00:37:28.068 [2024-09-29 16:45:28.510268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.068 [2024-09-29 16:45:28.510302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.068 qpair failed and we were unable to recover it.
00:37:28.068 [2024-09-29 16:45:28.510451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.068 [2024-09-29 16:45:28.510485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.068 qpair failed and we were unable to recover it.
00:37:28.068 [2024-09-29 16:45:28.510620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.068 [2024-09-29 16:45:28.510652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.068 qpair failed and we were unable to recover it.
00:37:28.068 [2024-09-29 16:45:28.510787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.068 [2024-09-29 16:45:28.510835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.068 qpair failed and we were unable to recover it.
00:37:28.068 [2024-09-29 16:45:28.510964] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2780 is same with the state(6) to be set
00:37:28.068 [2024-09-29 16:45:28.511178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.068 [2024-09-29 16:45:28.511232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.069 qpair failed and we were unable to recover it.
00:37:28.069 [2024-09-29 16:45:28.511359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.069 [2024-09-29 16:45:28.511395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.069 qpair failed and we were unable to recover it.
00:37:28.069 [2024-09-29 16:45:28.511564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.069 [2024-09-29 16:45:28.511597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.069 qpair failed and we were unable to recover it.
00:37:28.069 [2024-09-29 16:45:28.511737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.069 [2024-09-29 16:45:28.511776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.069 qpair failed and we were unable to recover it.
00:37:28.069 [2024-09-29 16:45:28.511912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.069 [2024-09-29 16:45:28.511956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.069 qpair failed and we were unable to recover it.
00:37:28.069 [2024-09-29 16:45:28.512176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.069 [2024-09-29 16:45:28.512230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.069 qpair failed and we were unable to recover it.
00:37:28.069 [2024-09-29 16:45:28.512450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.069 [2024-09-29 16:45:28.512503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.069 qpair failed and we were unable to recover it.
00:37:28.069 [2024-09-29 16:45:28.512639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.069 [2024-09-29 16:45:28.512689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.069 qpair failed and we were unable to recover it.
00:37:28.069 [2024-09-29 16:45:28.512851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.069 [2024-09-29 16:45:28.512904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.069 qpair failed and we were unable to recover it.
00:37:28.069 [2024-09-29 16:45:28.513020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.069 [2024-09-29 16:45:28.513052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.069 qpair failed and we were unable to recover it.
00:37:28.069 [2024-09-29 16:45:28.513298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.069 [2024-09-29 16:45:28.513354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.069 qpair failed and we were unable to recover it.
00:37:28.069 [2024-09-29 16:45:28.513492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.069 [2024-09-29 16:45:28.513526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.069 qpair failed and we were unable to recover it.
00:37:28.069 [2024-09-29 16:45:28.513678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.069 [2024-09-29 16:45:28.513712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.069 qpair failed and we were unable to recover it.
00:37:28.069 [2024-09-29 16:45:28.513879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.069 [2024-09-29 16:45:28.513913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.069 qpair failed and we were unable to recover it.
00:37:28.069 [2024-09-29 16:45:28.514083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.069 [2024-09-29 16:45:28.514119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.069 qpair failed and we were unable to recover it.
00:37:28.069 [2024-09-29 16:45:28.514270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.069 [2024-09-29 16:45:28.514319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.069 qpair failed and we were unable to recover it.
00:37:28.069 [2024-09-29 16:45:28.514485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.069 [2024-09-29 16:45:28.514522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.069 qpair failed and we were unable to recover it.
00:37:28.069 [2024-09-29 16:45:28.514669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.069 [2024-09-29 16:45:28.514711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.069 qpair failed and we were unable to recover it.
00:37:28.069 [2024-09-29 16:45:28.514857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.069 [2024-09-29 16:45:28.514890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.069 qpair failed and we were unable to recover it.
00:37:28.069 [2024-09-29 16:45:28.515053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.069 [2024-09-29 16:45:28.515091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.069 qpair failed and we were unable to recover it.
00:37:28.069 [2024-09-29 16:45:28.515212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.069 [2024-09-29 16:45:28.515249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.069 qpair failed and we were unable to recover it.
00:37:28.069 [2024-09-29 16:45:28.515403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.069 [2024-09-29 16:45:28.515439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.069 qpair failed and we were unable to recover it.
00:37:28.069 [2024-09-29 16:45:28.515607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.069 [2024-09-29 16:45:28.515643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.069 qpair failed and we were unable to recover it.
00:37:28.069 [2024-09-29 16:45:28.515768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.069 [2024-09-29 16:45:28.515803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.069 qpair failed and we were unable to recover it.
00:37:28.069 [2024-09-29 16:45:28.515964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.069 [2024-09-29 16:45:28.516024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.069 qpair failed and we were unable to recover it.
00:37:28.069 [2024-09-29 16:45:28.516214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.069 [2024-09-29 16:45:28.516265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.069 qpair failed and we were unable to recover it.
00:37:28.069 [2024-09-29 16:45:28.516385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.069 [2024-09-29 16:45:28.516419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.069 qpair failed and we were unable to recover it.
00:37:28.069 [2024-09-29 16:45:28.516534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.069 [2024-09-29 16:45:28.516569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.069 qpair failed and we were unable to recover it.
00:37:28.069 [2024-09-29 16:45:28.516719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.069 [2024-09-29 16:45:28.516762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.069 qpair failed and we were unable to recover it.
00:37:28.069 [2024-09-29 16:45:28.516902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.069 [2024-09-29 16:45:28.516950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.069 qpair failed and we were unable to recover it.
00:37:28.069 [2024-09-29 16:45:28.517112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.069 [2024-09-29 16:45:28.517149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.069 qpair failed and we were unable to recover it.
00:37:28.069 [2024-09-29 16:45:28.517269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.069 [2024-09-29 16:45:28.517302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.069 qpair failed and we were unable to recover it.
00:37:28.069 [2024-09-29 16:45:28.517452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.069 [2024-09-29 16:45:28.517486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.069 qpair failed and we were unable to recover it.
00:37:28.069 [2024-09-29 16:45:28.517613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.069 [2024-09-29 16:45:28.517662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.069 qpair failed and we were unable to recover it.
00:37:28.069 [2024-09-29 16:45:28.517855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.069 [2024-09-29 16:45:28.517906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.069 qpair failed and we were unable to recover it.
00:37:28.069 [2024-09-29 16:45:28.518063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.069 [2024-09-29 16:45:28.518114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.069 qpair failed and we were unable to recover it.
00:37:28.069 [2024-09-29 16:45:28.518284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.069 [2024-09-29 16:45:28.518351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.069 qpair failed and we were unable to recover it.
00:37:28.069 [2024-09-29 16:45:28.518507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.069 [2024-09-29 16:45:28.518545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.069 qpair failed and we were unable to recover it.
00:37:28.069 [2024-09-29 16:45:28.518689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.069 [2024-09-29 16:45:28.518747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.069 qpair failed and we were unable to recover it.
00:37:28.069 [2024-09-29 16:45:28.518894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.070 [2024-09-29 16:45:28.518931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.070 qpair failed and we were unable to recover it.
00:37:28.070 [2024-09-29 16:45:28.519159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.070 [2024-09-29 16:45:28.519213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.070 qpair failed and we were unable to recover it.
00:37:28.070 [2024-09-29 16:45:28.519353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.070 [2024-09-29 16:45:28.519406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.070 qpair failed and we were unable to recover it.
00:37:28.070 [2024-09-29 16:45:28.519551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.070 [2024-09-29 16:45:28.519584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.070 qpair failed and we were unable to recover it.
00:37:28.070 [2024-09-29 16:45:28.519693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.070 [2024-09-29 16:45:28.519726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.070 qpair failed and we were unable to recover it.
00:37:28.070 [2024-09-29 16:45:28.519884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.070 [2024-09-29 16:45:28.519937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.070 qpair failed and we were unable to recover it.
00:37:28.070 [2024-09-29 16:45:28.520121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.070 [2024-09-29 16:45:28.520173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.070 qpair failed and we were unable to recover it.
00:37:28.070 [2024-09-29 16:45:28.520372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.070 [2024-09-29 16:45:28.520434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.070 qpair failed and we were unable to recover it.
00:37:28.070 [2024-09-29 16:45:28.520569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.070 [2024-09-29 16:45:28.520602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.070 qpair failed and we were unable to recover it.
00:37:28.070 [2024-09-29 16:45:28.520723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.070 [2024-09-29 16:45:28.520757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.070 qpair failed and we were unable to recover it.
00:37:28.070 [2024-09-29 16:45:28.520902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.070 [2024-09-29 16:45:28.520937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.070 qpair failed and we were unable to recover it.
00:37:28.070 [2024-09-29 16:45:28.521143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.070 [2024-09-29 16:45:28.521202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.070 qpair failed and we were unable to recover it.
00:37:28.070 [2024-09-29 16:45:28.521388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.070 [2024-09-29 16:45:28.521446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.070 qpair failed and we were unable to recover it.
00:37:28.070 [2024-09-29 16:45:28.521612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.070 [2024-09-29 16:45:28.521650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.070 qpair failed and we were unable to recover it.
00:37:28.070 [2024-09-29 16:45:28.521811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.070 [2024-09-29 16:45:28.521849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.070 qpair failed and we were unable to recover it.
00:37:28.070 [2024-09-29 16:45:28.522003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.070 [2024-09-29 16:45:28.522056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.070 qpair failed and we were unable to recover it.
00:37:28.070 [2024-09-29 16:45:28.522264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.070 [2024-09-29 16:45:28.522304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.070 qpair failed and we were unable to recover it.
00:37:28.070 [2024-09-29 16:45:28.522469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.070 [2024-09-29 16:45:28.522532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.070 qpair failed and we were unable to recover it.
00:37:28.070 [2024-09-29 16:45:28.522663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.070 [2024-09-29 16:45:28.522725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.070 qpair failed and we were unable to recover it.
00:37:28.070 [2024-09-29 16:45:28.522843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.070 [2024-09-29 16:45:28.522876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.070 qpair failed and we were unable to recover it.
00:37:28.070 [2024-09-29 16:45:28.522998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.070 [2024-09-29 16:45:28.523053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.070 qpair failed and we were unable to recover it.
00:37:28.070 [2024-09-29 16:45:28.523189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.070 [2024-09-29 16:45:28.523243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.070 qpair failed and we were unable to recover it.
00:37:28.070 [2024-09-29 16:45:28.523379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.070 [2024-09-29 16:45:28.523432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.070 qpair failed and we were unable to recover it.
00:37:28.070 [2024-09-29 16:45:28.523547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.070 [2024-09-29 16:45:28.523581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.070 qpair failed and we were unable to recover it.
00:37:28.070 [2024-09-29 16:45:28.523695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.070 [2024-09-29 16:45:28.523730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.070 qpair failed and we were unable to recover it.
00:37:28.070 [2024-09-29 16:45:28.523847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.070 [2024-09-29 16:45:28.523880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.070 qpair failed and we were unable to recover it.
00:37:28.070 [2024-09-29 16:45:28.523999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.070 [2024-09-29 16:45:28.524032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.070 qpair failed and we were unable to recover it.
00:37:28.070 [2024-09-29 16:45:28.524169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.070 [2024-09-29 16:45:28.524204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.070 qpair failed and we were unable to recover it.
00:37:28.070 [2024-09-29 16:45:28.524320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.070 [2024-09-29 16:45:28.524358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.070 qpair failed and we were unable to recover it.
00:37:28.070 [2024-09-29 16:45:28.524499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.070 [2024-09-29 16:45:28.524534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.070 qpair failed and we were unable to recover it.
00:37:28.070 [2024-09-29 16:45:28.524687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.070 [2024-09-29 16:45:28.524722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.070 qpair failed and we were unable to recover it.
00:37:28.070 [2024-09-29 16:45:28.524865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.070 [2024-09-29 16:45:28.524898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.070 qpair failed and we were unable to recover it.
00:37:28.070 [2024-09-29 16:45:28.525024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.070 [2024-09-29 16:45:28.525058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.070 qpair failed and we were unable to recover it.
00:37:28.070 [2024-09-29 16:45:28.525204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.070 [2024-09-29 16:45:28.525243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.070 qpair failed and we were unable to recover it.
00:37:28.070 [2024-09-29 16:45:28.525425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.070 [2024-09-29 16:45:28.525459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.070 qpair failed and we were unable to recover it.
00:37:28.070 [2024-09-29 16:45:28.525575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.070 [2024-09-29 16:45:28.525608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.070 qpair failed and we were unable to recover it.
00:37:28.070 [2024-09-29 16:45:28.525759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.070 [2024-09-29 16:45:28.525794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.070 qpair failed and we were unable to recover it.
00:37:28.070 [2024-09-29 16:45:28.525910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.070 [2024-09-29 16:45:28.525943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.070 qpair failed and we were unable to recover it.
00:37:28.070 [2024-09-29 16:45:28.526087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.070 [2024-09-29 16:45:28.526121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.071 qpair failed and we were unable to recover it.
00:37:28.071 [2024-09-29 16:45:28.526238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.071 [2024-09-29 16:45:28.526272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.071 qpair failed and we were unable to recover it.
00:37:28.071 [2024-09-29 16:45:28.526416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.071 [2024-09-29 16:45:28.526449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.071 qpair failed and we were unable to recover it.
00:37:28.071 [2024-09-29 16:45:28.526555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.071 [2024-09-29 16:45:28.526588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.071 qpair failed and we were unable to recover it.
00:37:28.071 [2024-09-29 16:45:28.526769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.071 [2024-09-29 16:45:28.526816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.071 qpair failed and we were unable to recover it.
00:37:28.071 [2024-09-29 16:45:28.526940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.071 [2024-09-29 16:45:28.526978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.071 qpair failed and we were unable to recover it.
00:37:28.071 [2024-09-29 16:45:28.527127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.071 [2024-09-29 16:45:28.527163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.071 qpair failed and we were unable to recover it.
00:37:28.071 [2024-09-29 16:45:28.527279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.071 [2024-09-29 16:45:28.527313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.071 qpair failed and we were unable to recover it.
00:37:28.071 [2024-09-29 16:45:28.527460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.071 [2024-09-29 16:45:28.527494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.071 qpair failed and we were unable to recover it.
00:37:28.071 [2024-09-29 16:45:28.527663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.071 [2024-09-29 16:45:28.527725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.071 qpair failed and we were unable to recover it.
00:37:28.071 [2024-09-29 16:45:28.527895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.071 [2024-09-29 16:45:28.527955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.071 qpair failed and we were unable to recover it.
00:37:28.071 [2024-09-29 16:45:28.528111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.071 [2024-09-29 16:45:28.528162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.071 qpair failed and we were unable to recover it.
00:37:28.071 [2024-09-29 16:45:28.528325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.071 [2024-09-29 16:45:28.528381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.071 qpair failed and we were unable to recover it.
00:37:28.071 [2024-09-29 16:45:28.528500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.071 [2024-09-29 16:45:28.528533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.071 qpair failed and we were unable to recover it.
00:37:28.071 [2024-09-29 16:45:28.528689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.071 [2024-09-29 16:45:28.528723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.071 qpair failed and we were unable to recover it. 00:37:28.071 [2024-09-29 16:45:28.528922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.071 [2024-09-29 16:45:28.528975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.071 qpair failed and we were unable to recover it. 00:37:28.071 [2024-09-29 16:45:28.529116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.071 [2024-09-29 16:45:28.529156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.071 qpair failed and we were unable to recover it. 00:37:28.071 [2024-09-29 16:45:28.529315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.071 [2024-09-29 16:45:28.529384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.071 qpair failed and we were unable to recover it. 00:37:28.071 [2024-09-29 16:45:28.529524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.071 [2024-09-29 16:45:28.529557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.071 qpair failed and we were unable to recover it. 
00:37:28.071 [2024-09-29 16:45:28.529684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.071 [2024-09-29 16:45:28.529718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.071 qpair failed and we were unable to recover it. 00:37:28.071 [2024-09-29 16:45:28.529831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.071 [2024-09-29 16:45:28.529865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.071 qpair failed and we were unable to recover it. 00:37:28.071 [2024-09-29 16:45:28.529996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.071 [2024-09-29 16:45:28.530081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.071 qpair failed and we were unable to recover it. 00:37:28.071 [2024-09-29 16:45:28.530227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.071 [2024-09-29 16:45:28.530282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.071 qpair failed and we were unable to recover it. 00:37:28.071 [2024-09-29 16:45:28.530456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.071 [2024-09-29 16:45:28.530504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.071 qpair failed and we were unable to recover it. 
00:37:28.071 [2024-09-29 16:45:28.530637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.071 [2024-09-29 16:45:28.530682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.071 qpair failed and we were unable to recover it. 00:37:28.071 [2024-09-29 16:45:28.530846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.071 [2024-09-29 16:45:28.530899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.071 qpair failed and we were unable to recover it. 00:37:28.071 [2024-09-29 16:45:28.531194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.071 [2024-09-29 16:45:28.531255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.071 qpair failed and we were unable to recover it. 00:37:28.071 [2024-09-29 16:45:28.531407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.071 [2024-09-29 16:45:28.531472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.071 qpair failed and we were unable to recover it. 00:37:28.071 [2024-09-29 16:45:28.531598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.071 [2024-09-29 16:45:28.531636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.071 qpair failed and we were unable to recover it. 
00:37:28.071 [2024-09-29 16:45:28.531796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.071 [2024-09-29 16:45:28.531832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.071 qpair failed and we were unable to recover it. 00:37:28.071 [2024-09-29 16:45:28.531957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.071 [2024-09-29 16:45:28.531995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.071 qpair failed and we were unable to recover it. 00:37:28.071 [2024-09-29 16:45:28.532154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.071 [2024-09-29 16:45:28.532206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.071 qpair failed and we were unable to recover it. 00:37:28.071 [2024-09-29 16:45:28.532336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.071 [2024-09-29 16:45:28.532388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.071 qpair failed and we were unable to recover it. 00:37:28.071 [2024-09-29 16:45:28.532497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.071 [2024-09-29 16:45:28.532531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.071 qpair failed and we were unable to recover it. 
00:37:28.071 [2024-09-29 16:45:28.532689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.071 [2024-09-29 16:45:28.532736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.071 qpair failed and we were unable to recover it. 00:37:28.071 [2024-09-29 16:45:28.532889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.071 [2024-09-29 16:45:28.532936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.071 qpair failed and we were unable to recover it. 00:37:28.071 [2024-09-29 16:45:28.533106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.071 [2024-09-29 16:45:28.533146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.071 qpair failed and we were unable to recover it. 00:37:28.071 [2024-09-29 16:45:28.533299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.071 [2024-09-29 16:45:28.533338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.071 qpair failed and we were unable to recover it. 00:37:28.071 [2024-09-29 16:45:28.533518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.071 [2024-09-29 16:45:28.533557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.071 qpair failed and we were unable to recover it. 
00:37:28.071 [2024-09-29 16:45:28.533743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.072 [2024-09-29 16:45:28.533778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.072 qpair failed and we were unable to recover it. 00:37:28.072 [2024-09-29 16:45:28.533918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.072 [2024-09-29 16:45:28.533972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.072 qpair failed and we were unable to recover it. 00:37:28.072 [2024-09-29 16:45:28.534114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.072 [2024-09-29 16:45:28.534166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.072 qpair failed and we were unable to recover it. 00:37:28.072 [2024-09-29 16:45:28.534304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.072 [2024-09-29 16:45:28.534340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.072 qpair failed and we were unable to recover it. 00:37:28.072 [2024-09-29 16:45:28.534504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.072 [2024-09-29 16:45:28.534536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.072 qpair failed and we were unable to recover it. 
00:37:28.072 [2024-09-29 16:45:28.534733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.072 [2024-09-29 16:45:28.534796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.072 qpair failed and we were unable to recover it. 00:37:28.072 [2024-09-29 16:45:28.534953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.072 [2024-09-29 16:45:28.534990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.072 qpair failed and we were unable to recover it. 00:37:28.072 [2024-09-29 16:45:28.535135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.072 [2024-09-29 16:45:28.535170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.072 qpair failed and we were unable to recover it. 00:37:28.072 [2024-09-29 16:45:28.535340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.072 [2024-09-29 16:45:28.535374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.072 qpair failed and we were unable to recover it. 00:37:28.072 [2024-09-29 16:45:28.535516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.072 [2024-09-29 16:45:28.535550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.072 qpair failed and we were unable to recover it. 
00:37:28.072 [2024-09-29 16:45:28.535664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.072 [2024-09-29 16:45:28.535706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.072 qpair failed and we were unable to recover it. 00:37:28.072 [2024-09-29 16:45:28.535831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.072 [2024-09-29 16:45:28.535865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.072 qpair failed and we were unable to recover it. 00:37:28.072 [2024-09-29 16:45:28.536000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.072 [2024-09-29 16:45:28.536033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.072 qpair failed and we were unable to recover it. 00:37:28.072 [2024-09-29 16:45:28.536155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.072 [2024-09-29 16:45:28.536187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.072 qpair failed and we were unable to recover it. 00:37:28.072 [2024-09-29 16:45:28.536335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.072 [2024-09-29 16:45:28.536369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.072 qpair failed and we were unable to recover it. 
00:37:28.072 [2024-09-29 16:45:28.536515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.072 [2024-09-29 16:45:28.536548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.072 qpair failed and we were unable to recover it. 00:37:28.072 [2024-09-29 16:45:28.536693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.072 [2024-09-29 16:45:28.536728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.072 qpair failed and we were unable to recover it. 00:37:28.072 [2024-09-29 16:45:28.536861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.072 [2024-09-29 16:45:28.536896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.072 qpair failed and we were unable to recover it. 00:37:28.072 [2024-09-29 16:45:28.537042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.072 [2024-09-29 16:45:28.537098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.072 qpair failed and we were unable to recover it. 00:37:28.072 [2024-09-29 16:45:28.537224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.072 [2024-09-29 16:45:28.537262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.072 qpair failed and we were unable to recover it. 
00:37:28.072 [2024-09-29 16:45:28.537423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.072 [2024-09-29 16:45:28.537462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.072 qpair failed and we were unable to recover it. 00:37:28.072 [2024-09-29 16:45:28.537609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.072 [2024-09-29 16:45:28.537646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.072 qpair failed and we were unable to recover it. 00:37:28.072 [2024-09-29 16:45:28.537813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.072 [2024-09-29 16:45:28.537861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.072 qpair failed and we were unable to recover it. 00:37:28.072 [2024-09-29 16:45:28.538012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.072 [2024-09-29 16:45:28.538067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.072 qpair failed and we were unable to recover it. 00:37:28.072 [2024-09-29 16:45:28.538249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.072 [2024-09-29 16:45:28.538285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.072 qpair failed and we were unable to recover it. 
00:37:28.072 [2024-09-29 16:45:28.538434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.072 [2024-09-29 16:45:28.538469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.072 qpair failed and we were unable to recover it. 00:37:28.072 [2024-09-29 16:45:28.538586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.072 [2024-09-29 16:45:28.538620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.072 qpair failed and we were unable to recover it. 00:37:28.072 [2024-09-29 16:45:28.538765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.072 [2024-09-29 16:45:28.538813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.072 qpair failed and we were unable to recover it. 00:37:28.072 [2024-09-29 16:45:28.538989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.072 [2024-09-29 16:45:28.539024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.072 qpair failed and we were unable to recover it. 00:37:28.072 [2024-09-29 16:45:28.539148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.072 [2024-09-29 16:45:28.539182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.072 qpair failed and we were unable to recover it. 
00:37:28.072 [2024-09-29 16:45:28.539329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.072 [2024-09-29 16:45:28.539363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.072 qpair failed and we were unable to recover it. 00:37:28.072 [2024-09-29 16:45:28.539509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.072 [2024-09-29 16:45:28.539548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.072 qpair failed and we were unable to recover it. 00:37:28.072 [2024-09-29 16:45:28.539666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.072 [2024-09-29 16:45:28.539707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.072 qpair failed and we were unable to recover it. 00:37:28.072 [2024-09-29 16:45:28.539841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.072 [2024-09-29 16:45:28.539879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.072 qpair failed and we were unable to recover it. 00:37:28.072 [2024-09-29 16:45:28.540025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.072 [2024-09-29 16:45:28.540078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.073 qpair failed and we were unable to recover it. 
00:37:28.073 [2024-09-29 16:45:28.540258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.073 [2024-09-29 16:45:28.540297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.073 qpair failed and we were unable to recover it. 00:37:28.073 [2024-09-29 16:45:28.540487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.073 [2024-09-29 16:45:28.540544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.073 qpair failed and we were unable to recover it. 00:37:28.073 [2024-09-29 16:45:28.540678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.073 [2024-09-29 16:45:28.540715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.073 qpair failed and we were unable to recover it. 00:37:28.073 [2024-09-29 16:45:28.540874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.073 [2024-09-29 16:45:28.540927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.073 qpair failed and we were unable to recover it. 00:37:28.073 [2024-09-29 16:45:28.541060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.073 [2024-09-29 16:45:28.541112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.073 qpair failed and we were unable to recover it. 
00:37:28.073 [2024-09-29 16:45:28.541288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.073 [2024-09-29 16:45:28.541353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.073 qpair failed and we were unable to recover it. 00:37:28.073 [2024-09-29 16:45:28.541500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.073 [2024-09-29 16:45:28.541550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.073 qpair failed and we were unable to recover it. 00:37:28.073 [2024-09-29 16:45:28.541659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.073 [2024-09-29 16:45:28.541700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.073 qpair failed and we were unable to recover it. 00:37:28.073 [2024-09-29 16:45:28.541834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.073 [2024-09-29 16:45:28.541871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.073 qpair failed and we were unable to recover it. 00:37:28.073 [2024-09-29 16:45:28.542027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.073 [2024-09-29 16:45:28.542080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.073 qpair failed and we were unable to recover it. 
00:37:28.073 [2024-09-29 16:45:28.542248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.073 [2024-09-29 16:45:28.542316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.073 qpair failed and we were unable to recover it. 00:37:28.073 [2024-09-29 16:45:28.542496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.073 [2024-09-29 16:45:28.542548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.073 qpair failed and we were unable to recover it. 00:37:28.073 [2024-09-29 16:45:28.542693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.073 [2024-09-29 16:45:28.542727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.073 qpair failed and we were unable to recover it. 00:37:28.073 [2024-09-29 16:45:28.542872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.073 [2024-09-29 16:45:28.542927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.073 qpair failed and we were unable to recover it. 00:37:28.073 [2024-09-29 16:45:28.543073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.073 [2024-09-29 16:45:28.543124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.073 qpair failed and we were unable to recover it. 
00:37:28.073 [2024-09-29 16:45:28.543267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.073 [2024-09-29 16:45:28.543300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.073 qpair failed and we were unable to recover it. 00:37:28.073 [2024-09-29 16:45:28.543411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.073 [2024-09-29 16:45:28.543445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.073 qpair failed and we were unable to recover it. 00:37:28.073 [2024-09-29 16:45:28.543565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.073 [2024-09-29 16:45:28.543598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.073 qpair failed and we were unable to recover it. 00:37:28.073 [2024-09-29 16:45:28.543761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.073 [2024-09-29 16:45:28.543797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.073 qpair failed and we were unable to recover it. 00:37:28.073 [2024-09-29 16:45:28.543934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.073 [2024-09-29 16:45:28.543986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.073 qpair failed and we were unable to recover it. 
00:37:28.073 [2024-09-29 16:45:28.544167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.073 [2024-09-29 16:45:28.544204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.073 qpair failed and we were unable to recover it. 00:37:28.073 [2024-09-29 16:45:28.544358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.073 [2024-09-29 16:45:28.544392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.073 qpair failed and we were unable to recover it. 00:37:28.073 [2024-09-29 16:45:28.544516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.073 [2024-09-29 16:45:28.544550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.073 qpair failed and we were unable to recover it. 00:37:28.073 [2024-09-29 16:45:28.544684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.073 [2024-09-29 16:45:28.544718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.073 qpair failed and we were unable to recover it. 00:37:28.073 [2024-09-29 16:45:28.544871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.073 [2024-09-29 16:45:28.544907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.073 qpair failed and we were unable to recover it. 
00:37:28.073 [2024-09-29 16:45:28.545023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.073 [2024-09-29 16:45:28.545058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.073 qpair failed and we were unable to recover it. 00:37:28.073 [2024-09-29 16:45:28.545251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.073 [2024-09-29 16:45:28.545304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.073 qpair failed and we were unable to recover it. 00:37:28.073 [2024-09-29 16:45:28.545456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.073 [2024-09-29 16:45:28.545489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.073 qpair failed and we were unable to recover it. 00:37:28.073 [2024-09-29 16:45:28.545665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.073 [2024-09-29 16:45:28.545720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.073 qpair failed and we were unable to recover it. 00:37:28.073 [2024-09-29 16:45:28.545839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.073 [2024-09-29 16:45:28.545883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.073 qpair failed and we were unable to recover it. 
00:37:28.073 [2024-09-29 16:45:28.546032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.073 [2024-09-29 16:45:28.546084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.073 qpair failed and we were unable to recover it. 00:37:28.073 [2024-09-29 16:45:28.546211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.073 [2024-09-29 16:45:28.546262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.073 qpair failed and we were unable to recover it. 00:37:28.073 [2024-09-29 16:45:28.546482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.073 [2024-09-29 16:45:28.546540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.073 qpair failed and we were unable to recover it. 00:37:28.073 [2024-09-29 16:45:28.546699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.073 [2024-09-29 16:45:28.546754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.073 qpair failed and we were unable to recover it. 00:37:28.073 [2024-09-29 16:45:28.546887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.073 [2024-09-29 16:45:28.546935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.073 qpair failed and we were unable to recover it. 
00:37:28.073 [2024-09-29 16:45:28.547083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.073 [2024-09-29 16:45:28.547140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.073 qpair failed and we were unable to recover it. 00:37:28.073 [2024-09-29 16:45:28.547285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.073 [2024-09-29 16:45:28.547345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.073 qpair failed and we were unable to recover it. 00:37:28.073 [2024-09-29 16:45:28.547461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.073 [2024-09-29 16:45:28.547494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.073 qpair failed and we were unable to recover it. 00:37:28.074 [2024-09-29 16:45:28.547610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.074 [2024-09-29 16:45:28.547643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.074 qpair failed and we were unable to recover it. 00:37:28.074 [2024-09-29 16:45:28.547790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.074 [2024-09-29 16:45:28.547837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.074 qpair failed and we were unable to recover it. 
00:37:28.074 [2024-09-29 16:45:28.547997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.074 [2024-09-29 16:45:28.548033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.074 qpair failed and we were unable to recover it. 00:37:28.074 [2024-09-29 16:45:28.548155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.074 [2024-09-29 16:45:28.548189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.074 qpair failed and we were unable to recover it. 00:37:28.074 [2024-09-29 16:45:28.548300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.074 [2024-09-29 16:45:28.548333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.074 qpair failed and we were unable to recover it. 00:37:28.074 [2024-09-29 16:45:28.548474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.074 [2024-09-29 16:45:28.548508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.074 qpair failed and we were unable to recover it. 00:37:28.074 [2024-09-29 16:45:28.548645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.074 [2024-09-29 16:45:28.548700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.074 qpair failed and we were unable to recover it. 
00:37:28.074 [2024-09-29 16:45:28.548854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.074 [2024-09-29 16:45:28.548892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.074 qpair failed and we were unable to recover it. 00:37:28.074 [2024-09-29 16:45:28.549034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.074 [2024-09-29 16:45:28.549073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.074 qpair failed and we were unable to recover it. 00:37:28.074 [2024-09-29 16:45:28.549209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.074 [2024-09-29 16:45:28.549247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.074 qpair failed and we were unable to recover it. 00:37:28.074 [2024-09-29 16:45:28.549479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.074 [2024-09-29 16:45:28.549549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.074 qpair failed and we were unable to recover it. 00:37:28.074 [2024-09-29 16:45:28.549714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.074 [2024-09-29 16:45:28.549769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.074 qpair failed and we were unable to recover it. 
00:37:28.074 [2024-09-29 16:45:28.549900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.074 [2024-09-29 16:45:28.549935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.074 qpair failed and we were unable to recover it. 00:37:28.074 [2024-09-29 16:45:28.550146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.074 [2024-09-29 16:45:28.550180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.074 qpair failed and we were unable to recover it. 00:37:28.074 [2024-09-29 16:45:28.550325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.074 [2024-09-29 16:45:28.550359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.074 qpair failed and we were unable to recover it. 00:37:28.074 [2024-09-29 16:45:28.550577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.074 [2024-09-29 16:45:28.550613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.074 qpair failed and we were unable to recover it. 00:37:28.074 [2024-09-29 16:45:28.550741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.074 [2024-09-29 16:45:28.550776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.074 qpair failed and we were unable to recover it. 
00:37:28.074 [2024-09-29 16:45:28.550925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.074 [2024-09-29 16:45:28.550959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.074 qpair failed and we were unable to recover it. 00:37:28.074 [2024-09-29 16:45:28.551128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.074 [2024-09-29 16:45:28.551162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.074 qpair failed and we were unable to recover it. 00:37:28.074 [2024-09-29 16:45:28.551362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.074 [2024-09-29 16:45:28.551447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.074 qpair failed and we were unable to recover it. 00:37:28.074 [2024-09-29 16:45:28.551611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.074 [2024-09-29 16:45:28.551649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.074 qpair failed and we were unable to recover it. 00:37:28.074 [2024-09-29 16:45:28.551797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.074 [2024-09-29 16:45:28.551845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.074 qpair failed and we were unable to recover it. 
00:37:28.074 [2024-09-29 16:45:28.551998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.074 [2024-09-29 16:45:28.552052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.074 qpair failed and we were unable to recover it. 00:37:28.074 [2024-09-29 16:45:28.552218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.074 [2024-09-29 16:45:28.552274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.074 qpair failed and we were unable to recover it. 00:37:28.074 [2024-09-29 16:45:28.552445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.074 [2024-09-29 16:45:28.552478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.074 qpair failed and we were unable to recover it. 00:37:28.074 [2024-09-29 16:45:28.552626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.074 [2024-09-29 16:45:28.552662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.074 qpair failed and we were unable to recover it. 00:37:28.074 [2024-09-29 16:45:28.552836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.074 [2024-09-29 16:45:28.552897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.074 qpair failed and we were unable to recover it. 
00:37:28.074 [2024-09-29 16:45:28.553047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.074 [2024-09-29 16:45:28.553089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.074 qpair failed and we were unable to recover it. 00:37:28.074 [2024-09-29 16:45:28.553281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.074 [2024-09-29 16:45:28.553319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.074 qpair failed and we were unable to recover it. 00:37:28.074 [2024-09-29 16:45:28.553514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.074 [2024-09-29 16:45:28.553552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.074 qpair failed and we were unable to recover it. 00:37:28.074 [2024-09-29 16:45:28.553718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.074 [2024-09-29 16:45:28.553766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.074 qpair failed and we were unable to recover it. 00:37:28.074 [2024-09-29 16:45:28.553914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.074 [2024-09-29 16:45:28.553970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.074 qpair failed and we were unable to recover it. 
00:37:28.074 [2024-09-29 16:45:28.554174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.074 [2024-09-29 16:45:28.554230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.074 qpair failed and we were unable to recover it. 00:37:28.074 [2024-09-29 16:45:28.554360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.074 [2024-09-29 16:45:28.554412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.074 qpair failed and we were unable to recover it. 00:37:28.074 [2024-09-29 16:45:28.554555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.074 [2024-09-29 16:45:28.554589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.074 qpair failed and we were unable to recover it. 00:37:28.074 [2024-09-29 16:45:28.554734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.074 [2024-09-29 16:45:28.554770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.074 qpair failed and we were unable to recover it. 00:37:28.074 [2024-09-29 16:45:28.554880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.074 [2024-09-29 16:45:28.554925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.074 qpair failed and we were unable to recover it. 
00:37:28.074 [2024-09-29 16:45:28.555066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.074 [2024-09-29 16:45:28.555100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.074 qpair failed and we were unable to recover it. 00:37:28.074 [2024-09-29 16:45:28.555210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.074 [2024-09-29 16:45:28.555249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.075 qpair failed and we were unable to recover it. 00:37:28.075 [2024-09-29 16:45:28.555406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.075 [2024-09-29 16:45:28.555442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.075 qpair failed and we were unable to recover it. 00:37:28.075 [2024-09-29 16:45:28.555559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.075 [2024-09-29 16:45:28.555594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.075 qpair failed and we were unable to recover it. 00:37:28.075 [2024-09-29 16:45:28.555740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.075 [2024-09-29 16:45:28.555775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.075 qpair failed and we were unable to recover it. 
00:37:28.075 [2024-09-29 16:45:28.555887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.075 [2024-09-29 16:45:28.555920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.075 qpair failed and we were unable to recover it. 00:37:28.075 [2024-09-29 16:45:28.556039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.075 [2024-09-29 16:45:28.556081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.075 qpair failed and we were unable to recover it. 00:37:28.075 [2024-09-29 16:45:28.556206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.075 [2024-09-29 16:45:28.556239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.075 qpair failed and we were unable to recover it. 00:37:28.075 [2024-09-29 16:45:28.556360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.075 [2024-09-29 16:45:28.556395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.075 qpair failed and we were unable to recover it. 00:37:28.075 [2024-09-29 16:45:28.556539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.075 [2024-09-29 16:45:28.556573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.075 qpair failed and we were unable to recover it. 
00:37:28.075 [2024-09-29 16:45:28.556696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.075 [2024-09-29 16:45:28.556732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.075 qpair failed and we were unable to recover it. 00:37:28.075 [2024-09-29 16:45:28.556858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.075 [2024-09-29 16:45:28.556891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.075 qpair failed and we were unable to recover it. 00:37:28.075 [2024-09-29 16:45:28.557031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.075 [2024-09-29 16:45:28.557064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.075 qpair failed and we were unable to recover it. 00:37:28.075 [2024-09-29 16:45:28.557206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.075 [2024-09-29 16:45:28.557239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.075 qpair failed and we were unable to recover it. 00:37:28.075 [2024-09-29 16:45:28.557381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.075 [2024-09-29 16:45:28.557416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.075 qpair failed and we were unable to recover it. 
00:37:28.075 [2024-09-29 16:45:28.557534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.075 [2024-09-29 16:45:28.557568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.075 qpair failed and we were unable to recover it. 00:37:28.075 [2024-09-29 16:45:28.557713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.075 [2024-09-29 16:45:28.557748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.075 qpair failed and we were unable to recover it. 00:37:28.075 [2024-09-29 16:45:28.557891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.075 [2024-09-29 16:45:28.557925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.075 qpair failed and we were unable to recover it. 00:37:28.075 [2024-09-29 16:45:28.558070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.075 [2024-09-29 16:45:28.558104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.075 qpair failed and we were unable to recover it. 00:37:28.075 [2024-09-29 16:45:28.558249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.075 [2024-09-29 16:45:28.558288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.075 qpair failed and we were unable to recover it. 
00:37:28.075 [2024-09-29 16:45:28.558448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.075 [2024-09-29 16:45:28.558486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.075 qpair failed and we were unable to recover it. 00:37:28.075 [2024-09-29 16:45:28.558647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.075 [2024-09-29 16:45:28.558692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.075 qpair failed and we were unable to recover it. 00:37:28.075 [2024-09-29 16:45:28.558839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.075 [2024-09-29 16:45:28.558876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.075 qpair failed and we were unable to recover it. 00:37:28.075 [2024-09-29 16:45:28.559028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.075 [2024-09-29 16:45:28.559064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.075 qpair failed and we were unable to recover it. 00:37:28.075 [2024-09-29 16:45:28.559182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.075 [2024-09-29 16:45:28.559220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.075 qpair failed and we were unable to recover it. 
00:37:28.075 [2024-09-29 16:45:28.559414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.075 [2024-09-29 16:45:28.559450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.075 qpair failed and we were unable to recover it. 00:37:28.075 [2024-09-29 16:45:28.559589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.075 [2024-09-29 16:45:28.559624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.075 qpair failed and we were unable to recover it. 00:37:28.075 [2024-09-29 16:45:28.559779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.075 [2024-09-29 16:45:28.559814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.075 qpair failed and we were unable to recover it. 00:37:28.075 [2024-09-29 16:45:28.559987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.075 [2024-09-29 16:45:28.560042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.075 qpair failed and we were unable to recover it. 00:37:28.075 [2024-09-29 16:45:28.560185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.075 [2024-09-29 16:45:28.560239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.075 qpair failed and we were unable to recover it. 
00:37:28.075 [2024-09-29 16:45:28.560466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.075 [2024-09-29 16:45:28.560501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.075 qpair failed and we were unable to recover it. 00:37:28.075 [2024-09-29 16:45:28.560660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.075 [2024-09-29 16:45:28.560720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.075 qpair failed and we were unable to recover it. 00:37:28.075 [2024-09-29 16:45:28.560852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.075 [2024-09-29 16:45:28.560905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.075 qpair failed and we were unable to recover it. 00:37:28.075 [2024-09-29 16:45:28.561044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.075 [2024-09-29 16:45:28.561097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.075 qpair failed and we were unable to recover it. 00:37:28.075 [2024-09-29 16:45:28.561240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.075 [2024-09-29 16:45:28.561272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.075 qpair failed and we were unable to recover it. 
00:37:28.075 [2024-09-29 16:45:28.561415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.075 [2024-09-29 16:45:28.561448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.075 qpair failed and we were unable to recover it. 00:37:28.075 [2024-09-29 16:45:28.561560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.075 [2024-09-29 16:45:28.561594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.075 qpair failed and we were unable to recover it. 00:37:28.075 [2024-09-29 16:45:28.561755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.075 [2024-09-29 16:45:28.561789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.075 qpair failed and we were unable to recover it. 00:37:28.075 [2024-09-29 16:45:28.561912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.075 [2024-09-29 16:45:28.561960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.075 qpair failed and we were unable to recover it. 00:37:28.075 [2024-09-29 16:45:28.562102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.075 [2024-09-29 16:45:28.562136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.075 qpair failed and we were unable to recover it. 
00:37:28.075 [2024-09-29 16:45:28.562273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.076 [2024-09-29 16:45:28.562307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.076 qpair failed and we were unable to recover it. 00:37:28.076 [2024-09-29 16:45:28.562422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.076 [2024-09-29 16:45:28.562459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.076 qpair failed and we were unable to recover it. 00:37:28.076 [2024-09-29 16:45:28.562567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.076 [2024-09-29 16:45:28.562600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.076 qpair failed and we were unable to recover it. 00:37:28.076 [2024-09-29 16:45:28.562746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.076 [2024-09-29 16:45:28.562794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.076 qpair failed and we were unable to recover it. 00:37:28.076 [2024-09-29 16:45:28.562922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.076 [2024-09-29 16:45:28.562959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.076 qpair failed and we were unable to recover it. 
00:37:28.076 [2024-09-29 16:45:28.563104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.076 [2024-09-29 16:45:28.563139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.076 qpair failed and we were unable to recover it. 00:37:28.076 [2024-09-29 16:45:28.563308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.076 [2024-09-29 16:45:28.563342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.076 qpair failed and we were unable to recover it. 00:37:28.076 [2024-09-29 16:45:28.563482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.076 [2024-09-29 16:45:28.563530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.076 qpair failed and we were unable to recover it. 00:37:28.076 [2024-09-29 16:45:28.563655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.076 [2024-09-29 16:45:28.563699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.076 qpair failed and we were unable to recover it. 00:37:28.076 [2024-09-29 16:45:28.563907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.076 [2024-09-29 16:45:28.563963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.076 qpair failed and we were unable to recover it. 
00:37:28.076 [2024-09-29 16:45:28.564107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.076 [2024-09-29 16:45:28.564163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.076 qpair failed and we were unable to recover it. 00:37:28.076 [2024-09-29 16:45:28.564361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.076 [2024-09-29 16:45:28.564422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.076 qpair failed and we were unable to recover it. 00:37:28.076 [2024-09-29 16:45:28.564527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.076 [2024-09-29 16:45:28.564560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.076 qpair failed and we were unable to recover it. 00:37:28.076 [2024-09-29 16:45:28.564732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.076 [2024-09-29 16:45:28.564780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.076 qpair failed and we were unable to recover it. 00:37:28.076 [2024-09-29 16:45:28.564943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.076 [2024-09-29 16:45:28.564997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.076 qpair failed and we were unable to recover it. 
00:37:28.076 [2024-09-29 16:45:28.565188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.076 [2024-09-29 16:45:28.565254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.076 qpair failed and we were unable to recover it.
00:37:28.076 [2024-09-29 16:45:28.565431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.076 [2024-09-29 16:45:28.565486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.076 qpair failed and we were unable to recover it.
00:37:28.076 [2024-09-29 16:45:28.565639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.076 [2024-09-29 16:45:28.565685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.076 qpair failed and we were unable to recover it.
00:37:28.076 [2024-09-29 16:45:28.565830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.076 [2024-09-29 16:45:28.565866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.076 qpair failed and we were unable to recover it.
00:37:28.076 [2024-09-29 16:45:28.566005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.076 [2024-09-29 16:45:28.566057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.076 qpair failed and we were unable to recover it.
00:37:28.076 [2024-09-29 16:45:28.566201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.076 [2024-09-29 16:45:28.566253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.076 qpair failed and we were unable to recover it.
00:37:28.076 [2024-09-29 16:45:28.566373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.076 [2024-09-29 16:45:28.566405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.076 qpair failed and we were unable to recover it.
00:37:28.076 [2024-09-29 16:45:28.566544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.076 [2024-09-29 16:45:28.566578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.076 qpair failed and we were unable to recover it.
00:37:28.076 [2024-09-29 16:45:28.566721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.076 [2024-09-29 16:45:28.566756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.076 qpair failed and we were unable to recover it.
00:37:28.076 [2024-09-29 16:45:28.566879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.076 [2024-09-29 16:45:28.566913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.076 qpair failed and we were unable to recover it.
00:37:28.076 [2024-09-29 16:45:28.567073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.076 [2024-09-29 16:45:28.567114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.076 qpair failed and we were unable to recover it.
00:37:28.076 [2024-09-29 16:45:28.567258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.076 [2024-09-29 16:45:28.567292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.076 qpair failed and we were unable to recover it.
00:37:28.076 [2024-09-29 16:45:28.567432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.076 [2024-09-29 16:45:28.567465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.076 qpair failed and we were unable to recover it.
00:37:28.076 [2024-09-29 16:45:28.567649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.076 [2024-09-29 16:45:28.567692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.076 qpair failed and we were unable to recover it.
00:37:28.076 [2024-09-29 16:45:28.567875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.076 [2024-09-29 16:45:28.567928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.076 qpair failed and we were unable to recover it.
00:37:28.076 [2024-09-29 16:45:28.568106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.076 [2024-09-29 16:45:28.568167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.076 qpair failed and we were unable to recover it.
00:37:28.076 [2024-09-29 16:45:28.568374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.076 [2024-09-29 16:45:28.568447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.076 qpair failed and we were unable to recover it.
00:37:28.076 [2024-09-29 16:45:28.568587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.076 [2024-09-29 16:45:28.568621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.076 qpair failed and we were unable to recover it.
00:37:28.076 [2024-09-29 16:45:28.568758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.076 [2024-09-29 16:45:28.568811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.076 qpair failed and we were unable to recover it.
00:37:28.076 [2024-09-29 16:45:28.568940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.077 [2024-09-29 16:45:28.568990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.077 qpair failed and we were unable to recover it.
00:37:28.077 [2024-09-29 16:45:28.569194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.077 [2024-09-29 16:45:28.569245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.077 qpair failed and we were unable to recover it.
00:37:28.077 [2024-09-29 16:45:28.569381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.077 [2024-09-29 16:45:28.569416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.077 qpair failed and we were unable to recover it.
00:37:28.077 [2024-09-29 16:45:28.569537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.077 [2024-09-29 16:45:28.569572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.077 qpair failed and we were unable to recover it.
00:37:28.077 [2024-09-29 16:45:28.569709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.077 [2024-09-29 16:45:28.569742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.077 qpair failed and we were unable to recover it.
00:37:28.077 [2024-09-29 16:45:28.569863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.077 [2024-09-29 16:45:28.569898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.077 qpair failed and we were unable to recover it.
00:37:28.077 [2024-09-29 16:45:28.570001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.077 [2024-09-29 16:45:28.570034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.077 qpair failed and we were unable to recover it.
00:37:28.077 [2024-09-29 16:45:28.570144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.077 [2024-09-29 16:45:28.570182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.077 qpair failed and we were unable to recover it.
00:37:28.077 [2024-09-29 16:45:28.570319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.077 [2024-09-29 16:45:28.570366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.077 qpair failed and we were unable to recover it.
00:37:28.077 [2024-09-29 16:45:28.570516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.077 [2024-09-29 16:45:28.570552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.077 qpair failed and we were unable to recover it.
00:37:28.077 [2024-09-29 16:45:28.570679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.077 [2024-09-29 16:45:28.570715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.077 qpair failed and we were unable to recover it.
00:37:28.077 [2024-09-29 16:45:28.570845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.077 [2024-09-29 16:45:28.570879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.077 qpair failed and we were unable to recover it.
00:37:28.077 [2024-09-29 16:45:28.571032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.077 [2024-09-29 16:45:28.571067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.077 qpair failed and we were unable to recover it.
00:37:28.077 [2024-09-29 16:45:28.571213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.077 [2024-09-29 16:45:28.571247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.077 qpair failed and we were unable to recover it.
00:37:28.077 [2024-09-29 16:45:28.571387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.077 [2024-09-29 16:45:28.571440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.077 qpair failed and we were unable to recover it.
00:37:28.077 [2024-09-29 16:45:28.571562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.077 [2024-09-29 16:45:28.571596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.077 qpair failed and we were unable to recover it.
00:37:28.077 [2024-09-29 16:45:28.571708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.077 [2024-09-29 16:45:28.571742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.077 qpair failed and we were unable to recover it.
00:37:28.077 [2024-09-29 16:45:28.571903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.077 [2024-09-29 16:45:28.571962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.077 qpair failed and we were unable to recover it.
00:37:28.077 [2024-09-29 16:45:28.572122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.077 [2024-09-29 16:45:28.572174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.077 qpair failed and we were unable to recover it.
00:37:28.077 [2024-09-29 16:45:28.572316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.077 [2024-09-29 16:45:28.572349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.077 qpair failed and we were unable to recover it.
00:37:28.077 [2024-09-29 16:45:28.572497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.077 [2024-09-29 16:45:28.572532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.077 qpair failed and we were unable to recover it.
00:37:28.077 [2024-09-29 16:45:28.572727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.077 [2024-09-29 16:45:28.572767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.077 qpair failed and we were unable to recover it.
00:37:28.077 [2024-09-29 16:45:28.572901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.077 [2024-09-29 16:45:28.572939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.077 qpair failed and we were unable to recover it.
00:37:28.077 [2024-09-29 16:45:28.573101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.077 [2024-09-29 16:45:28.573139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.077 qpair failed and we were unable to recover it.
00:37:28.077 [2024-09-29 16:45:28.573300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.077 [2024-09-29 16:45:28.573338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.077 qpair failed and we were unable to recover it.
00:37:28.077 [2024-09-29 16:45:28.573477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.077 [2024-09-29 16:45:28.573512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.077 qpair failed and we were unable to recover it.
00:37:28.077 [2024-09-29 16:45:28.573630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.077 [2024-09-29 16:45:28.573664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.077 qpair failed and we were unable to recover it.
00:37:28.077 [2024-09-29 16:45:28.573789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.077 [2024-09-29 16:45:28.573824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.077 qpair failed and we were unable to recover it.
00:37:28.077 [2024-09-29 16:45:28.573982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.077 [2024-09-29 16:45:28.574050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.077 qpair failed and we were unable to recover it.
00:37:28.077 [2024-09-29 16:45:28.574324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.077 [2024-09-29 16:45:28.574374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.077 qpair failed and we were unable to recover it.
00:37:28.360 [2024-09-29 16:45:28.574540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.360 [2024-09-29 16:45:28.574579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.360 qpair failed and we were unable to recover it.
00:37:28.360 [2024-09-29 16:45:28.574755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.360 [2024-09-29 16:45:28.574791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.360 qpair failed and we were unable to recover it.
00:37:28.360 [2024-09-29 16:45:28.574915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.360 [2024-09-29 16:45:28.574949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.360 qpair failed and we were unable to recover it.
00:37:28.360 [2024-09-29 16:45:28.575116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.360 [2024-09-29 16:45:28.575180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.360 qpair failed and we were unable to recover it.
00:37:28.360 [2024-09-29 16:45:28.575342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.360 [2024-09-29 16:45:28.575382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.360 qpair failed and we were unable to recover it.
00:37:28.360 [2024-09-29 16:45:28.575532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.360 [2024-09-29 16:45:28.575579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.360 qpair failed and we were unable to recover it.
00:37:28.360 [2024-09-29 16:45:28.575734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.360 [2024-09-29 16:45:28.575771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.360 qpair failed and we were unable to recover it.
00:37:28.360 [2024-09-29 16:45:28.575915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.360 [2024-09-29 16:45:28.575971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.360 qpair failed and we were unable to recover it.
00:37:28.360 [2024-09-29 16:45:28.576112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.360 [2024-09-29 16:45:28.576167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.360 qpair failed and we were unable to recover it.
00:37:28.360 [2024-09-29 16:45:28.576290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.360 [2024-09-29 16:45:28.576326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.360 qpair failed and we were unable to recover it.
00:37:28.360 [2024-09-29 16:45:28.576446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.360 [2024-09-29 16:45:28.576481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.360 qpair failed and we were unable to recover it.
00:37:28.360 [2024-09-29 16:45:28.576632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.360 [2024-09-29 16:45:28.576667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.360 qpair failed and we were unable to recover it.
00:37:28.361 [2024-09-29 16:45:28.576822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.361 [2024-09-29 16:45:28.576855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.361 qpair failed and we were unable to recover it.
00:37:28.361 [2024-09-29 16:45:28.576970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.361 [2024-09-29 16:45:28.577004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.361 qpair failed and we were unable to recover it.
00:37:28.361 [2024-09-29 16:45:28.577181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.361 [2024-09-29 16:45:28.577219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.361 qpair failed and we were unable to recover it.
00:37:28.361 [2024-09-29 16:45:28.577408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.361 [2024-09-29 16:45:28.577470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.361 qpair failed and we were unable to recover it.
00:37:28.361 [2024-09-29 16:45:28.577611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.361 [2024-09-29 16:45:28.577644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.361 qpair failed and we were unable to recover it.
00:37:28.361 [2024-09-29 16:45:28.577788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.361 [2024-09-29 16:45:28.577841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.361 qpair failed and we were unable to recover it.
00:37:28.361 [2024-09-29 16:45:28.577994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.361 [2024-09-29 16:45:28.578034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.361 qpair failed and we were unable to recover it.
00:37:28.361 [2024-09-29 16:45:28.578211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.361 [2024-09-29 16:45:28.578286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.361 qpair failed and we were unable to recover it.
00:37:28.361 [2024-09-29 16:45:28.578525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.361 [2024-09-29 16:45:28.578585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.361 qpair failed and we were unable to recover it.
00:37:28.361 [2024-09-29 16:45:28.578739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.361 [2024-09-29 16:45:28.578774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.361 qpair failed and we were unable to recover it.
00:37:28.361 [2024-09-29 16:45:28.578888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.361 [2024-09-29 16:45:28.578922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.361 qpair failed and we were unable to recover it.
00:37:28.361 [2024-09-29 16:45:28.579085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.361 [2024-09-29 16:45:28.579122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.361 qpair failed and we were unable to recover it.
00:37:28.361 [2024-09-29 16:45:28.579315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.361 [2024-09-29 16:45:28.579374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.361 qpair failed and we were unable to recover it.
00:37:28.361 [2024-09-29 16:45:28.579509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.361 [2024-09-29 16:45:28.579547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.361 qpair failed and we were unable to recover it.
00:37:28.361 [2024-09-29 16:45:28.579691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.361 [2024-09-29 16:45:28.579725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.361 qpair failed and we were unable to recover it.
00:37:28.361 [2024-09-29 16:45:28.579858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.361 [2024-09-29 16:45:28.579896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.361 qpair failed and we were unable to recover it.
00:37:28.361 [2024-09-29 16:45:28.580030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.361 [2024-09-29 16:45:28.580068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.361 qpair failed and we were unable to recover it.
00:37:28.361 [2024-09-29 16:45:28.580199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.361 [2024-09-29 16:45:28.580238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.361 qpair failed and we were unable to recover it.
00:37:28.361 [2024-09-29 16:45:28.580358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.361 [2024-09-29 16:45:28.580397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.361 qpair failed and we were unable to recover it.
00:37:28.361 [2024-09-29 16:45:28.580576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.361 [2024-09-29 16:45:28.580612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.361 qpair failed and we were unable to recover it.
00:37:28.361 [2024-09-29 16:45:28.580868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.361 [2024-09-29 16:45:28.580916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.361 qpair failed and we were unable to recover it.
00:37:28.361 [2024-09-29 16:45:28.581092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.361 [2024-09-29 16:45:28.581131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.361 qpair failed and we were unable to recover it.
00:37:28.361 [2024-09-29 16:45:28.581316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.361 [2024-09-29 16:45:28.581353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.361 qpair failed and we were unable to recover it.
00:37:28.361 [2024-09-29 16:45:28.581506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.361 [2024-09-29 16:45:28.581544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.361 qpair failed and we were unable to recover it.
00:37:28.361 [2024-09-29 16:45:28.581691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.361 [2024-09-29 16:45:28.581744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.361 qpair failed and we were unable to recover it.
00:37:28.361 [2024-09-29 16:45:28.581874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.361 [2024-09-29 16:45:28.581908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.361 qpair failed and we were unable to recover it.
00:37:28.361 [2024-09-29 16:45:28.582033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.361 [2024-09-29 16:45:28.582071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.361 qpair failed and we were unable to recover it.
00:37:28.361 [2024-09-29 16:45:28.582201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.361 [2024-09-29 16:45:28.582238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.361 qpair failed and we were unable to recover it.
00:37:28.361 [2024-09-29 16:45:28.582472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.361 [2024-09-29 16:45:28.582509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.361 qpair failed and we were unable to recover it.
00:37:28.361 [2024-09-29 16:45:28.582686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.361 [2024-09-29 16:45:28.582754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.361 qpair failed and we were unable to recover it.
00:37:28.361 [2024-09-29 16:45:28.582901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.361 [2024-09-29 16:45:28.582948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.361 qpair failed and we were unable to recover it.
00:37:28.361 [2024-09-29 16:45:28.583160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.361 [2024-09-29 16:45:28.583197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.361 qpair failed and we were unable to recover it.
00:37:28.361 [2024-09-29 16:45:28.583340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.361 [2024-09-29 16:45:28.583374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.361 qpair failed and we were unable to recover it.
00:37:28.361 [2024-09-29 16:45:28.583488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.361 [2024-09-29 16:45:28.583522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.361 qpair failed and we were unable to recover it.
00:37:28.361 [2024-09-29 16:45:28.583635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.361 [2024-09-29 16:45:28.583668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.361 qpair failed and we were unable to recover it. 00:37:28.361 [2024-09-29 16:45:28.583817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.361 [2024-09-29 16:45:28.583851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.361 qpair failed and we were unable to recover it. 00:37:28.361 [2024-09-29 16:45:28.584012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.361 [2024-09-29 16:45:28.584050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.361 qpair failed and we were unable to recover it. 00:37:28.361 [2024-09-29 16:45:28.584207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.362 [2024-09-29 16:45:28.584244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.362 qpair failed and we were unable to recover it. 00:37:28.362 [2024-09-29 16:45:28.584386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.362 [2024-09-29 16:45:28.584438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.362 qpair failed and we were unable to recover it. 
00:37:28.362 [2024-09-29 16:45:28.584573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.362 [2024-09-29 16:45:28.584611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.362 qpair failed and we were unable to recover it. 00:37:28.362 [2024-09-29 16:45:28.584782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.362 [2024-09-29 16:45:28.584816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.362 qpair failed and we were unable to recover it. 00:37:28.362 [2024-09-29 16:45:28.584989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.362 [2024-09-29 16:45:28.585025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.362 qpair failed and we were unable to recover it. 00:37:28.362 [2024-09-29 16:45:28.585180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.362 [2024-09-29 16:45:28.585217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.362 qpair failed and we were unable to recover it. 00:37:28.362 [2024-09-29 16:45:28.585351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.362 [2024-09-29 16:45:28.585387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.362 qpair failed and we were unable to recover it. 
00:37:28.362 [2024-09-29 16:45:28.585541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.362 [2024-09-29 16:45:28.585577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.362 qpair failed and we were unable to recover it. 00:37:28.362 [2024-09-29 16:45:28.585701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.362 [2024-09-29 16:45:28.585759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.362 qpair failed and we were unable to recover it. 00:37:28.362 [2024-09-29 16:45:28.585918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.362 [2024-09-29 16:45:28.585983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.362 qpair failed and we were unable to recover it. 00:37:28.362 [2024-09-29 16:45:28.586274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.362 [2024-09-29 16:45:28.586332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.362 qpair failed and we were unable to recover it. 00:37:28.362 [2024-09-29 16:45:28.586575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.362 [2024-09-29 16:45:28.586633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.362 qpair failed and we were unable to recover it. 
00:37:28.362 [2024-09-29 16:45:28.586808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.362 [2024-09-29 16:45:28.586842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.362 qpair failed and we were unable to recover it. 00:37:28.362 [2024-09-29 16:45:28.586987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.362 [2024-09-29 16:45:28.587020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.362 qpair failed and we were unable to recover it. 00:37:28.362 [2024-09-29 16:45:28.587144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.362 [2024-09-29 16:45:28.587181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.362 qpair failed and we were unable to recover it. 00:37:28.362 [2024-09-29 16:45:28.587339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.362 [2024-09-29 16:45:28.587397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.362 qpair failed and we were unable to recover it. 00:37:28.362 [2024-09-29 16:45:28.587585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.362 [2024-09-29 16:45:28.587622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.362 qpair failed and we were unable to recover it. 
00:37:28.362 [2024-09-29 16:45:28.587770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.362 [2024-09-29 16:45:28.587804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.362 qpair failed and we were unable to recover it. 00:37:28.362 [2024-09-29 16:45:28.587945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.362 [2024-09-29 16:45:28.587999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.362 qpair failed and we were unable to recover it. 00:37:28.362 [2024-09-29 16:45:28.588214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.362 [2024-09-29 16:45:28.588252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.362 qpair failed and we were unable to recover it. 00:37:28.362 [2024-09-29 16:45:28.588383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.362 [2024-09-29 16:45:28.588422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.362 qpair failed and we were unable to recover it. 00:37:28.362 [2024-09-29 16:45:28.588574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.362 [2024-09-29 16:45:28.588613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.362 qpair failed and we were unable to recover it. 
00:37:28.362 [2024-09-29 16:45:28.588786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.362 [2024-09-29 16:45:28.588835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.362 qpair failed and we were unable to recover it. 00:37:28.362 [2024-09-29 16:45:28.588993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.362 [2024-09-29 16:45:28.589030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.362 qpair failed and we were unable to recover it. 00:37:28.362 [2024-09-29 16:45:28.589184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.362 [2024-09-29 16:45:28.589241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.362 qpair failed and we were unable to recover it. 00:37:28.362 [2024-09-29 16:45:28.589374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.362 [2024-09-29 16:45:28.589426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.362 qpair failed and we were unable to recover it. 00:37:28.362 [2024-09-29 16:45:28.589551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.362 [2024-09-29 16:45:28.589585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.362 qpair failed and we were unable to recover it. 
00:37:28.362 [2024-09-29 16:45:28.589728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.362 [2024-09-29 16:45:28.589762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.362 qpair failed and we were unable to recover it. 00:37:28.362 [2024-09-29 16:45:28.589877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.362 [2024-09-29 16:45:28.589911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.362 qpair failed and we were unable to recover it. 00:37:28.362 [2024-09-29 16:45:28.590025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.362 [2024-09-29 16:45:28.590060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.362 qpair failed and we were unable to recover it. 00:37:28.362 [2024-09-29 16:45:28.590206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.362 [2024-09-29 16:45:28.590239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.362 qpair failed and we were unable to recover it. 00:37:28.362 [2024-09-29 16:45:28.590389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.362 [2024-09-29 16:45:28.590421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.362 qpair failed and we were unable to recover it. 
00:37:28.362 [2024-09-29 16:45:28.590587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.362 [2024-09-29 16:45:28.590636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.362 qpair failed and we were unable to recover it. 00:37:28.362 [2024-09-29 16:45:28.590777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.362 [2024-09-29 16:45:28.590825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.362 qpair failed and we were unable to recover it. 00:37:28.362 [2024-09-29 16:45:28.590968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.362 [2024-09-29 16:45:28.591016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.362 qpair failed and we were unable to recover it. 00:37:28.362 [2024-09-29 16:45:28.591188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.362 [2024-09-29 16:45:28.591242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.362 qpair failed and we were unable to recover it. 00:37:28.362 [2024-09-29 16:45:28.591385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.362 [2024-09-29 16:45:28.591420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.362 qpair failed and we were unable to recover it. 
00:37:28.362 [2024-09-29 16:45:28.591539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.362 [2024-09-29 16:45:28.591573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.362 qpair failed and we were unable to recover it. 00:37:28.362 [2024-09-29 16:45:28.591747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.362 [2024-09-29 16:45:28.591782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.363 qpair failed and we were unable to recover it. 00:37:28.363 [2024-09-29 16:45:28.591918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.363 [2024-09-29 16:45:28.591951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.363 qpair failed and we were unable to recover it. 00:37:28.363 [2024-09-29 16:45:28.592064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.363 [2024-09-29 16:45:28.592099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.363 qpair failed and we were unable to recover it. 00:37:28.363 [2024-09-29 16:45:28.592242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.363 [2024-09-29 16:45:28.592276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.363 qpair failed and we were unable to recover it. 
00:37:28.363 [2024-09-29 16:45:28.592398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.363 [2024-09-29 16:45:28.592431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.363 qpair failed and we were unable to recover it. 00:37:28.363 [2024-09-29 16:45:28.592550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.363 [2024-09-29 16:45:28.592584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.363 qpair failed and we were unable to recover it. 00:37:28.363 [2024-09-29 16:45:28.592742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.363 [2024-09-29 16:45:28.592790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.363 qpair failed and we were unable to recover it. 00:37:28.363 [2024-09-29 16:45:28.592909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.363 [2024-09-29 16:45:28.592944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.363 qpair failed and we were unable to recover it. 00:37:28.363 [2024-09-29 16:45:28.593060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.363 [2024-09-29 16:45:28.593097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.363 qpair failed and we were unable to recover it. 
00:37:28.363 [2024-09-29 16:45:28.593269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.363 [2024-09-29 16:45:28.593303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.363 qpair failed and we were unable to recover it. 00:37:28.363 [2024-09-29 16:45:28.593434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.363 [2024-09-29 16:45:28.593472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.363 qpair failed and we were unable to recover it. 00:37:28.363 [2024-09-29 16:45:28.593618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.363 [2024-09-29 16:45:28.593651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.363 qpair failed and we were unable to recover it. 00:37:28.363 [2024-09-29 16:45:28.593792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.363 [2024-09-29 16:45:28.593846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.363 qpair failed and we were unable to recover it. 00:37:28.363 [2024-09-29 16:45:28.593985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.363 [2024-09-29 16:45:28.594039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.363 qpair failed and we were unable to recover it. 
00:37:28.363 [2024-09-29 16:45:28.594173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.363 [2024-09-29 16:45:28.594210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.363 qpair failed and we were unable to recover it. 00:37:28.363 [2024-09-29 16:45:28.594327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.363 [2024-09-29 16:45:28.594361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.363 qpair failed and we were unable to recover it. 00:37:28.363 [2024-09-29 16:45:28.594492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.363 [2024-09-29 16:45:28.594540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.363 qpair failed and we were unable to recover it. 00:37:28.363 [2024-09-29 16:45:28.594659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.363 [2024-09-29 16:45:28.594723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.363 qpair failed and we were unable to recover it. 00:37:28.363 [2024-09-29 16:45:28.594896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.363 [2024-09-29 16:45:28.594960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.363 qpair failed and we were unable to recover it. 
00:37:28.363 [2024-09-29 16:45:28.595138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.363 [2024-09-29 16:45:28.595193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.363 qpair failed and we were unable to recover it. 00:37:28.363 [2024-09-29 16:45:28.595354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.363 [2024-09-29 16:45:28.595407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.363 qpair failed and we were unable to recover it. 00:37:28.363 [2024-09-29 16:45:28.595540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.363 [2024-09-29 16:45:28.595573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.363 qpair failed and we were unable to recover it. 00:37:28.363 [2024-09-29 16:45:28.595736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.363 [2024-09-29 16:45:28.595800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.363 qpair failed and we were unable to recover it. 00:37:28.363 [2024-09-29 16:45:28.595990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.363 [2024-09-29 16:45:28.596042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.363 qpair failed and we were unable to recover it. 
00:37:28.363 [2024-09-29 16:45:28.596246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.363 [2024-09-29 16:45:28.596298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.363 qpair failed and we were unable to recover it. 00:37:28.363 [2024-09-29 16:45:28.596482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.363 [2024-09-29 16:45:28.596515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.363 qpair failed and we were unable to recover it. 00:37:28.363 [2024-09-29 16:45:28.596625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.363 [2024-09-29 16:45:28.596659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.363 qpair failed and we were unable to recover it. 00:37:28.363 [2024-09-29 16:45:28.596826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.363 [2024-09-29 16:45:28.596878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.363 qpair failed and we were unable to recover it. 00:37:28.363 [2024-09-29 16:45:28.597038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.363 [2024-09-29 16:45:28.597095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.363 qpair failed and we were unable to recover it. 
00:37:28.363 [2024-09-29 16:45:28.597291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.363 [2024-09-29 16:45:28.597349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.363 qpair failed and we were unable to recover it. 00:37:28.363 [2024-09-29 16:45:28.597469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.363 [2024-09-29 16:45:28.597503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.363 qpair failed and we were unable to recover it. 00:37:28.363 [2024-09-29 16:45:28.597668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.363 [2024-09-29 16:45:28.597708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.363 qpair failed and we were unable to recover it. 00:37:28.363 [2024-09-29 16:45:28.597869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.363 [2024-09-29 16:45:28.597929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.363 qpair failed and we were unable to recover it. 00:37:28.363 [2024-09-29 16:45:28.598069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.363 [2024-09-29 16:45:28.598105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.363 qpair failed and we were unable to recover it. 
00:37:28.363 [2024-09-29 16:45:28.598260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.363 [2024-09-29 16:45:28.598293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.363 qpair failed and we were unable to recover it. 00:37:28.363 [2024-09-29 16:45:28.598408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.363 [2024-09-29 16:45:28.598442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.363 qpair failed and we were unable to recover it. 00:37:28.363 [2024-09-29 16:45:28.598546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.363 [2024-09-29 16:45:28.598579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.363 qpair failed and we were unable to recover it. 00:37:28.363 [2024-09-29 16:45:28.598745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.363 [2024-09-29 16:45:28.598793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.363 qpair failed and we were unable to recover it. 00:37:28.363 [2024-09-29 16:45:28.598913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.363 [2024-09-29 16:45:28.598949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.363 qpair failed and we were unable to recover it. 
00:37:28.364 [2024-09-29 16:45:28.599070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.364 [2024-09-29 16:45:28.599103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.364 qpair failed and we were unable to recover it. 00:37:28.364 [2024-09-29 16:45:28.599260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.364 [2024-09-29 16:45:28.599293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.364 qpair failed and we were unable to recover it. 00:37:28.364 [2024-09-29 16:45:28.599415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.364 [2024-09-29 16:45:28.599449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.364 qpair failed and we were unable to recover it. 00:37:28.364 [2024-09-29 16:45:28.599562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.364 [2024-09-29 16:45:28.599594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.364 qpair failed and we were unable to recover it. 00:37:28.364 [2024-09-29 16:45:28.599730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.364 [2024-09-29 16:45:28.599770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.364 qpair failed and we were unable to recover it. 
00:37:28.364 [2024-09-29 16:45:28.599964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.364 [2024-09-29 16:45:28.600016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.364 qpair failed and we were unable to recover it. 00:37:28.364 [2024-09-29 16:45:28.600167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.364 [2024-09-29 16:45:28.600207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.364 qpair failed and we were unable to recover it. 00:37:28.364 [2024-09-29 16:45:28.600390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.364 [2024-09-29 16:45:28.600429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.364 qpair failed and we were unable to recover it. 00:37:28.364 [2024-09-29 16:45:28.600567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.364 [2024-09-29 16:45:28.600601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.364 qpair failed and we were unable to recover it. 00:37:28.364 [2024-09-29 16:45:28.600776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.364 [2024-09-29 16:45:28.600811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.364 qpair failed and we were unable to recover it. 
00:37:28.364 [2024-09-29 16:45:28.600940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.364 [2024-09-29 16:45:28.600976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.364 qpair failed and we were unable to recover it. 00:37:28.364 [2024-09-29 16:45:28.601149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.364 [2024-09-29 16:45:28.601192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.364 qpair failed and we were unable to recover it. 00:37:28.364 [2024-09-29 16:45:28.601351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.364 [2024-09-29 16:45:28.601388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.364 qpair failed and we were unable to recover it. 00:37:28.364 [2024-09-29 16:45:28.601543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.364 [2024-09-29 16:45:28.601580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.364 qpair failed and we were unable to recover it. 00:37:28.364 [2024-09-29 16:45:28.601720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.364 [2024-09-29 16:45:28.601756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.364 qpair failed and we were unable to recover it. 
00:37:28.364 [2024-09-29 16:45:28.601875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.364 [2024-09-29 16:45:28.601908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.364 qpair failed and we were unable to recover it. 00:37:28.364 [2024-09-29 16:45:28.602074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.364 [2024-09-29 16:45:28.602127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.364 qpair failed and we were unable to recover it. 00:37:28.364 [2024-09-29 16:45:28.602265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.364 [2024-09-29 16:45:28.602332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.364 qpair failed and we were unable to recover it. 00:37:28.364 [2024-09-29 16:45:28.602503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.364 [2024-09-29 16:45:28.602566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.364 qpair failed and we were unable to recover it. 00:37:28.364 [2024-09-29 16:45:28.602723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.364 [2024-09-29 16:45:28.602760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.364 qpair failed and we were unable to recover it. 
00:37:28.364 [2024-09-29 16:45:28.602886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.364 [2024-09-29 16:45:28.602921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.364 qpair failed and we were unable to recover it. 00:37:28.364 [2024-09-29 16:45:28.603102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.364 [2024-09-29 16:45:28.603138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.364 qpair failed and we were unable to recover it. 00:37:28.364 [2024-09-29 16:45:28.603277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.364 [2024-09-29 16:45:28.603330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.364 qpair failed and we were unable to recover it. 00:37:28.364 [2024-09-29 16:45:28.603484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.364 [2024-09-29 16:45:28.603522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.364 qpair failed and we were unable to recover it. 00:37:28.364 [2024-09-29 16:45:28.603680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.364 [2024-09-29 16:45:28.603730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.364 qpair failed and we were unable to recover it. 
00:37:28.364 [2024-09-29 16:45:28.603876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.364 [2024-09-29 16:45:28.603924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.364 qpair failed and we were unable to recover it. 00:37:28.364 [2024-09-29 16:45:28.604125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.364 [2024-09-29 16:45:28.604164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.364 qpair failed and we were unable to recover it. 00:37:28.364 [2024-09-29 16:45:28.604327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.364 [2024-09-29 16:45:28.604364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.364 qpair failed and we were unable to recover it. 00:37:28.364 [2024-09-29 16:45:28.604518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.364 [2024-09-29 16:45:28.604556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.364 qpair failed and we were unable to recover it. 00:37:28.364 [2024-09-29 16:45:28.604717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.364 [2024-09-29 16:45:28.604751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.364 qpair failed and we were unable to recover it. 
00:37:28.364 [2024-09-29 16:45:28.604862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.364 [2024-09-29 16:45:28.604895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.364 qpair failed and we were unable to recover it. 00:37:28.364 [2024-09-29 16:45:28.605082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.364 [2024-09-29 16:45:28.605118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.364 qpair failed and we were unable to recover it. 00:37:28.364 [2024-09-29 16:45:28.605274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.364 [2024-09-29 16:45:28.605312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.364 qpair failed and we were unable to recover it. 00:37:28.364 [2024-09-29 16:45:28.605444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.364 [2024-09-29 16:45:28.605483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.364 qpair failed and we were unable to recover it. 00:37:28.364 [2024-09-29 16:45:28.605650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.364 [2024-09-29 16:45:28.605690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.364 qpair failed and we were unable to recover it. 
00:37:28.364 [2024-09-29 16:45:28.605824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.364 [2024-09-29 16:45:28.605860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.364 qpair failed and we were unable to recover it. 00:37:28.364 [2024-09-29 16:45:28.606011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.364 [2024-09-29 16:45:28.606048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.364 qpair failed and we were unable to recover it. 00:37:28.364 [2024-09-29 16:45:28.606284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.364 [2024-09-29 16:45:28.606321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.364 qpair failed and we were unable to recover it. 00:37:28.364 [2024-09-29 16:45:28.606544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.365 [2024-09-29 16:45:28.606610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.365 qpair failed and we were unable to recover it. 00:37:28.365 [2024-09-29 16:45:28.606750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.365 [2024-09-29 16:45:28.606787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.365 qpair failed and we were unable to recover it. 
00:37:28.365 [2024-09-29 16:45:28.606926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.365 [2024-09-29 16:45:28.606984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.365 qpair failed and we were unable to recover it. 00:37:28.365 [2024-09-29 16:45:28.607117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.365 [2024-09-29 16:45:28.607171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.365 qpair failed and we were unable to recover it. 00:37:28.365 [2024-09-29 16:45:28.607338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.365 [2024-09-29 16:45:28.607391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.365 qpair failed and we were unable to recover it. 00:37:28.365 [2024-09-29 16:45:28.607528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.365 [2024-09-29 16:45:28.607576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.365 qpair failed and we were unable to recover it. 00:37:28.365 [2024-09-29 16:45:28.607779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.365 [2024-09-29 16:45:28.607831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.365 qpair failed and we were unable to recover it. 
00:37:28.365 [2024-09-29 16:45:28.607970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.365 [2024-09-29 16:45:28.608009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.365 qpair failed and we were unable to recover it. 00:37:28.365 [2024-09-29 16:45:28.608139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.365 [2024-09-29 16:45:28.608177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.365 qpair failed and we were unable to recover it. 00:37:28.365 [2024-09-29 16:45:28.608392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.365 [2024-09-29 16:45:28.608456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.365 qpair failed and we were unable to recover it. 00:37:28.365 [2024-09-29 16:45:28.608579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.365 [2024-09-29 16:45:28.608615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.365 qpair failed and we were unable to recover it. 00:37:28.365 [2024-09-29 16:45:28.608781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.365 [2024-09-29 16:45:28.608817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.365 qpair failed and we were unable to recover it. 
00:37:28.365 [2024-09-29 16:45:28.609016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.365 [2024-09-29 16:45:28.609075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.365 qpair failed and we were unable to recover it. 00:37:28.365 [2024-09-29 16:45:28.609254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.365 [2024-09-29 16:45:28.609299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.365 qpair failed and we were unable to recover it. 00:37:28.365 [2024-09-29 16:45:28.609428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.365 [2024-09-29 16:45:28.609469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.365 qpair failed and we were unable to recover it. 00:37:28.365 [2024-09-29 16:45:28.609633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.365 [2024-09-29 16:45:28.609666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.365 qpair failed and we were unable to recover it. 00:37:28.365 [2024-09-29 16:45:28.609820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.365 [2024-09-29 16:45:28.609854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.365 qpair failed and we were unable to recover it. 
00:37:28.365 [2024-09-29 16:45:28.609965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.365 [2024-09-29 16:45:28.610018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.365 qpair failed and we were unable to recover it. 00:37:28.365 [2024-09-29 16:45:28.610246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.365 [2024-09-29 16:45:28.610283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.365 qpair failed and we were unable to recover it. 00:37:28.365 [2024-09-29 16:45:28.610424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.365 [2024-09-29 16:45:28.610475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.365 qpair failed and we were unable to recover it. 00:37:28.365 [2024-09-29 16:45:28.610602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.365 [2024-09-29 16:45:28.610639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.365 qpair failed and we were unable to recover it. 00:37:28.365 [2024-09-29 16:45:28.610810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.365 [2024-09-29 16:45:28.610847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.365 qpair failed and we were unable to recover it. 
00:37:28.365 [2024-09-29 16:45:28.610976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.365 [2024-09-29 16:45:28.611029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.365 qpair failed and we were unable to recover it. 00:37:28.365 [2024-09-29 16:45:28.611174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.365 [2024-09-29 16:45:28.611209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.365 qpair failed and we were unable to recover it. 00:37:28.365 [2024-09-29 16:45:28.611365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.365 [2024-09-29 16:45:28.611399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.365 qpair failed and we were unable to recover it. 00:37:28.365 [2024-09-29 16:45:28.611543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.365 [2024-09-29 16:45:28.611578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.365 qpair failed and we were unable to recover it. 00:37:28.365 [2024-09-29 16:45:28.611710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.365 [2024-09-29 16:45:28.611748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.365 qpair failed and we were unable to recover it. 
00:37:28.365 [2024-09-29 16:45:28.611896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.365 [2024-09-29 16:45:28.611932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.365 qpair failed and we were unable to recover it. 00:37:28.365 [2024-09-29 16:45:28.612092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.365 [2024-09-29 16:45:28.612129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.365 qpair failed and we were unable to recover it. 00:37:28.365 [2024-09-29 16:45:28.612409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.365 [2024-09-29 16:45:28.612472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.365 qpair failed and we were unable to recover it. 00:37:28.365 [2024-09-29 16:45:28.612624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.365 [2024-09-29 16:45:28.612660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.365 qpair failed and we were unable to recover it. 00:37:28.365 [2024-09-29 16:45:28.612828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.365 [2024-09-29 16:45:28.612862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.365 qpair failed and we were unable to recover it. 
00:37:28.366 [2024-09-29 16:45:28.613071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.366 [2024-09-29 16:45:28.613146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.366 qpair failed and we were unable to recover it. 00:37:28.366 [2024-09-29 16:45:28.613399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.366 [2024-09-29 16:45:28.613460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.366 qpair failed and we were unable to recover it. 00:37:28.366 [2024-09-29 16:45:28.613609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.366 [2024-09-29 16:45:28.613644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.366 qpair failed and we were unable to recover it. 00:37:28.366 [2024-09-29 16:45:28.613773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.366 [2024-09-29 16:45:28.613807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.366 qpair failed and we were unable to recover it. 00:37:28.366 [2024-09-29 16:45:28.613928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.366 [2024-09-29 16:45:28.613980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.366 qpair failed and we were unable to recover it. 
00:37:28.366 [2024-09-29 16:45:28.614127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.366 [2024-09-29 16:45:28.614177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.366 qpair failed and we were unable to recover it. 00:37:28.366 [2024-09-29 16:45:28.614358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.366 [2024-09-29 16:45:28.614411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.366 qpair failed and we were unable to recover it. 00:37:28.366 [2024-09-29 16:45:28.614581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.366 [2024-09-29 16:45:28.614615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.366 qpair failed and we were unable to recover it. 00:37:28.366 [2024-09-29 16:45:28.614776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.366 [2024-09-29 16:45:28.614812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.366 qpair failed and we were unable to recover it. 00:37:28.366 [2024-09-29 16:45:28.614948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.366 [2024-09-29 16:45:28.614984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.366 qpair failed and we were unable to recover it. 
00:37:28.366 [2024-09-29 16:45:28.615126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.366 [2024-09-29 16:45:28.615187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.366 qpair failed and we were unable to recover it. 00:37:28.366 [2024-09-29 16:45:28.615312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.366 [2024-09-29 16:45:28.615349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.366 qpair failed and we were unable to recover it. 00:37:28.366 [2024-09-29 16:45:28.615534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.366 [2024-09-29 16:45:28.615589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.366 qpair failed and we were unable to recover it. 00:37:28.366 [2024-09-29 16:45:28.615757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.366 [2024-09-29 16:45:28.615805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.366 qpair failed and we were unable to recover it. 00:37:28.366 [2024-09-29 16:45:28.615943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.366 [2024-09-29 16:45:28.615983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.366 qpair failed and we were unable to recover it. 
00:37:28.366 [2024-09-29 16:45:28.616170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.366 [2024-09-29 16:45:28.616225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.366 qpair failed and we were unable to recover it. 00:37:28.366 [2024-09-29 16:45:28.616446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.366 [2024-09-29 16:45:28.616521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.366 qpair failed and we were unable to recover it. 00:37:28.366 [2024-09-29 16:45:28.616703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.366 [2024-09-29 16:45:28.616748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.366 qpair failed and we were unable to recover it. 00:37:28.366 [2024-09-29 16:45:28.616872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.366 [2024-09-29 16:45:28.616907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.366 qpair failed and we were unable to recover it. 00:37:28.366 [2024-09-29 16:45:28.617042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.366 [2024-09-29 16:45:28.617079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.366 qpair failed and we were unable to recover it. 
00:37:28.366 [2024-09-29 16:45:28.617213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.366 [2024-09-29 16:45:28.617250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.366 qpair failed and we were unable to recover it. 00:37:28.366 [2024-09-29 16:45:28.617387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.366 [2024-09-29 16:45:28.617427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.366 qpair failed and we were unable to recover it. 00:37:28.366 [2024-09-29 16:45:28.617573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.366 [2024-09-29 16:45:28.617610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.366 qpair failed and we were unable to recover it. 00:37:28.366 [2024-09-29 16:45:28.617771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.366 [2024-09-29 16:45:28.617807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.366 qpair failed and we were unable to recover it. 00:37:28.366 [2024-09-29 16:45:28.617946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.366 [2024-09-29 16:45:28.617997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.366 qpair failed and we were unable to recover it. 
00:37:28.366 [2024-09-29 16:45:28.618145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.366 [2024-09-29 16:45:28.618197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.366 qpair failed and we were unable to recover it. 00:37:28.366 [2024-09-29 16:45:28.618339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.366 [2024-09-29 16:45:28.618395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.366 qpair failed and we were unable to recover it. 00:37:28.366 [2024-09-29 16:45:28.618524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.366 [2024-09-29 16:45:28.618560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.366 qpair failed and we were unable to recover it. 00:37:28.366 [2024-09-29 16:45:28.618718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.366 [2024-09-29 16:45:28.618766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.366 qpair failed and we were unable to recover it. 00:37:28.366 [2024-09-29 16:45:28.618908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.366 [2024-09-29 16:45:28.618964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.366 qpair failed and we were unable to recover it. 
00:37:28.366 [2024-09-29 16:45:28.619102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.366 [2024-09-29 16:45:28.619168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.366 qpair failed and we were unable to recover it. 00:37:28.366 [2024-09-29 16:45:28.619386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.366 [2024-09-29 16:45:28.619422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.366 qpair failed and we were unable to recover it. 00:37:28.366 [2024-09-29 16:45:28.619541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.366 [2024-09-29 16:45:28.619578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.366 qpair failed and we were unable to recover it. 00:37:28.366 [2024-09-29 16:45:28.619752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.366 [2024-09-29 16:45:28.619799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.366 qpair failed and we were unable to recover it. 00:37:28.366 [2024-09-29 16:45:28.619970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.366 [2024-09-29 16:45:28.620009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.366 qpair failed and we were unable to recover it. 
00:37:28.366 [2024-09-29 16:45:28.620176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.366 [2024-09-29 16:45:28.620214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.366 qpair failed and we were unable to recover it. 00:37:28.366 [2024-09-29 16:45:28.620348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.366 [2024-09-29 16:45:28.620387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.366 qpair failed and we were unable to recover it. 00:37:28.366 [2024-09-29 16:45:28.620555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.366 [2024-09-29 16:45:28.620589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.366 qpair failed and we were unable to recover it. 00:37:28.366 [2024-09-29 16:45:28.620719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.367 [2024-09-29 16:45:28.620754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.367 qpair failed and we were unable to recover it. 00:37:28.367 [2024-09-29 16:45:28.620921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.367 [2024-09-29 16:45:28.620958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.367 qpair failed and we were unable to recover it. 
00:37:28.367 [2024-09-29 16:45:28.621097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.367 [2024-09-29 16:45:28.621134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.367 qpair failed and we were unable to recover it.
00:37:28.367 [2024-09-29 16:45:28.621263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.367 [2024-09-29 16:45:28.621300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.367 qpair failed and we were unable to recover it.
00:37:28.367 [2024-09-29 16:45:28.621456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.367 [2024-09-29 16:45:28.621494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.367 qpair failed and we were unable to recover it.
00:37:28.367 [2024-09-29 16:45:28.621683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.367 [2024-09-29 16:45:28.621722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.367 qpair failed and we were unable to recover it.
00:37:28.367 [2024-09-29 16:45:28.621867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.367 [2024-09-29 16:45:28.621904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.367 qpair failed and we were unable to recover it.
00:37:28.367 [2024-09-29 16:45:28.622065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.367 [2024-09-29 16:45:28.622118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.367 qpair failed and we were unable to recover it.
00:37:28.367 [2024-09-29 16:45:28.622326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.367 [2024-09-29 16:45:28.622360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.367 qpair failed and we were unable to recover it.
00:37:28.367 [2024-09-29 16:45:28.622529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.367 [2024-09-29 16:45:28.622562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.367 qpair failed and we were unable to recover it.
00:37:28.367 [2024-09-29 16:45:28.622688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.367 [2024-09-29 16:45:28.622727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.367 qpair failed and we were unable to recover it.
00:37:28.367 [2024-09-29 16:45:28.622886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.367 [2024-09-29 16:45:28.622934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.367 qpair failed and we were unable to recover it.
00:37:28.367 [2024-09-29 16:45:28.623100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.367 [2024-09-29 16:45:28.623136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.367 qpair failed and we were unable to recover it.
00:37:28.367 [2024-09-29 16:45:28.623253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.367 [2024-09-29 16:45:28.623286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.367 qpair failed and we were unable to recover it.
00:37:28.367 [2024-09-29 16:45:28.623428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.367 [2024-09-29 16:45:28.623461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.367 qpair failed and we were unable to recover it.
00:37:28.367 [2024-09-29 16:45:28.623605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.367 [2024-09-29 16:45:28.623638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.367 qpair failed and we were unable to recover it.
00:37:28.367 [2024-09-29 16:45:28.623769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.367 [2024-09-29 16:45:28.623804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.367 qpair failed and we were unable to recover it.
00:37:28.367 [2024-09-29 16:45:28.623985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.367 [2024-09-29 16:45:28.624023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.367 qpair failed and we were unable to recover it.
00:37:28.367 [2024-09-29 16:45:28.624208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.367 [2024-09-29 16:45:28.624246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.367 qpair failed and we were unable to recover it.
00:37:28.367 [2024-09-29 16:45:28.624371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.367 [2024-09-29 16:45:28.624409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.367 qpair failed and we were unable to recover it.
00:37:28.367 [2024-09-29 16:45:28.624572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.367 [2024-09-29 16:45:28.624606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.367 qpair failed and we were unable to recover it.
00:37:28.367 [2024-09-29 16:45:28.624718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.367 [2024-09-29 16:45:28.624752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.367 qpair failed and we were unable to recover it.
00:37:28.367 [2024-09-29 16:45:28.624901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.367 [2024-09-29 16:45:28.624935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.367 qpair failed and we were unable to recover it.
00:37:28.367 [2024-09-29 16:45:28.625084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.367 [2024-09-29 16:45:28.625137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.367 qpair failed and we were unable to recover it.
00:37:28.367 [2024-09-29 16:45:28.625278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.367 [2024-09-29 16:45:28.625315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.367 qpair failed and we were unable to recover it.
00:37:28.367 [2024-09-29 16:45:28.625455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.367 [2024-09-29 16:45:28.625494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.367 qpair failed and we were unable to recover it.
00:37:28.367 [2024-09-29 16:45:28.625684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.367 [2024-09-29 16:45:28.625733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.367 qpair failed and we were unable to recover it.
00:37:28.367 [2024-09-29 16:45:28.625918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.367 [2024-09-29 16:45:28.625955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.367 qpair failed and we were unable to recover it.
00:37:28.367 [2024-09-29 16:45:28.626114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.367 [2024-09-29 16:45:28.626182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.367 qpair failed and we were unable to recover it.
00:37:28.367 [2024-09-29 16:45:28.626348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.367 [2024-09-29 16:45:28.626402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.367 qpair failed and we were unable to recover it.
00:37:28.367 [2024-09-29 16:45:28.626541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.367 [2024-09-29 16:45:28.626574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.367 qpair failed and we were unable to recover it.
00:37:28.367 [2024-09-29 16:45:28.626711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.367 [2024-09-29 16:45:28.626747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.367 qpair failed and we were unable to recover it.
00:37:28.367 [2024-09-29 16:45:28.626914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.367 [2024-09-29 16:45:28.626962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.367 qpair failed and we were unable to recover it.
00:37:28.367 [2024-09-29 16:45:28.627111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.367 [2024-09-29 16:45:28.627146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.367 qpair failed and we were unable to recover it.
00:37:28.367 [2024-09-29 16:45:28.627265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.367 [2024-09-29 16:45:28.627299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.367 qpair failed and we were unable to recover it.
00:37:28.367 [2024-09-29 16:45:28.627471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.367 [2024-09-29 16:45:28.627505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.367 qpair failed and we were unable to recover it.
00:37:28.367 [2024-09-29 16:45:28.627646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.367 [2024-09-29 16:45:28.627686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.367 qpair failed and we were unable to recover it.
00:37:28.367 [2024-09-29 16:45:28.627815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.367 [2024-09-29 16:45:28.627850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.367 qpair failed and we were unable to recover it.
00:37:28.367 [2024-09-29 16:45:28.627982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.367 [2024-09-29 16:45:28.628015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.367 qpair failed and we were unable to recover it.
00:37:28.368 [2024-09-29 16:45:28.628162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.368 [2024-09-29 16:45:28.628196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.368 qpair failed and we were unable to recover it.
00:37:28.368 [2024-09-29 16:45:28.628336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.368 [2024-09-29 16:45:28.628373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.368 qpair failed and we were unable to recover it.
00:37:28.368 [2024-09-29 16:45:28.628508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.368 [2024-09-29 16:45:28.628547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.368 qpair failed and we were unable to recover it.
00:37:28.368 [2024-09-29 16:45:28.628747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.368 [2024-09-29 16:45:28.628782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.368 qpair failed and we were unable to recover it.
00:37:28.368 [2024-09-29 16:45:28.628955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.368 [2024-09-29 16:45:28.628994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.368 qpair failed and we were unable to recover it.
00:37:28.368 [2024-09-29 16:45:28.629129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.368 [2024-09-29 16:45:28.629164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.368 qpair failed and we were unable to recover it.
00:37:28.368 [2024-09-29 16:45:28.629306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.368 [2024-09-29 16:45:28.629357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.368 qpair failed and we were unable to recover it.
00:37:28.368 [2024-09-29 16:45:28.629466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.368 [2024-09-29 16:45:28.629499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.368 qpair failed and we were unable to recover it.
00:37:28.368 [2024-09-29 16:45:28.629624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.368 [2024-09-29 16:45:28.629658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.368 qpair failed and we were unable to recover it.
00:37:28.368 [2024-09-29 16:45:28.629806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.368 [2024-09-29 16:45:28.629839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.368 qpair failed and we were unable to recover it.
00:37:28.368 [2024-09-29 16:45:28.629966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.368 [2024-09-29 16:45:28.630000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.368 qpair failed and we were unable to recover it.
00:37:28.368 [2024-09-29 16:45:28.630117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.368 [2024-09-29 16:45:28.630157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.368 qpair failed and we were unable to recover it.
00:37:28.368 [2024-09-29 16:45:28.630297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.368 [2024-09-29 16:45:28.630331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.368 qpair failed and we were unable to recover it.
00:37:28.368 [2024-09-29 16:45:28.630480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.368 [2024-09-29 16:45:28.630514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.368 qpair failed and we were unable to recover it.
00:37:28.368 [2024-09-29 16:45:28.630661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.368 [2024-09-29 16:45:28.630702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.368 qpair failed and we were unable to recover it.
00:37:28.368 [2024-09-29 16:45:28.630809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.368 [2024-09-29 16:45:28.630861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.368 qpair failed and we were unable to recover it.
00:37:28.368 [2024-09-29 16:45:28.630984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.368 [2024-09-29 16:45:28.631021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.368 qpair failed and we were unable to recover it.
00:37:28.368 [2024-09-29 16:45:28.631154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.368 [2024-09-29 16:45:28.631190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.368 qpair failed and we were unable to recover it.
00:37:28.368 [2024-09-29 16:45:28.631333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.368 [2024-09-29 16:45:28.631370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.368 qpair failed and we were unable to recover it.
00:37:28.368 [2024-09-29 16:45:28.631536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.368 [2024-09-29 16:45:28.631572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.368 qpair failed and we were unable to recover it.
00:37:28.368 [2024-09-29 16:45:28.631727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.368 [2024-09-29 16:45:28.631765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.368 qpair failed and we were unable to recover it.
00:37:28.368 [2024-09-29 16:45:28.631931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.368 [2024-09-29 16:45:28.631984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.368 qpair failed and we were unable to recover it.
00:37:28.368 [2024-09-29 16:45:28.632149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.368 [2024-09-29 16:45:28.632201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.368 qpair failed and we were unable to recover it.
00:37:28.368 [2024-09-29 16:45:28.632399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.368 [2024-09-29 16:45:28.632452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.368 qpair failed and we were unable to recover it.
00:37:28.368 [2024-09-29 16:45:28.632570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.368 [2024-09-29 16:45:28.632603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.368 qpair failed and we were unable to recover it.
00:37:28.368 [2024-09-29 16:45:28.632749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.368 [2024-09-29 16:45:28.632797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.368 qpair failed and we were unable to recover it.
00:37:28.368 [2024-09-29 16:45:28.632922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.368 [2024-09-29 16:45:28.632956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.368 qpair failed and we were unable to recover it.
00:37:28.368 [2024-09-29 16:45:28.633079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.368 [2024-09-29 16:45:28.633112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.368 qpair failed and we were unable to recover it.
00:37:28.368 [2024-09-29 16:45:28.633271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.368 [2024-09-29 16:45:28.633308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.368 qpair failed and we were unable to recover it.
00:37:28.368 [2024-09-29 16:45:28.633469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.368 [2024-09-29 16:45:28.633525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.368 qpair failed and we were unable to recover it.
00:37:28.368 [2024-09-29 16:45:28.633690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.368 [2024-09-29 16:45:28.633742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.368 qpair failed and we were unable to recover it.
00:37:28.368 [2024-09-29 16:45:28.633878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.368 [2024-09-29 16:45:28.633923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.368 qpair failed and we were unable to recover it.
00:37:28.368 [2024-09-29 16:45:28.634074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.368 [2024-09-29 16:45:28.634112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.368 qpair failed and we were unable to recover it.
00:37:28.368 [2024-09-29 16:45:28.634264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.368 [2024-09-29 16:45:28.634302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.368 qpair failed and we were unable to recover it.
00:37:28.368 [2024-09-29 16:45:28.634453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.368 [2024-09-29 16:45:28.634489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.368 qpair failed and we were unable to recover it.
00:37:28.368 [2024-09-29 16:45:28.634627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.368 [2024-09-29 16:45:28.634677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.368 qpair failed and we were unable to recover it.
00:37:28.368 [2024-09-29 16:45:28.634833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.368 [2024-09-29 16:45:28.634883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.368 qpair failed and we were unable to recover it.
00:37:28.368 [2024-09-29 16:45:28.635072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.368 [2024-09-29 16:45:28.635124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.368 qpair failed and we were unable to recover it.
00:37:28.368 [2024-09-29 16:45:28.635317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.369 [2024-09-29 16:45:28.635354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.369 qpair failed and we were unable to recover it.
00:37:28.369 [2024-09-29 16:45:28.635507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.369 [2024-09-29 16:45:28.635541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.369 qpair failed and we were unable to recover it.
00:37:28.369 [2024-09-29 16:45:28.635653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.369 [2024-09-29 16:45:28.635697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.369 qpair failed and we were unable to recover it.
00:37:28.369 [2024-09-29 16:45:28.635834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.369 [2024-09-29 16:45:28.635884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.369 qpair failed and we were unable to recover it.
00:37:28.369 [2024-09-29 16:45:28.636034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.369 [2024-09-29 16:45:28.636068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.369 qpair failed and we were unable to recover it.
00:37:28.369 [2024-09-29 16:45:28.636212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.369 [2024-09-29 16:45:28.636248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.369 qpair failed and we were unable to recover it.
00:37:28.369 [2024-09-29 16:45:28.636373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.369 [2024-09-29 16:45:28.636407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.369 qpair failed and we were unable to recover it.
00:37:28.369 [2024-09-29 16:45:28.636527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.369 [2024-09-29 16:45:28.636561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.369 qpair failed and we were unable to recover it.
00:37:28.369 [2024-09-29 16:45:28.636727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.369 [2024-09-29 16:45:28.636776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.369 qpair failed and we were unable to recover it.
00:37:28.369 [2024-09-29 16:45:28.636903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.369 [2024-09-29 16:45:28.636943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.369 qpair failed and we were unable to recover it.
00:37:28.369 [2024-09-29 16:45:28.637061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.369 [2024-09-29 16:45:28.637096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.369 qpair failed and we were unable to recover it.
00:37:28.369 [2024-09-29 16:45:28.637229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.369 [2024-09-29 16:45:28.637262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.369 qpair failed and we were unable to recover it.
00:37:28.369 [2024-09-29 16:45:28.637432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.369 [2024-09-29 16:45:28.637479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.369 qpair failed and we were unable to recover it.
00:37:28.369 [2024-09-29 16:45:28.637629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.369 [2024-09-29 16:45:28.637670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.369 qpair failed and we were unable to recover it.
00:37:28.369 [2024-09-29 16:45:28.637801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.369 [2024-09-29 16:45:28.637845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.369 qpair failed and we were unable to recover it.
00:37:28.369 [2024-09-29 16:45:28.637957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.369 [2024-09-29 16:45:28.637991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.369 qpair failed and we were unable to recover it.
00:37:28.369 [2024-09-29 16:45:28.640803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.369 [2024-09-29 16:45:28.640853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.369 qpair failed and we were unable to recover it.
00:37:28.369 [2024-09-29 16:45:28.641053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.369 [2024-09-29 16:45:28.641090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.369 qpair failed and we were unable to recover it.
00:37:28.369 [2024-09-29 16:45:28.641215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.369 [2024-09-29 16:45:28.641250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.369 qpair failed and we were unable to recover it.
00:37:28.369 [2024-09-29 16:45:28.641431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.369 [2024-09-29 16:45:28.641466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.369 qpair failed and we were unable to recover it.
00:37:28.369 [2024-09-29 16:45:28.641615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.369 [2024-09-29 16:45:28.641649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.369 qpair failed and we were unable to recover it. 00:37:28.369 [2024-09-29 16:45:28.641779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.369 [2024-09-29 16:45:28.641814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.369 qpair failed and we were unable to recover it. 00:37:28.369 [2024-09-29 16:45:28.641929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.369 [2024-09-29 16:45:28.641963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.369 qpair failed and we were unable to recover it. 00:37:28.369 [2024-09-29 16:45:28.642116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.369 [2024-09-29 16:45:28.642150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.369 qpair failed and we were unable to recover it. 00:37:28.369 [2024-09-29 16:45:28.642295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.369 [2024-09-29 16:45:28.642358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.369 qpair failed and we were unable to recover it. 
00:37:28.369 [2024-09-29 16:45:28.642554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.369 [2024-09-29 16:45:28.642594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.369 qpair failed and we were unable to recover it. 00:37:28.369 [2024-09-29 16:45:28.642757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.369 [2024-09-29 16:45:28.642793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.369 qpair failed and we were unable to recover it. 00:37:28.369 [2024-09-29 16:45:28.642928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.369 [2024-09-29 16:45:28.642962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.369 qpair failed and we were unable to recover it. 00:37:28.369 [2024-09-29 16:45:28.643097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.369 [2024-09-29 16:45:28.643146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.369 qpair failed and we were unable to recover it. 00:37:28.369 [2024-09-29 16:45:28.643329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.369 [2024-09-29 16:45:28.643372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.369 qpair failed and we were unable to recover it. 
00:37:28.369 [2024-09-29 16:45:28.643503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.369 [2024-09-29 16:45:28.643538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.369 qpair failed and we were unable to recover it. 00:37:28.369 [2024-09-29 16:45:28.643655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.369 [2024-09-29 16:45:28.643705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.369 qpair failed and we were unable to recover it. 00:37:28.369 [2024-09-29 16:45:28.643892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.369 [2024-09-29 16:45:28.643938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.369 qpair failed and we were unable to recover it. 00:37:28.369 [2024-09-29 16:45:28.644095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.369 [2024-09-29 16:45:28.644149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.369 qpair failed and we were unable to recover it. 00:37:28.369 [2024-09-29 16:45:28.644286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.370 [2024-09-29 16:45:28.644323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.370 qpair failed and we were unable to recover it. 
00:37:28.370 [2024-09-29 16:45:28.644447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.370 [2024-09-29 16:45:28.644483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.370 qpair failed and we were unable to recover it. 00:37:28.370 [2024-09-29 16:45:28.644660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.370 [2024-09-29 16:45:28.644700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.370 qpair failed and we were unable to recover it. 00:37:28.370 [2024-09-29 16:45:28.644844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.370 [2024-09-29 16:45:28.644877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.370 qpair failed and we were unable to recover it. 00:37:28.370 [2024-09-29 16:45:28.645049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.370 [2024-09-29 16:45:28.645116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.370 qpair failed and we were unable to recover it. 00:37:28.370 [2024-09-29 16:45:28.645315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.370 [2024-09-29 16:45:28.645356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.370 qpair failed and we were unable to recover it. 
00:37:28.370 [2024-09-29 16:45:28.645492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.370 [2024-09-29 16:45:28.645530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.370 qpair failed and we were unable to recover it. 00:37:28.370 [2024-09-29 16:45:28.645662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.370 [2024-09-29 16:45:28.645707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.370 qpair failed and we were unable to recover it. 00:37:28.370 [2024-09-29 16:45:28.645866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.370 [2024-09-29 16:45:28.645900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.370 qpair failed and we were unable to recover it. 00:37:28.370 [2024-09-29 16:45:28.646034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.370 [2024-09-29 16:45:28.646069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.370 qpair failed and we were unable to recover it. 00:37:28.370 [2024-09-29 16:45:28.646184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.370 [2024-09-29 16:45:28.646249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.370 qpair failed and we were unable to recover it. 
00:37:28.370 [2024-09-29 16:45:28.646436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.370 [2024-09-29 16:45:28.646474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.370 qpair failed and we were unable to recover it. 00:37:28.370 [2024-09-29 16:45:28.646636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.370 [2024-09-29 16:45:28.646670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.370 qpair failed and we were unable to recover it. 00:37:28.370 [2024-09-29 16:45:28.646800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.370 [2024-09-29 16:45:28.646835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.370 qpair failed and we were unable to recover it. 00:37:28.370 [2024-09-29 16:45:28.646955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.370 [2024-09-29 16:45:28.646988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.370 qpair failed and we were unable to recover it. 00:37:28.370 [2024-09-29 16:45:28.647101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.370 [2024-09-29 16:45:28.647133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.370 qpair failed and we were unable to recover it. 
00:37:28.370 [2024-09-29 16:45:28.647311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.370 [2024-09-29 16:45:28.647362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.370 qpair failed and we were unable to recover it. 00:37:28.370 [2024-09-29 16:45:28.647521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.370 [2024-09-29 16:45:28.647558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.370 qpair failed and we were unable to recover it. 00:37:28.370 [2024-09-29 16:45:28.647756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.370 [2024-09-29 16:45:28.647804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.370 qpair failed and we were unable to recover it. 00:37:28.370 [2024-09-29 16:45:28.647934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.370 [2024-09-29 16:45:28.647974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.370 qpair failed and we were unable to recover it. 00:37:28.370 [2024-09-29 16:45:28.648124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.370 [2024-09-29 16:45:28.648157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.370 qpair failed and we were unable to recover it. 
00:37:28.370 [2024-09-29 16:45:28.648366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.370 [2024-09-29 16:45:28.648422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.370 qpair failed and we were unable to recover it. 00:37:28.370 [2024-09-29 16:45:28.648555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.370 [2024-09-29 16:45:28.648593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.370 qpair failed and we were unable to recover it. 00:37:28.370 [2024-09-29 16:45:28.648771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.370 [2024-09-29 16:45:28.648818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.370 qpair failed and we were unable to recover it. 00:37:28.370 [2024-09-29 16:45:28.648967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.370 [2024-09-29 16:45:28.649002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.370 qpair failed and we were unable to recover it. 00:37:28.370 [2024-09-29 16:45:28.649124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.370 [2024-09-29 16:45:28.649176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.370 qpair failed and we were unable to recover it. 
00:37:28.370 [2024-09-29 16:45:28.649365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.370 [2024-09-29 16:45:28.649421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.370 qpair failed and we were unable to recover it. 00:37:28.370 [2024-09-29 16:45:28.649559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.370 [2024-09-29 16:45:28.649617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.370 qpair failed and we were unable to recover it. 00:37:28.370 [2024-09-29 16:45:28.649787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.370 [2024-09-29 16:45:28.649834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.370 qpair failed and we were unable to recover it. 00:37:28.370 [2024-09-29 16:45:28.649984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.370 [2024-09-29 16:45:28.650020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.370 qpair failed and we were unable to recover it. 00:37:28.370 [2024-09-29 16:45:28.650187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.370 [2024-09-29 16:45:28.650225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.370 qpair failed and we were unable to recover it. 
00:37:28.370 [2024-09-29 16:45:28.650391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.370 [2024-09-29 16:45:28.650429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.370 qpair failed and we were unable to recover it. 00:37:28.370 [2024-09-29 16:45:28.650584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.370 [2024-09-29 16:45:28.650638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.370 qpair failed and we were unable to recover it. 00:37:28.370 [2024-09-29 16:45:28.650837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.370 [2024-09-29 16:45:28.650884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.370 qpair failed and we were unable to recover it. 00:37:28.370 [2024-09-29 16:45:28.651108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.370 [2024-09-29 16:45:28.651166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.370 qpair failed and we were unable to recover it. 00:37:28.370 [2024-09-29 16:45:28.651389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.370 [2024-09-29 16:45:28.651445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.370 qpair failed and we were unable to recover it. 
00:37:28.370 [2024-09-29 16:45:28.651599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.370 [2024-09-29 16:45:28.651636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.370 qpair failed and we were unable to recover it. 00:37:28.370 [2024-09-29 16:45:28.651789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.370 [2024-09-29 16:45:28.651822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.370 qpair failed and we were unable to recover it. 00:37:28.370 [2024-09-29 16:45:28.651966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.370 [2024-09-29 16:45:28.652002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.371 qpair failed and we were unable to recover it. 00:37:28.371 [2024-09-29 16:45:28.652180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.371 [2024-09-29 16:45:28.652236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.371 qpair failed and we were unable to recover it. 00:37:28.371 [2024-09-29 16:45:28.652379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.371 [2024-09-29 16:45:28.652435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.371 qpair failed and we were unable to recover it. 
00:37:28.371 [2024-09-29 16:45:28.652579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.371 [2024-09-29 16:45:28.652613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.371 qpair failed and we were unable to recover it. 00:37:28.371 [2024-09-29 16:45:28.652768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.371 [2024-09-29 16:45:28.652801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.371 qpair failed and we were unable to recover it. 00:37:28.371 [2024-09-29 16:45:28.652917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.371 [2024-09-29 16:45:28.652951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.371 qpair failed and we were unable to recover it. 00:37:28.371 [2024-09-29 16:45:28.653092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.371 [2024-09-29 16:45:28.653145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.371 qpair failed and we were unable to recover it. 00:37:28.371 [2024-09-29 16:45:28.653303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.371 [2024-09-29 16:45:28.653339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.371 qpair failed and we were unable to recover it. 
00:37:28.371 [2024-09-29 16:45:28.653489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.371 [2024-09-29 16:45:28.653545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.371 qpair failed and we were unable to recover it. 00:37:28.371 [2024-09-29 16:45:28.653715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.371 [2024-09-29 16:45:28.653765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.371 qpair failed and we were unable to recover it. 00:37:28.371 [2024-09-29 16:45:28.653874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.371 [2024-09-29 16:45:28.653908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.371 qpair failed and we were unable to recover it. 00:37:28.371 [2024-09-29 16:45:28.654074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.371 [2024-09-29 16:45:28.654110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.371 qpair failed and we were unable to recover it. 00:37:28.371 [2024-09-29 16:45:28.654222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.371 [2024-09-29 16:45:28.654258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.371 qpair failed and we were unable to recover it. 
00:37:28.371 [2024-09-29 16:45:28.654400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.371 [2024-09-29 16:45:28.654438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.371 qpair failed and we were unable to recover it. 00:37:28.371 [2024-09-29 16:45:28.654646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.371 [2024-09-29 16:45:28.654691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.371 qpair failed and we were unable to recover it. 00:37:28.371 [2024-09-29 16:45:28.654832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.371 [2024-09-29 16:45:28.654864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.371 qpair failed and we were unable to recover it. 00:37:28.371 [2024-09-29 16:45:28.655019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.371 [2024-09-29 16:45:28.655052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.371 qpair failed and we were unable to recover it. 00:37:28.371 [2024-09-29 16:45:28.655265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.371 [2024-09-29 16:45:28.655301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.371 qpair failed and we were unable to recover it. 
00:37:28.371 [2024-09-29 16:45:28.655454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.371 [2024-09-29 16:45:28.655490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.371 qpair failed and we were unable to recover it. 00:37:28.371 [2024-09-29 16:45:28.655628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.371 [2024-09-29 16:45:28.655687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.371 qpair failed and we were unable to recover it. 00:37:28.371 [2024-09-29 16:45:28.655852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.371 [2024-09-29 16:45:28.655884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.371 qpair failed and we were unable to recover it. 00:37:28.371 [2024-09-29 16:45:28.656039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.371 [2024-09-29 16:45:28.656097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.371 qpair failed and we were unable to recover it. 00:37:28.371 [2024-09-29 16:45:28.656261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.371 [2024-09-29 16:45:28.656299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.371 qpair failed and we were unable to recover it. 
00:37:28.371 [2024-09-29 16:45:28.656511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.371 [2024-09-29 16:45:28.656549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.371 qpair failed and we were unable to recover it. 00:37:28.371 [2024-09-29 16:45:28.656734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.371 [2024-09-29 16:45:28.656769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.371 qpair failed and we were unable to recover it. 00:37:28.371 [2024-09-29 16:45:28.656897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.371 [2024-09-29 16:45:28.656944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.371 qpair failed and we were unable to recover it. 00:37:28.371 [2024-09-29 16:45:28.657127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.371 [2024-09-29 16:45:28.657194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.371 qpair failed and we were unable to recover it. 00:37:28.371 [2024-09-29 16:45:28.657368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.371 [2024-09-29 16:45:28.657423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.371 qpair failed and we were unable to recover it. 
00:37:28.371 [2024-09-29 16:45:28.657536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.371 [2024-09-29 16:45:28.657570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.371 qpair failed and we were unable to recover it. 00:37:28.371 [2024-09-29 16:45:28.657684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.371 [2024-09-29 16:45:28.657717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.371 qpair failed and we were unable to recover it. 00:37:28.371 [2024-09-29 16:45:28.657826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.371 [2024-09-29 16:45:28.657860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.371 qpair failed and we were unable to recover it. 00:37:28.371 [2024-09-29 16:45:28.658005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.371 [2024-09-29 16:45:28.658043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.371 qpair failed and we were unable to recover it. 00:37:28.371 [2024-09-29 16:45:28.658172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.371 [2024-09-29 16:45:28.658208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.371 qpair failed and we were unable to recover it. 
00:37:28.371 [2024-09-29 16:45:28.658334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.371 [2024-09-29 16:45:28.658383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.371 qpair failed and we were unable to recover it. 00:37:28.371 [2024-09-29 16:45:28.658552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.371 [2024-09-29 16:45:28.658589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.371 qpair failed and we were unable to recover it. 00:37:28.371 [2024-09-29 16:45:28.658776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.371 [2024-09-29 16:45:28.658824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.371 qpair failed and we were unable to recover it. 00:37:28.371 [2024-09-29 16:45:28.658973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.371 [2024-09-29 16:45:28.659041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.371 qpair failed and we were unable to recover it. 00:37:28.371 [2024-09-29 16:45:28.659194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.371 [2024-09-29 16:45:28.659249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.371 qpair failed and we were unable to recover it. 
00:37:28.371 [2024-09-29 16:45:28.659408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.371 [2024-09-29 16:45:28.659446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.371 qpair failed and we were unable to recover it. 00:37:28.371 [2024-09-29 16:45:28.659604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.372 [2024-09-29 16:45:28.659643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.372 qpair failed and we were unable to recover it. 00:37:28.372 [2024-09-29 16:45:28.659833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.372 [2024-09-29 16:45:28.659869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.372 qpair failed and we were unable to recover it. 00:37:28.372 [2024-09-29 16:45:28.660039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.372 [2024-09-29 16:45:28.660102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.372 qpair failed and we were unable to recover it. 00:37:28.372 [2024-09-29 16:45:28.660270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.372 [2024-09-29 16:45:28.660324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.372 qpair failed and we were unable to recover it. 
00:37:28.372 [2024-09-29 16:45:28.660509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.372 [2024-09-29 16:45:28.660545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.372 qpair failed and we were unable to recover it. 00:37:28.372 [2024-09-29 16:45:28.660700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.372 [2024-09-29 16:45:28.660753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.372 qpair failed and we were unable to recover it. 00:37:28.372 [2024-09-29 16:45:28.660896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.372 [2024-09-29 16:45:28.660929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.372 qpair failed and we were unable to recover it. 00:37:28.372 [2024-09-29 16:45:28.661061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.372 [2024-09-29 16:45:28.661099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.372 qpair failed and we were unable to recover it. 00:37:28.372 [2024-09-29 16:45:28.661276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.372 [2024-09-29 16:45:28.661334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.372 qpair failed and we were unable to recover it. 
00:37:28.372 [2024-09-29 16:45:28.661470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.372 [2024-09-29 16:45:28.661508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.372 qpair failed and we were unable to recover it. 00:37:28.372 [2024-09-29 16:45:28.661662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.372 [2024-09-29 16:45:28.661721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.372 qpair failed and we were unable to recover it. 00:37:28.372 [2024-09-29 16:45:28.661869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.372 [2024-09-29 16:45:28.661929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.372 qpair failed and we were unable to recover it. 00:37:28.372 [2024-09-29 16:45:28.662080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.372 [2024-09-29 16:45:28.662157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.372 qpair failed and we were unable to recover it. 00:37:28.372 [2024-09-29 16:45:28.662337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.372 [2024-09-29 16:45:28.662393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.372 qpair failed and we were unable to recover it. 
00:37:28.372 [2024-09-29 16:45:28.662525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.372 [2024-09-29 16:45:28.662563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.372 qpair failed and we were unable to recover it. 00:37:28.372 [2024-09-29 16:45:28.662692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.372 [2024-09-29 16:45:28.662747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.372 qpair failed and we were unable to recover it. 00:37:28.372 [2024-09-29 16:45:28.662874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.372 [2024-09-29 16:45:28.662909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.372 qpair failed and we were unable to recover it. 00:37:28.372 [2024-09-29 16:45:28.663089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.372 [2024-09-29 16:45:28.663123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.372 qpair failed and we were unable to recover it. 00:37:28.372 [2024-09-29 16:45:28.663241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.372 [2024-09-29 16:45:28.663291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.372 qpair failed and we were unable to recover it. 
00:37:28.372 [2024-09-29 16:45:28.663439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.372 [2024-09-29 16:45:28.663475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.372 qpair failed and we were unable to recover it. 00:37:28.372 [2024-09-29 16:45:28.663639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.372 [2024-09-29 16:45:28.663678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.372 qpair failed and we were unable to recover it. 00:37:28.372 [2024-09-29 16:45:28.663796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.372 [2024-09-29 16:45:28.663829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.372 qpair failed and we were unable to recover it. 00:37:28.372 [2024-09-29 16:45:28.664025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.372 [2024-09-29 16:45:28.664083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.372 qpair failed and we were unable to recover it. 00:37:28.372 [2024-09-29 16:45:28.664214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.372 [2024-09-29 16:45:28.664266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.372 qpair failed and we were unable to recover it. 
00:37:28.372 [2024-09-29 16:45:28.664443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.372 [2024-09-29 16:45:28.664476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.372 qpair failed and we were unable to recover it. 00:37:28.372 [2024-09-29 16:45:28.664650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.372 [2024-09-29 16:45:28.664697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.372 qpair failed and we were unable to recover it. 00:37:28.372 [2024-09-29 16:45:28.664890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.372 [2024-09-29 16:45:28.664924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.372 qpair failed and we were unable to recover it. 00:37:28.372 [2024-09-29 16:45:28.665091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.372 [2024-09-29 16:45:28.665131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.372 qpair failed and we were unable to recover it. 00:37:28.372 [2024-09-29 16:45:28.665281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.372 [2024-09-29 16:45:28.665318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.372 qpair failed and we were unable to recover it. 
00:37:28.372 [2024-09-29 16:45:28.665529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.372 [2024-09-29 16:45:28.665567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.372 qpair failed and we were unable to recover it. 00:37:28.372 [2024-09-29 16:45:28.665704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.372 [2024-09-29 16:45:28.665755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.372 qpair failed and we were unable to recover it. 00:37:28.372 [2024-09-29 16:45:28.665871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.372 [2024-09-29 16:45:28.665904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.372 qpair failed and we were unable to recover it. 00:37:28.372 [2024-09-29 16:45:28.666046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.372 [2024-09-29 16:45:28.666080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.372 qpair failed and we were unable to recover it. 00:37:28.372 [2024-09-29 16:45:28.666203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.372 [2024-09-29 16:45:28.666258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.372 qpair failed and we were unable to recover it. 
00:37:28.372 [2024-09-29 16:45:28.666415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.372 [2024-09-29 16:45:28.666453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.372 qpair failed and we were unable to recover it. 00:37:28.372 [2024-09-29 16:45:28.666589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.372 [2024-09-29 16:45:28.666623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.372 qpair failed and we were unable to recover it. 00:37:28.372 [2024-09-29 16:45:28.666782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.372 [2024-09-29 16:45:28.666816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.372 qpair failed and we were unable to recover it. 00:37:28.372 [2024-09-29 16:45:28.666932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.372 [2024-09-29 16:45:28.666994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.372 qpair failed and we were unable to recover it. 00:37:28.372 [2024-09-29 16:45:28.667159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.372 [2024-09-29 16:45:28.667192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.373 qpair failed and we were unable to recover it. 
00:37:28.373 [2024-09-29 16:45:28.667309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.373 [2024-09-29 16:45:28.667360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.373 qpair failed and we were unable to recover it. 00:37:28.373 [2024-09-29 16:45:28.667550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.373 [2024-09-29 16:45:28.667587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.373 qpair failed and we were unable to recover it. 00:37:28.373 [2024-09-29 16:45:28.667736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.373 [2024-09-29 16:45:28.667770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.373 qpair failed and we were unable to recover it. 00:37:28.373 [2024-09-29 16:45:28.667893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.373 [2024-09-29 16:45:28.667926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.373 qpair failed and we were unable to recover it. 00:37:28.373 [2024-09-29 16:45:28.668048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.373 [2024-09-29 16:45:28.668101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.373 qpair failed and we were unable to recover it. 
00:37:28.373 [2024-09-29 16:45:28.668254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.373 [2024-09-29 16:45:28.668314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.373 qpair failed and we were unable to recover it. 00:37:28.373 [2024-09-29 16:45:28.668481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.373 [2024-09-29 16:45:28.668518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.373 qpair failed and we were unable to recover it. 00:37:28.373 [2024-09-29 16:45:28.668644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.373 [2024-09-29 16:45:28.668689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.373 qpair failed and we were unable to recover it. 00:37:28.373 [2024-09-29 16:45:28.668844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.373 [2024-09-29 16:45:28.668893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.373 qpair failed and we were unable to recover it. 00:37:28.373 [2024-09-29 16:45:28.669054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.373 [2024-09-29 16:45:28.669111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.373 qpair failed and we were unable to recover it. 
00:37:28.373 [2024-09-29 16:45:28.669318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.373 [2024-09-29 16:45:28.669371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.373 qpair failed and we were unable to recover it. 00:37:28.373 [2024-09-29 16:45:28.669490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.373 [2024-09-29 16:45:28.669526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.373 qpair failed and we were unable to recover it. 00:37:28.373 [2024-09-29 16:45:28.669646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.373 [2024-09-29 16:45:28.669691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.373 qpair failed and we were unable to recover it. 00:37:28.373 [2024-09-29 16:45:28.669838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.373 [2024-09-29 16:45:28.669891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.373 qpair failed and we were unable to recover it. 00:37:28.373 [2024-09-29 16:45:28.670034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.373 [2024-09-29 16:45:28.670069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.373 qpair failed and we were unable to recover it. 
00:37:28.373 [2024-09-29 16:45:28.670187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.373 [2024-09-29 16:45:28.670221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.373 qpair failed and we were unable to recover it. 00:37:28.373 [2024-09-29 16:45:28.670363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.373 [2024-09-29 16:45:28.670397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.373 qpair failed and we were unable to recover it. 00:37:28.373 [2024-09-29 16:45:28.670568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.373 [2024-09-29 16:45:28.670604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.373 qpair failed and we were unable to recover it. 00:37:28.373 [2024-09-29 16:45:28.670722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.373 [2024-09-29 16:45:28.670756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.373 qpair failed and we were unable to recover it. 00:37:28.373 [2024-09-29 16:45:28.670929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.373 [2024-09-29 16:45:28.670964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.373 qpair failed and we were unable to recover it. 
00:37:28.373 [2024-09-29 16:45:28.671102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.373 [2024-09-29 16:45:28.671155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.373 qpair failed and we were unable to recover it. 00:37:28.373 [2024-09-29 16:45:28.671334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.373 [2024-09-29 16:45:28.671371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.373 qpair failed and we were unable to recover it. 00:37:28.373 [2024-09-29 16:45:28.671518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.373 [2024-09-29 16:45:28.671571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.373 qpair failed and we were unable to recover it. 00:37:28.373 [2024-09-29 16:45:28.671721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.373 [2024-09-29 16:45:28.671783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.373 qpair failed and we were unable to recover it. 00:37:28.373 [2024-09-29 16:45:28.671947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.373 [2024-09-29 16:45:28.671985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.373 qpair failed and we were unable to recover it. 
00:37:28.373 [2024-09-29 16:45:28.672136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.373 [2024-09-29 16:45:28.672193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.373 qpair failed and we were unable to recover it. 00:37:28.373 [2024-09-29 16:45:28.672400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.373 [2024-09-29 16:45:28.672463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.373 qpair failed and we were unable to recover it. 00:37:28.373 [2024-09-29 16:45:28.672628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.373 [2024-09-29 16:45:28.672666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.373 qpair failed and we were unable to recover it. 00:37:28.373 [2024-09-29 16:45:28.672802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.373 [2024-09-29 16:45:28.672836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.373 qpair failed and we were unable to recover it. 00:37:28.373 [2024-09-29 16:45:28.672976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.373 [2024-09-29 16:45:28.673012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.373 qpair failed and we were unable to recover it. 
00:37:28.373 [2024-09-29 16:45:28.673140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.373 [2024-09-29 16:45:28.673178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.373 qpair failed and we were unable to recover it. 00:37:28.373 [2024-09-29 16:45:28.673332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.373 [2024-09-29 16:45:28.673369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.373 qpair failed and we were unable to recover it. 00:37:28.373 [2024-09-29 16:45:28.673534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.373 [2024-09-29 16:45:28.673588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.373 qpair failed and we were unable to recover it. 00:37:28.373 [2024-09-29 16:45:28.673714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.373 [2024-09-29 16:45:28.673749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.373 qpair failed and we were unable to recover it. 00:37:28.373 [2024-09-29 16:45:28.673906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.373 [2024-09-29 16:45:28.673959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.373 qpair failed and we were unable to recover it. 
00:37:28.373 [2024-09-29 16:45:28.674121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.373 [2024-09-29 16:45:28.674173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.373 qpair failed and we were unable to recover it. 00:37:28.373 [2024-09-29 16:45:28.674345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.373 [2024-09-29 16:45:28.674397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.373 qpair failed and we were unable to recover it. 00:37:28.373 [2024-09-29 16:45:28.674560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.373 [2024-09-29 16:45:28.674609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.374 qpair failed and we were unable to recover it. 00:37:28.374 [2024-09-29 16:45:28.674734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.374 [2024-09-29 16:45:28.674769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.374 qpair failed and we were unable to recover it. 00:37:28.374 [2024-09-29 16:45:28.674889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.374 [2024-09-29 16:45:28.674923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.374 qpair failed and we were unable to recover it. 
00:37:28.374 [2024-09-29 16:45:28.675148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.374 [2024-09-29 16:45:28.675185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.374 qpair failed and we were unable to recover it. 00:37:28.374 [2024-09-29 16:45:28.675376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.374 [2024-09-29 16:45:28.675434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.374 qpair failed and we were unable to recover it. 00:37:28.374 [2024-09-29 16:45:28.675593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.374 [2024-09-29 16:45:28.675629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.374 qpair failed and we were unable to recover it. 00:37:28.374 [2024-09-29 16:45:28.675803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.374 [2024-09-29 16:45:28.675836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.374 qpair failed and we were unable to recover it. 00:37:28.374 [2024-09-29 16:45:28.675937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.374 [2024-09-29 16:45:28.675989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.374 qpair failed and we were unable to recover it. 
00:37:28.374 [2024-09-29 16:45:28.676169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.374 [2024-09-29 16:45:28.676225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.374 qpair failed and we were unable to recover it. 00:37:28.374 [2024-09-29 16:45:28.676475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.374 [2024-09-29 16:45:28.676533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.374 qpair failed and we were unable to recover it. 00:37:28.374 [2024-09-29 16:45:28.676716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.374 [2024-09-29 16:45:28.676780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.374 qpair failed and we were unable to recover it. 00:37:28.374 [2024-09-29 16:45:28.676905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.374 [2024-09-29 16:45:28.676941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.374 qpair failed and we were unable to recover it. 00:37:28.374 [2024-09-29 16:45:28.677112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.374 [2024-09-29 16:45:28.677150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.374 qpair failed and we were unable to recover it. 
00:37:28.374 [2024-09-29 16:45:28.677359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.374 [2024-09-29 16:45:28.677430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.374 qpair failed and we were unable to recover it.
00:37:28.374 [2024-09-29 16:45:28.677565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.374 [2024-09-29 16:45:28.677599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.374 qpair failed and we were unable to recover it.
00:37:28.374 [2024-09-29 16:45:28.677755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.374 [2024-09-29 16:45:28.677789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.374 qpair failed and we were unable to recover it.
00:37:28.374 [2024-09-29 16:45:28.677909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.374 [2024-09-29 16:45:28.677942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.374 qpair failed and we were unable to recover it.
00:37:28.374 [2024-09-29 16:45:28.678165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.374 [2024-09-29 16:45:28.678201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.374 qpair failed and we were unable to recover it.
00:37:28.374 [2024-09-29 16:45:28.678342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.374 [2024-09-29 16:45:28.678379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.374 qpair failed and we were unable to recover it.
00:37:28.374 [2024-09-29 16:45:28.678566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.374 [2024-09-29 16:45:28.678603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.374 qpair failed and we were unable to recover it.
00:37:28.374 [2024-09-29 16:45:28.678767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.374 [2024-09-29 16:45:28.678800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.374 qpair failed and we were unable to recover it.
00:37:28.374 [2024-09-29 16:45:28.678928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.374 [2024-09-29 16:45:28.678964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.374 qpair failed and we were unable to recover it.
00:37:28.374 [2024-09-29 16:45:28.679115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.374 [2024-09-29 16:45:28.679152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.374 qpair failed and we were unable to recover it.
00:37:28.374 [2024-09-29 16:45:28.679290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.374 [2024-09-29 16:45:28.679333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.374 qpair failed and we were unable to recover it.
00:37:28.374 [2024-09-29 16:45:28.679491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.374 [2024-09-29 16:45:28.679527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.374 qpair failed and we were unable to recover it.
00:37:28.374 [2024-09-29 16:45:28.679701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.374 [2024-09-29 16:45:28.679767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.374 qpair failed and we were unable to recover it.
00:37:28.374 [2024-09-29 16:45:28.679901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.374 [2024-09-29 16:45:28.679957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.374 qpair failed and we were unable to recover it.
00:37:28.374 [2024-09-29 16:45:28.680102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.374 [2024-09-29 16:45:28.680167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.374 qpair failed and we were unable to recover it.
00:37:28.374 [2024-09-29 16:45:28.680367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.374 [2024-09-29 16:45:28.680425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.374 qpair failed and we were unable to recover it.
00:37:28.374 [2024-09-29 16:45:28.680553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.374 [2024-09-29 16:45:28.680586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.374 qpair failed and we were unable to recover it.
00:37:28.374 [2024-09-29 16:45:28.680724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.374 [2024-09-29 16:45:28.680758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.374 qpair failed and we were unable to recover it.
00:37:28.374 [2024-09-29 16:45:28.680868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.374 [2024-09-29 16:45:28.680901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.374 qpair failed and we were unable to recover it.
00:37:28.374 [2024-09-29 16:45:28.681006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.374 [2024-09-29 16:45:28.681058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.374 qpair failed and we were unable to recover it.
00:37:28.374 [2024-09-29 16:45:28.681210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.374 [2024-09-29 16:45:28.681247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.374 qpair failed and we were unable to recover it.
00:37:28.374 [2024-09-29 16:45:28.681400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.374 [2024-09-29 16:45:28.681437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.374 qpair failed and we were unable to recover it.
00:37:28.374 [2024-09-29 16:45:28.681566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.375 [2024-09-29 16:45:28.681603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.375 qpair failed and we were unable to recover it.
00:37:28.375 [2024-09-29 16:45:28.681829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.375 [2024-09-29 16:45:28.681866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.375 qpair failed and we were unable to recover it.
00:37:28.375 [2024-09-29 16:45:28.682067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.375 [2024-09-29 16:45:28.682124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.375 qpair failed and we were unable to recover it.
00:37:28.375 [2024-09-29 16:45:28.682284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.375 [2024-09-29 16:45:28.682322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.375 qpair failed and we were unable to recover it.
00:37:28.375 [2024-09-29 16:45:28.682480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.375 [2024-09-29 16:45:28.682517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.375 qpair failed and we were unable to recover it.
00:37:28.375 [2024-09-29 16:45:28.682656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.375 [2024-09-29 16:45:28.682722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.375 qpair failed and we were unable to recover it.
00:37:28.375 [2024-09-29 16:45:28.682829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.375 [2024-09-29 16:45:28.682862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.375 qpair failed and we were unable to recover it.
00:37:28.375 [2024-09-29 16:45:28.683038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.375 [2024-09-29 16:45:28.683106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.375 qpair failed and we were unable to recover it.
00:37:28.375 [2024-09-29 16:45:28.683313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.375 [2024-09-29 16:45:28.683369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.375 qpair failed and we were unable to recover it.
00:37:28.375 [2024-09-29 16:45:28.683491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.375 [2024-09-29 16:45:28.683526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.375 qpair failed and we were unable to recover it.
00:37:28.375 [2024-09-29 16:45:28.683645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.375 [2024-09-29 16:45:28.683688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.375 qpair failed and we were unable to recover it.
00:37:28.375 [2024-09-29 16:45:28.683910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.375 [2024-09-29 16:45:28.683963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.375 qpair failed and we were unable to recover it.
00:37:28.375 [2024-09-29 16:45:28.684105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.375 [2024-09-29 16:45:28.684145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.375 qpair failed and we were unable to recover it.
00:37:28.375 [2024-09-29 16:45:28.684305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.375 [2024-09-29 16:45:28.684343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.375 qpair failed and we were unable to recover it.
00:37:28.375 [2024-09-29 16:45:28.684514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.375 [2024-09-29 16:45:28.684576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.375 qpair failed and we were unable to recover it.
00:37:28.375 [2024-09-29 16:45:28.684745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.375 [2024-09-29 16:45:28.684780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.375 qpair failed and we were unable to recover it.
00:37:28.375 [2024-09-29 16:45:28.684903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.375 [2024-09-29 16:45:28.684938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.375 qpair failed and we were unable to recover it.
00:37:28.375 [2024-09-29 16:45:28.685104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.375 [2024-09-29 16:45:28.685142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.375 qpair failed and we were unable to recover it.
00:37:28.375 [2024-09-29 16:45:28.685270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.375 [2024-09-29 16:45:28.685307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.375 qpair failed and we were unable to recover it.
00:37:28.375 [2024-09-29 16:45:28.685431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.375 [2024-09-29 16:45:28.685468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.375 qpair failed and we were unable to recover it.
00:37:28.375 [2024-09-29 16:45:28.685633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.375 [2024-09-29 16:45:28.685666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.375 qpair failed and we were unable to recover it.
00:37:28.375 [2024-09-29 16:45:28.685814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.375 [2024-09-29 16:45:28.685847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.375 qpair failed and we were unable to recover it.
00:37:28.375 [2024-09-29 16:45:28.686025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.375 [2024-09-29 16:45:28.686063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.375 qpair failed and we were unable to recover it.
00:37:28.375 [2024-09-29 16:45:28.686222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.375 [2024-09-29 16:45:28.686259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.375 qpair failed and we were unable to recover it.
00:37:28.375 [2024-09-29 16:45:28.686418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.375 [2024-09-29 16:45:28.686456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.375 qpair failed and we were unable to recover it.
00:37:28.375 [2024-09-29 16:45:28.686602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.375 [2024-09-29 16:45:28.686635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.375 qpair failed and we were unable to recover it.
00:37:28.375 [2024-09-29 16:45:28.686784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.375 [2024-09-29 16:45:28.686818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.375 qpair failed and we were unable to recover it.
00:37:28.375 [2024-09-29 16:45:28.686982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.375 [2024-09-29 16:45:28.687020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.375 qpair failed and we were unable to recover it.
00:37:28.375 [2024-09-29 16:45:28.687147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.375 [2024-09-29 16:45:28.687185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.375 qpair failed and we were unable to recover it.
00:37:28.375 [2024-09-29 16:45:28.687340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.375 [2024-09-29 16:45:28.687377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.375 qpair failed and we were unable to recover it.
00:37:28.375 [2024-09-29 16:45:28.687506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.375 [2024-09-29 16:45:28.687542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.375 qpair failed and we were unable to recover it.
00:37:28.375 [2024-09-29 16:45:28.687679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.375 [2024-09-29 16:45:28.687737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.375 qpair failed and we were unable to recover it.
00:37:28.375 [2024-09-29 16:45:28.687897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.375 [2024-09-29 16:45:28.687944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.375 qpair failed and we were unable to recover it.
00:37:28.375 [2024-09-29 16:45:28.688110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.375 [2024-09-29 16:45:28.688165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.375 qpair failed and we were unable to recover it.
00:37:28.375 [2024-09-29 16:45:28.688340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.375 [2024-09-29 16:45:28.688399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.375 qpair failed and we were unable to recover it.
00:37:28.375 [2024-09-29 16:45:28.688521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.375 [2024-09-29 16:45:28.688555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.375 qpair failed and we were unable to recover it.
00:37:28.375 [2024-09-29 16:45:28.688705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.375 [2024-09-29 16:45:28.688740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.375 qpair failed and we were unable to recover it.
00:37:28.375 [2024-09-29 16:45:28.688882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.375 [2024-09-29 16:45:28.688916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.375 qpair failed and we were unable to recover it.
00:37:28.375 [2024-09-29 16:45:28.689064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.376 [2024-09-29 16:45:28.689098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.376 qpair failed and we were unable to recover it.
00:37:28.376 [2024-09-29 16:45:28.689206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.376 [2024-09-29 16:45:28.689240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.376 qpair failed and we were unable to recover it.
00:37:28.376 [2024-09-29 16:45:28.689388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.376 [2024-09-29 16:45:28.689429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.376 qpair failed and we were unable to recover it.
00:37:28.376 [2024-09-29 16:45:28.689579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.376 [2024-09-29 16:45:28.689627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.376 qpair failed and we were unable to recover it.
00:37:28.376 [2024-09-29 16:45:28.689785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.376 [2024-09-29 16:45:28.689821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.376 qpair failed and we were unable to recover it.
00:37:28.376 [2024-09-29 16:45:28.689969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.376 [2024-09-29 16:45:28.690004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.376 qpair failed and we were unable to recover it.
00:37:28.376 [2024-09-29 16:45:28.690127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.376 [2024-09-29 16:45:28.690160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.376 qpair failed and we were unable to recover it.
00:37:28.376 [2024-09-29 16:45:28.690291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.376 [2024-09-29 16:45:28.690324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.376 qpair failed and we were unable to recover it.
00:37:28.376 [2024-09-29 16:45:28.690481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.376 [2024-09-29 16:45:28.690534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.376 qpair failed and we were unable to recover it.
00:37:28.376 [2024-09-29 16:45:28.690726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.376 [2024-09-29 16:45:28.690761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.376 qpair failed and we were unable to recover it.
00:37:28.376 [2024-09-29 16:45:28.690894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.376 [2024-09-29 16:45:28.690932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.376 qpair failed and we were unable to recover it.
00:37:28.376 [2024-09-29 16:45:28.691093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.376 [2024-09-29 16:45:28.691145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.376 qpair failed and we were unable to recover it.
00:37:28.376 [2024-09-29 16:45:28.691394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.376 [2024-09-29 16:45:28.691442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.376 qpair failed and we were unable to recover it.
00:37:28.376 [2024-09-29 16:45:28.691598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.376 [2024-09-29 16:45:28.691634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.376 qpair failed and we were unable to recover it.
00:37:28.376 [2024-09-29 16:45:28.691793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.376 [2024-09-29 16:45:28.691828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.376 qpair failed and we were unable to recover it.
00:37:28.376 [2024-09-29 16:45:28.691983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.376 [2024-09-29 16:45:28.692030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.376 qpair failed and we were unable to recover it.
00:37:28.376 [2024-09-29 16:45:28.692213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.376 [2024-09-29 16:45:28.692251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.376 qpair failed and we were unable to recover it.
00:37:28.376 [2024-09-29 16:45:28.692402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.376 [2024-09-29 16:45:28.692455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.376 qpair failed and we were unable to recover it.
00:37:28.376 [2024-09-29 16:45:28.692601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.376 [2024-09-29 16:45:28.692636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.376 qpair failed and we were unable to recover it.
00:37:28.376 [2024-09-29 16:45:28.692818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.376 [2024-09-29 16:45:28.692852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.376 qpair failed and we were unable to recover it.
00:37:28.376 [2024-09-29 16:45:28.693016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.376 [2024-09-29 16:45:28.693055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.376 qpair failed and we were unable to recover it.
00:37:28.376 [2024-09-29 16:45:28.693312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.376 [2024-09-29 16:45:28.693372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.376 qpair failed and we were unable to recover it.
00:37:28.376 [2024-09-29 16:45:28.693524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.376 [2024-09-29 16:45:28.693562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.376 qpair failed and we were unable to recover it.
00:37:28.376 [2024-09-29 16:45:28.693691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.376 [2024-09-29 16:45:28.693743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.376 qpair failed and we were unable to recover it.
00:37:28.376 [2024-09-29 16:45:28.693887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.376 [2024-09-29 16:45:28.693936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.376 qpair failed and we were unable to recover it.
00:37:28.376 [2024-09-29 16:45:28.694138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.376 [2024-09-29 16:45:28.694192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.376 qpair failed and we were unable to recover it.
00:37:28.376 [2024-09-29 16:45:28.694388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.376 [2024-09-29 16:45:28.694454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.376 qpair failed and we were unable to recover it.
00:37:28.376 [2024-09-29 16:45:28.694608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.376 [2024-09-29 16:45:28.694646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.376 qpair failed and we were unable to recover it.
00:37:28.376 [2024-09-29 16:45:28.694824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.376 [2024-09-29 16:45:28.694860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.376 qpair failed and we were unable to recover it.
00:37:28.376 [2024-09-29 16:45:28.695048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.376 [2024-09-29 16:45:28.695101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.376 qpair failed and we were unable to recover it.
00:37:28.376 [2024-09-29 16:45:28.695243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.376 [2024-09-29 16:45:28.695283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.376 qpair failed and we were unable to recover it.
00:37:28.376 [2024-09-29 16:45:28.695426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.376 [2024-09-29 16:45:28.695478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.376 qpair failed and we were unable to recover it.
00:37:28.376 [2024-09-29 16:45:28.695648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.376 [2024-09-29 16:45:28.695698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.376 qpair failed and we were unable to recover it.
00:37:28.376 [2024-09-29 16:45:28.695863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.376 [2024-09-29 16:45:28.695916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.376 qpair failed and we were unable to recover it.
00:37:28.376 [2024-09-29 16:45:28.696090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.376 [2024-09-29 16:45:28.696130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.376 qpair failed and we were unable to recover it.
00:37:28.376 [2024-09-29 16:45:28.696281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.376 [2024-09-29 16:45:28.696319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.376 qpair failed and we were unable to recover it.
00:37:28.376 [2024-09-29 16:45:28.696521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.376 [2024-09-29 16:45:28.696559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.376 qpair failed and we were unable to recover it.
00:37:28.376 [2024-09-29 16:45:28.696724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.376 [2024-09-29 16:45:28.696770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.376 qpair failed and we were unable to recover it.
00:37:28.376 [2024-09-29 16:45:28.696919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.376 [2024-09-29 16:45:28.696972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.376 qpair failed and we were unable to recover it.
00:37:28.377 [2024-09-29 16:45:28.697125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.377 [2024-09-29 16:45:28.697163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.377 qpair failed and we were unable to recover it.
00:37:28.377 [2024-09-29 16:45:28.697323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.377 [2024-09-29 16:45:28.697361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.377 qpair failed and we were unable to recover it.
00:37:28.377 [2024-09-29 16:45:28.697561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.377 [2024-09-29 16:45:28.697595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.377 qpair failed and we were unable to recover it.
00:37:28.377 [2024-09-29 16:45:28.697749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.377 [2024-09-29 16:45:28.697786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.377 qpair failed and we were unable to recover it.
00:37:28.377 [2024-09-29 16:45:28.697945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.377 [2024-09-29 16:45:28.697993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.377 qpair failed and we were unable to recover it.
00:37:28.377 [2024-09-29 16:45:28.698147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.377 [2024-09-29 16:45:28.698206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.377 qpair failed and we were unable to recover it.
00:37:28.377 [2024-09-29 16:45:28.698409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.377 [2024-09-29 16:45:28.698444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.377 qpair failed and we were unable to recover it.
00:37:28.377 [2024-09-29 16:45:28.698588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.377 [2024-09-29 16:45:28.698623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.377 qpair failed and we were unable to recover it.
00:37:28.377 [2024-09-29 16:45:28.698796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.377 [2024-09-29 16:45:28.698843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.377 qpair failed and we were unable to recover it.
00:37:28.377 [2024-09-29 16:45:28.698961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.377 [2024-09-29 16:45:28.698996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.377 qpair failed and we were unable to recover it.
00:37:28.377 [2024-09-29 16:45:28.699150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.377 [2024-09-29 16:45:28.699199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.377 qpair failed and we were unable to recover it.
00:37:28.377 [2024-09-29 16:45:28.699319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.377 [2024-09-29 16:45:28.699354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.377 qpair failed and we were unable to recover it.
00:37:28.377 [2024-09-29 16:45:28.699499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.377 [2024-09-29 16:45:28.699533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.377 qpair failed and we were unable to recover it.
00:37:28.377 [2024-09-29 16:45:28.699657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.377 [2024-09-29 16:45:28.699699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.377 qpair failed and we were unable to recover it.
00:37:28.377 [2024-09-29 16:45:28.699850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.377 [2024-09-29 16:45:28.699886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.377 qpair failed and we were unable to recover it.
00:37:28.377 [2024-09-29 16:45:28.700018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.377 [2024-09-29 16:45:28.700071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.377 qpair failed and we were unable to recover it.
00:37:28.377 [2024-09-29 16:45:28.700248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.377 [2024-09-29 16:45:28.700300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.377 qpair failed and we were unable to recover it. 00:37:28.377 [2024-09-29 16:45:28.700447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.377 [2024-09-29 16:45:28.700515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.377 qpair failed and we were unable to recover it. 00:37:28.377 [2024-09-29 16:45:28.700656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.377 [2024-09-29 16:45:28.700718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.377 qpair failed and we were unable to recover it. 00:37:28.377 [2024-09-29 16:45:28.700837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.377 [2024-09-29 16:45:28.700871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.377 qpair failed and we were unable to recover it. 00:37:28.377 [2024-09-29 16:45:28.701024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.377 [2024-09-29 16:45:28.701059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.377 qpair failed and we were unable to recover it. 
00:37:28.377 [2024-09-29 16:45:28.701198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.377 [2024-09-29 16:45:28.701232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.377 qpair failed and we were unable to recover it. 00:37:28.377 [2024-09-29 16:45:28.701380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.377 [2024-09-29 16:45:28.701414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.377 qpair failed and we were unable to recover it. 00:37:28.377 [2024-09-29 16:45:28.701571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.377 [2024-09-29 16:45:28.701607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.377 qpair failed and we were unable to recover it. 00:37:28.377 [2024-09-29 16:45:28.701780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.377 [2024-09-29 16:45:28.701819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.377 qpair failed and we were unable to recover it. 00:37:28.377 [2024-09-29 16:45:28.702029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.377 [2024-09-29 16:45:28.702082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.377 qpair failed and we were unable to recover it. 
00:37:28.377 [2024-09-29 16:45:28.702263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.377 [2024-09-29 16:45:28.702321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.377 qpair failed and we were unable to recover it. 00:37:28.377 [2024-09-29 16:45:28.702469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.377 [2024-09-29 16:45:28.702503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.377 qpair failed and we were unable to recover it. 00:37:28.377 [2024-09-29 16:45:28.702681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.377 [2024-09-29 16:45:28.702716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.377 qpair failed and we were unable to recover it. 00:37:28.377 [2024-09-29 16:45:28.702858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.377 [2024-09-29 16:45:28.702892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.377 qpair failed and we were unable to recover it. 00:37:28.377 [2024-09-29 16:45:28.703186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.377 [2024-09-29 16:45:28.703257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.377 qpair failed and we were unable to recover it. 
00:37:28.377 [2024-09-29 16:45:28.703493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.377 [2024-09-29 16:45:28.703555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.377 qpair failed and we were unable to recover it. 00:37:28.377 [2024-09-29 16:45:28.703695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.377 [2024-09-29 16:45:28.703748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.377 qpair failed and we were unable to recover it. 00:37:28.377 [2024-09-29 16:45:28.703921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.377 [2024-09-29 16:45:28.703955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.377 qpair failed and we were unable to recover it. 00:37:28.377 [2024-09-29 16:45:28.704095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.377 [2024-09-29 16:45:28.704136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.377 qpair failed and we were unable to recover it. 00:37:28.377 [2024-09-29 16:45:28.704283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.377 [2024-09-29 16:45:28.704337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.377 qpair failed and we were unable to recover it. 
00:37:28.377 [2024-09-29 16:45:28.704494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.377 [2024-09-29 16:45:28.704532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.377 qpair failed and we were unable to recover it. 00:37:28.377 [2024-09-29 16:45:28.704694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.377 [2024-09-29 16:45:28.704748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.377 qpair failed and we were unable to recover it. 00:37:28.377 [2024-09-29 16:45:28.704895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.378 [2024-09-29 16:45:28.704930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.378 qpair failed and we were unable to recover it. 00:37:28.378 [2024-09-29 16:45:28.705071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.378 [2024-09-29 16:45:28.705109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.378 qpair failed and we were unable to recover it. 00:37:28.378 [2024-09-29 16:45:28.705265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.378 [2024-09-29 16:45:28.705303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.378 qpair failed and we were unable to recover it. 
00:37:28.378 [2024-09-29 16:45:28.705462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.378 [2024-09-29 16:45:28.705501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.378 qpair failed and we were unable to recover it. 00:37:28.378 [2024-09-29 16:45:28.705643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.378 [2024-09-29 16:45:28.705685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.378 qpair failed and we were unable to recover it. 00:37:28.378 [2024-09-29 16:45:28.705798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.378 [2024-09-29 16:45:28.705832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.378 qpair failed and we were unable to recover it. 00:37:28.378 [2024-09-29 16:45:28.705986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.378 [2024-09-29 16:45:28.706039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.378 qpair failed and we were unable to recover it. 00:37:28.378 [2024-09-29 16:45:28.706255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.378 [2024-09-29 16:45:28.706296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.378 qpair failed and we were unable to recover it. 
00:37:28.378 [2024-09-29 16:45:28.706507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.378 [2024-09-29 16:45:28.706576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.378 qpair failed and we were unable to recover it. 00:37:28.378 [2024-09-29 16:45:28.706736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.378 [2024-09-29 16:45:28.706771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.378 qpair failed and we were unable to recover it. 00:37:28.378 [2024-09-29 16:45:28.706920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.378 [2024-09-29 16:45:28.706975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.378 qpair failed and we were unable to recover it. 00:37:28.378 [2024-09-29 16:45:28.707150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.378 [2024-09-29 16:45:28.707184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.378 qpair failed and we were unable to recover it. 00:37:28.378 [2024-09-29 16:45:28.707352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.378 [2024-09-29 16:45:28.707388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.378 qpair failed and we were unable to recover it. 
00:37:28.378 [2024-09-29 16:45:28.707574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.378 [2024-09-29 16:45:28.707612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.378 qpair failed and we were unable to recover it. 00:37:28.378 [2024-09-29 16:45:28.707795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.378 [2024-09-29 16:45:28.707829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.378 qpair failed and we were unable to recover it. 00:37:28.378 [2024-09-29 16:45:28.708004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.378 [2024-09-29 16:45:28.708041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.378 qpair failed and we were unable to recover it. 00:37:28.378 [2024-09-29 16:45:28.708202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.378 [2024-09-29 16:45:28.708239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.378 qpair failed and we were unable to recover it. 00:37:28.378 [2024-09-29 16:45:28.708378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.378 [2024-09-29 16:45:28.708416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.378 qpair failed and we were unable to recover it. 
00:37:28.378 [2024-09-29 16:45:28.708571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.378 [2024-09-29 16:45:28.708618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.378 qpair failed and we were unable to recover it. 00:37:28.378 [2024-09-29 16:45:28.708762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.378 [2024-09-29 16:45:28.708800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.378 qpair failed and we were unable to recover it. 00:37:28.378 [2024-09-29 16:45:28.708937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.378 [2024-09-29 16:45:28.708995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.378 qpair failed and we were unable to recover it. 00:37:28.378 [2024-09-29 16:45:28.709200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.378 [2024-09-29 16:45:28.709252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.378 qpair failed and we were unable to recover it. 00:37:28.378 [2024-09-29 16:45:28.709392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.378 [2024-09-29 16:45:28.709446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.378 qpair failed and we were unable to recover it. 
00:37:28.378 [2024-09-29 16:45:28.709591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.378 [2024-09-29 16:45:28.709626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.378 qpair failed and we were unable to recover it. 00:37:28.378 [2024-09-29 16:45:28.709743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.378 [2024-09-29 16:45:28.709778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.378 qpair failed and we were unable to recover it. 00:37:28.378 [2024-09-29 16:45:28.709887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.378 [2024-09-29 16:45:28.709921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.378 qpair failed and we were unable to recover it. 00:37:28.378 [2024-09-29 16:45:28.710033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.378 [2024-09-29 16:45:28.710067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.378 qpair failed and we were unable to recover it. 00:37:28.378 [2024-09-29 16:45:28.710213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.378 [2024-09-29 16:45:28.710250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.378 qpair failed and we were unable to recover it. 
00:37:28.378 [2024-09-29 16:45:28.710472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.378 [2024-09-29 16:45:28.710511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.378 qpair failed and we were unable to recover it. 00:37:28.378 [2024-09-29 16:45:28.710637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.378 [2024-09-29 16:45:28.710682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.378 qpair failed and we were unable to recover it. 00:37:28.378 [2024-09-29 16:45:28.710871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.378 [2024-09-29 16:45:28.710924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.378 qpair failed and we were unable to recover it. 00:37:28.378 [2024-09-29 16:45:28.711063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.378 [2024-09-29 16:45:28.711101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.378 qpair failed and we were unable to recover it. 00:37:28.378 [2024-09-29 16:45:28.711262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.378 [2024-09-29 16:45:28.711300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.378 qpair failed and we were unable to recover it. 
00:37:28.378 [2024-09-29 16:45:28.711435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.378 [2024-09-29 16:45:28.711467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.378 qpair failed and we were unable to recover it. 00:37:28.378 [2024-09-29 16:45:28.711577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.379 [2024-09-29 16:45:28.711611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.379 qpair failed and we were unable to recover it. 00:37:28.379 [2024-09-29 16:45:28.711752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.379 [2024-09-29 16:45:28.711800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.379 qpair failed and we were unable to recover it. 00:37:28.379 [2024-09-29 16:45:28.711955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.379 [2024-09-29 16:45:28.711995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.379 qpair failed and we were unable to recover it. 00:37:28.379 [2024-09-29 16:45:28.712117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.379 [2024-09-29 16:45:28.712151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.379 qpair failed and we were unable to recover it. 
00:37:28.379 [2024-09-29 16:45:28.712262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.379 [2024-09-29 16:45:28.712296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.379 qpair failed and we were unable to recover it. 00:37:28.379 [2024-09-29 16:45:28.712414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.379 [2024-09-29 16:45:28.712448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.379 qpair failed and we were unable to recover it. 00:37:28.379 [2024-09-29 16:45:28.712616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.379 [2024-09-29 16:45:28.712664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.379 qpair failed and we were unable to recover it. 00:37:28.379 [2024-09-29 16:45:28.712849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.379 [2024-09-29 16:45:28.712904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.379 qpair failed and we were unable to recover it. 00:37:28.379 [2024-09-29 16:45:28.713103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.379 [2024-09-29 16:45:28.713156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.379 qpair failed and we were unable to recover it. 
00:37:28.379 [2024-09-29 16:45:28.713291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.379 [2024-09-29 16:45:28.713342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.379 qpair failed and we were unable to recover it. 00:37:28.379 [2024-09-29 16:45:28.713463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.379 [2024-09-29 16:45:28.713496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.379 qpair failed and we were unable to recover it. 00:37:28.379 [2024-09-29 16:45:28.713644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.379 [2024-09-29 16:45:28.713686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.379 qpair failed and we were unable to recover it. 00:37:28.379 [2024-09-29 16:45:28.713798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.379 [2024-09-29 16:45:28.713832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.379 qpair failed and we were unable to recover it. 00:37:28.379 [2024-09-29 16:45:28.713964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.379 [2024-09-29 16:45:28.713998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.379 qpair failed and we were unable to recover it. 
00:37:28.379 [2024-09-29 16:45:28.714113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.379 [2024-09-29 16:45:28.714148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.379 qpair failed and we were unable to recover it. 00:37:28.379 [2024-09-29 16:45:28.714302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.379 [2024-09-29 16:45:28.714335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.379 qpair failed and we were unable to recover it. 00:37:28.379 [2024-09-29 16:45:28.714497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.379 [2024-09-29 16:45:28.714546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.379 qpair failed and we were unable to recover it. 00:37:28.379 [2024-09-29 16:45:28.714659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.379 [2024-09-29 16:45:28.714702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.379 qpair failed and we were unable to recover it. 00:37:28.379 [2024-09-29 16:45:28.714829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.379 [2024-09-29 16:45:28.714864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.379 qpair failed and we were unable to recover it. 
00:37:28.379 [2024-09-29 16:45:28.714985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.379 [2024-09-29 16:45:28.715020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.379 qpair failed and we were unable to recover it. 00:37:28.379 [2024-09-29 16:45:28.715193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.379 [2024-09-29 16:45:28.715231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.379 qpair failed and we were unable to recover it. 00:37:28.379 [2024-09-29 16:45:28.715354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.379 [2024-09-29 16:45:28.715391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.379 qpair failed and we were unable to recover it. 00:37:28.379 [2024-09-29 16:45:28.715549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.379 [2024-09-29 16:45:28.715582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.379 qpair failed and we were unable to recover it. 00:37:28.379 [2024-09-29 16:45:28.715758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.379 [2024-09-29 16:45:28.715793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.379 qpair failed and we were unable to recover it. 
00:37:28.379 [2024-09-29 16:45:28.715937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.379 [2024-09-29 16:45:28.715989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.379 qpair failed and we were unable to recover it. 00:37:28.379 [2024-09-29 16:45:28.716194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.379 [2024-09-29 16:45:28.716247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.379 qpair failed and we were unable to recover it. 00:37:28.379 [2024-09-29 16:45:28.716371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.379 [2024-09-29 16:45:28.716423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.379 qpair failed and we were unable to recover it. 00:37:28.379 [2024-09-29 16:45:28.716591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.379 [2024-09-29 16:45:28.716626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.379 qpair failed and we were unable to recover it. 00:37:28.379 [2024-09-29 16:45:28.716770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.379 [2024-09-29 16:45:28.716823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.379 qpair failed and we were unable to recover it. 
00:37:28.379 [2024-09-29 16:45:28.716995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.379 [2024-09-29 16:45:28.717061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.379 qpair failed and we were unable to recover it. 00:37:28.379 [2024-09-29 16:45:28.717211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.379 [2024-09-29 16:45:28.717250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.379 qpair failed and we were unable to recover it. 00:37:28.379 [2024-09-29 16:45:28.717430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.379 [2024-09-29 16:45:28.717490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.379 qpair failed and we were unable to recover it. 00:37:28.379 [2024-09-29 16:45:28.717631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.379 [2024-09-29 16:45:28.717667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.379 qpair failed and we were unable to recover it. 00:37:28.379 [2024-09-29 16:45:28.717832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.379 [2024-09-29 16:45:28.717880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.379 qpair failed and we were unable to recover it. 
00:37:28.379 [2024-09-29 16:45:28.718059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.379 [2024-09-29 16:45:28.718100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.379 qpair failed and we were unable to recover it. 00:37:28.379 [2024-09-29 16:45:28.718354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.379 [2024-09-29 16:45:28.718417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.379 qpair failed and we were unable to recover it. 00:37:28.379 [2024-09-29 16:45:28.718561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.379 [2024-09-29 16:45:28.718596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.379 qpair failed and we were unable to recover it. 00:37:28.379 [2024-09-29 16:45:28.718734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.379 [2024-09-29 16:45:28.718789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.379 qpair failed and we were unable to recover it. 00:37:28.379 [2024-09-29 16:45:28.718929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.380 [2024-09-29 16:45:28.718981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.380 qpair failed and we were unable to recover it. 
00:37:28.380 [2024-09-29 16:45:28.719104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.380 [2024-09-29 16:45:28.719137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.380 qpair failed and we were unable to recover it. 00:37:28.380 [2024-09-29 16:45:28.719251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.380 [2024-09-29 16:45:28.719285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.380 qpair failed and we were unable to recover it. 00:37:28.380 [2024-09-29 16:45:28.719437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.380 [2024-09-29 16:45:28.719471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.380 qpair failed and we were unable to recover it. 00:37:28.380 [2024-09-29 16:45:28.719589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.380 [2024-09-29 16:45:28.719623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.380 qpair failed and we were unable to recover it. 00:37:28.380 [2024-09-29 16:45:28.719799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.380 [2024-09-29 16:45:28.719847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.380 qpair failed and we were unable to recover it. 
00:37:28.380 [2024-09-29 16:45:28.720037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.380 [2024-09-29 16:45:28.720084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.380 qpair failed and we were unable to recover it. 00:37:28.380 [2024-09-29 16:45:28.720204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.380 [2024-09-29 16:45:28.720240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.380 qpair failed and we were unable to recover it. 00:37:28.380 [2024-09-29 16:45:28.720347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.380 [2024-09-29 16:45:28.720381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.380 qpair failed and we were unable to recover it. 00:37:28.380 [2024-09-29 16:45:28.720524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.380 [2024-09-29 16:45:28.720559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.380 qpair failed and we were unable to recover it. 00:37:28.380 [2024-09-29 16:45:28.720704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.380 [2024-09-29 16:45:28.720740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.380 qpair failed and we were unable to recover it. 
00:37:28.380 [2024-09-29 16:45:28.720878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.380 [2024-09-29 16:45:28.720916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.380 qpair failed and we were unable to recover it. 00:37:28.380 [2024-09-29 16:45:28.721122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.380 [2024-09-29 16:45:28.721192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.380 qpair failed and we were unable to recover it. 00:37:28.380 [2024-09-29 16:45:28.721405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.380 [2024-09-29 16:45:28.721447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.380 qpair failed and we were unable to recover it. 00:37:28.380 [2024-09-29 16:45:28.721588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.380 [2024-09-29 16:45:28.721623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.380 qpair failed and we were unable to recover it. 00:37:28.380 [2024-09-29 16:45:28.721767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.380 [2024-09-29 16:45:28.721803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.380 qpair failed and we were unable to recover it. 
00:37:28.380 [2024-09-29 16:45:28.721953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.380 [2024-09-29 16:45:28.721988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.380 qpair failed and we were unable to recover it. 00:37:28.380 [2024-09-29 16:45:28.722152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.380 [2024-09-29 16:45:28.722190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.380 qpair failed and we were unable to recover it. 00:37:28.380 [2024-09-29 16:45:28.722355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.380 [2024-09-29 16:45:28.722395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.380 qpair failed and we were unable to recover it. 00:37:28.380 [2024-09-29 16:45:28.722528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.380 [2024-09-29 16:45:28.722565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.380 qpair failed and we were unable to recover it. 00:37:28.380 [2024-09-29 16:45:28.722740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.380 [2024-09-29 16:45:28.722779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.380 qpair failed and we were unable to recover it. 
00:37:28.380 [2024-09-29 16:45:28.722946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.380 [2024-09-29 16:45:28.723001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.380 qpair failed and we were unable to recover it. 00:37:28.380 [2024-09-29 16:45:28.723166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.380 [2024-09-29 16:45:28.723206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.380 qpair failed and we were unable to recover it. 00:37:28.380 [2024-09-29 16:45:28.723369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.380 [2024-09-29 16:45:28.723423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.380 qpair failed and we were unable to recover it. 00:37:28.380 [2024-09-29 16:45:28.723602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.380 [2024-09-29 16:45:28.723637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.380 qpair failed and we were unable to recover it. 00:37:28.380 [2024-09-29 16:45:28.723813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.380 [2024-09-29 16:45:28.723860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.380 qpair failed and we were unable to recover it. 
00:37:28.380 [2024-09-29 16:45:28.724013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.380 [2024-09-29 16:45:28.724053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.380 qpair failed and we were unable to recover it. 00:37:28.380 [2024-09-29 16:45:28.724221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.380 [2024-09-29 16:45:28.724293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.380 qpair failed and we were unable to recover it. 00:37:28.380 [2024-09-29 16:45:28.724473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.380 [2024-09-29 16:45:28.724532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.380 qpair failed and we were unable to recover it. 00:37:28.380 [2024-09-29 16:45:28.724717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.380 [2024-09-29 16:45:28.724764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.380 qpair failed and we were unable to recover it. 00:37:28.380 [2024-09-29 16:45:28.724906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.380 [2024-09-29 16:45:28.724954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.380 qpair failed and we were unable to recover it. 
00:37:28.380 [2024-09-29 16:45:28.725156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.380 [2024-09-29 16:45:28.725228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.380 qpair failed and we were unable to recover it. 00:37:28.380 [2024-09-29 16:45:28.725368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.380 [2024-09-29 16:45:28.725437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.380 qpair failed and we were unable to recover it. 00:37:28.380 [2024-09-29 16:45:28.725565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.380 [2024-09-29 16:45:28.725600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.380 qpair failed and we were unable to recover it. 00:37:28.380 [2024-09-29 16:45:28.725773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.380 [2024-09-29 16:45:28.725821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.380 qpair failed and we were unable to recover it. 00:37:28.380 [2024-09-29 16:45:28.725944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.380 [2024-09-29 16:45:28.725980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.380 qpair failed and we were unable to recover it. 
00:37:28.380 [2024-09-29 16:45:28.726109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.380 [2024-09-29 16:45:28.726145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.380 qpair failed and we were unable to recover it. 00:37:28.380 [2024-09-29 16:45:28.726359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.380 [2024-09-29 16:45:28.726397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.380 qpair failed and we were unable to recover it. 00:37:28.380 [2024-09-29 16:45:28.726558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.380 [2024-09-29 16:45:28.726595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.380 qpair failed and we were unable to recover it. 00:37:28.381 [2024-09-29 16:45:28.726764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.381 [2024-09-29 16:45:28.726807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.381 qpair failed and we were unable to recover it. 00:37:28.381 [2024-09-29 16:45:28.726973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.381 [2024-09-29 16:45:28.727028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.381 qpair failed and we were unable to recover it. 
00:37:28.381 [2024-09-29 16:45:28.727278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.381 [2024-09-29 16:45:28.727334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.381 qpair failed and we were unable to recover it. 00:37:28.381 [2024-09-29 16:45:28.727472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.381 [2024-09-29 16:45:28.727506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.381 qpair failed and we were unable to recover it. 00:37:28.381 [2024-09-29 16:45:28.727646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.381 [2024-09-29 16:45:28.727710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.381 qpair failed and we were unable to recover it. 00:37:28.381 [2024-09-29 16:45:28.727864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.381 [2024-09-29 16:45:28.727918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.381 qpair failed and we were unable to recover it. 00:37:28.381 [2024-09-29 16:45:28.728084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.381 [2024-09-29 16:45:28.728138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.381 qpair failed and we were unable to recover it. 
00:37:28.381 [2024-09-29 16:45:28.728294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.381 [2024-09-29 16:45:28.728382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.381 qpair failed and we were unable to recover it. 00:37:28.381 [2024-09-29 16:45:28.728570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.381 [2024-09-29 16:45:28.728603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.381 qpair failed and we were unable to recover it. 00:37:28.381 [2024-09-29 16:45:28.728769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.381 [2024-09-29 16:45:28.728824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.381 qpair failed and we were unable to recover it. 00:37:28.381 [2024-09-29 16:45:28.728981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.381 [2024-09-29 16:45:28.729034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.381 qpair failed and we were unable to recover it. 00:37:28.381 [2024-09-29 16:45:28.729178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.381 [2024-09-29 16:45:28.729235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.381 qpair failed and we were unable to recover it. 
00:37:28.381 [2024-09-29 16:45:28.729358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.381 [2024-09-29 16:45:28.729393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.381 qpair failed and we were unable to recover it. 00:37:28.381 [2024-09-29 16:45:28.729533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.381 [2024-09-29 16:45:28.729599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.381 qpair failed and we were unable to recover it. 00:37:28.381 [2024-09-29 16:45:28.729736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.381 [2024-09-29 16:45:28.729770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.381 qpair failed and we were unable to recover it. 00:37:28.381 [2024-09-29 16:45:28.729927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.381 [2024-09-29 16:45:28.729974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.381 qpair failed and we were unable to recover it. 00:37:28.381 [2024-09-29 16:45:28.730145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.381 [2024-09-29 16:45:28.730180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.381 qpair failed and we were unable to recover it. 
00:37:28.381 [2024-09-29 16:45:28.730310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.381 [2024-09-29 16:45:28.730344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.381 qpair failed and we were unable to recover it. 00:37:28.381 [2024-09-29 16:45:28.730483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.381 [2024-09-29 16:45:28.730516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.381 qpair failed and we were unable to recover it. 00:37:28.381 [2024-09-29 16:45:28.730656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.381 [2024-09-29 16:45:28.730696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.381 qpair failed and we were unable to recover it. 00:37:28.381 [2024-09-29 16:45:28.730810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.381 [2024-09-29 16:45:28.730844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.381 qpair failed and we were unable to recover it. 00:37:28.381 [2024-09-29 16:45:28.730983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.381 [2024-09-29 16:45:28.731036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.381 qpair failed and we were unable to recover it. 
00:37:28.381 [2024-09-29 16:45:28.731209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.381 [2024-09-29 16:45:28.731264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.381 qpair failed and we were unable to recover it. 00:37:28.381 [2024-09-29 16:45:28.731395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.381 [2024-09-29 16:45:28.731451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.381 qpair failed and we were unable to recover it. 00:37:28.381 [2024-09-29 16:45:28.731564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.381 [2024-09-29 16:45:28.731597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.381 qpair failed and we were unable to recover it. 00:37:28.381 [2024-09-29 16:45:28.731777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.381 [2024-09-29 16:45:28.731831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.381 qpair failed and we were unable to recover it. 00:37:28.381 [2024-09-29 16:45:28.731979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.381 [2024-09-29 16:45:28.732032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.381 qpair failed and we were unable to recover it. 
00:37:28.381 [2024-09-29 16:45:28.732205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.381 [2024-09-29 16:45:28.732269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.381 qpair failed and we were unable to recover it. 00:37:28.381 [2024-09-29 16:45:28.732518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.381 [2024-09-29 16:45:28.732577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.381 qpair failed and we were unable to recover it. 00:37:28.381 [2024-09-29 16:45:28.732705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.381 [2024-09-29 16:45:28.732740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.381 qpair failed and we were unable to recover it. 00:37:28.381 [2024-09-29 16:45:28.732876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.381 [2024-09-29 16:45:28.732929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.381 qpair failed and we were unable to recover it. 00:37:28.381 [2024-09-29 16:45:28.733047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.381 [2024-09-29 16:45:28.733080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.381 qpair failed and we were unable to recover it. 
00:37:28.381 [2024-09-29 16:45:28.733218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.381 [2024-09-29 16:45:28.733257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.381 qpair failed and we were unable to recover it. 00:37:28.381 [2024-09-29 16:45:28.733404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.381 [2024-09-29 16:45:28.733439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.381 qpair failed and we were unable to recover it. 00:37:28.381 [2024-09-29 16:45:28.733579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.381 [2024-09-29 16:45:28.733612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.381 qpair failed and we were unable to recover it. 00:37:28.381 [2024-09-29 16:45:28.733786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.381 [2024-09-29 16:45:28.733833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.381 qpair failed and we were unable to recover it. 00:37:28.381 [2024-09-29 16:45:28.734003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.381 [2024-09-29 16:45:28.734051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.381 qpair failed and we were unable to recover it. 
00:37:28.381 [2024-09-29 16:45:28.734236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.381 [2024-09-29 16:45:28.734273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.381 qpair failed and we were unable to recover it. 00:37:28.381 [2024-09-29 16:45:28.734396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.382 [2024-09-29 16:45:28.734431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.382 qpair failed and we were unable to recover it. 00:37:28.382 [2024-09-29 16:45:28.734601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.382 [2024-09-29 16:45:28.734635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.382 qpair failed and we were unable to recover it. 00:37:28.382 [2024-09-29 16:45:28.734774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.382 [2024-09-29 16:45:28.734810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.382 qpair failed and we were unable to recover it. 00:37:28.382 [2024-09-29 16:45:28.734979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.382 [2024-09-29 16:45:28.735034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.382 qpair failed and we were unable to recover it. 
00:37:28.382 [2024-09-29 16:45:28.735223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.382 [2024-09-29 16:45:28.735281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.382 qpair failed and we were unable to recover it. 00:37:28.382 [2024-09-29 16:45:28.735440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.382 [2024-09-29 16:45:28.735497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.382 qpair failed and we were unable to recover it. 00:37:28.382 [2024-09-29 16:45:28.735613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.382 [2024-09-29 16:45:28.735646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.382 qpair failed and we were unable to recover it. 00:37:28.382 [2024-09-29 16:45:28.735768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.382 [2024-09-29 16:45:28.735801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.382 qpair failed and we were unable to recover it. 00:37:28.382 [2024-09-29 16:45:28.735924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.382 [2024-09-29 16:45:28.735957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.382 qpair failed and we were unable to recover it. 
00:37:28.382 [2024-09-29 16:45:28.736102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.382 [2024-09-29 16:45:28.736135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.382 qpair failed and we were unable to recover it.
00:37:28.382 [2024-09-29 16:45:28.736261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.382 [2024-09-29 16:45:28.736295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.382 qpair failed and we were unable to recover it.
00:37:28.382 [2024-09-29 16:45:28.736430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.382 [2024-09-29 16:45:28.736464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.382 qpair failed and we were unable to recover it.
00:37:28.382 [2024-09-29 16:45:28.736630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.382 [2024-09-29 16:45:28.736663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.382 qpair failed and we were unable to recover it.
00:37:28.382 [2024-09-29 16:45:28.736805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.382 [2024-09-29 16:45:28.736853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.382 qpair failed and we were unable to recover it.
00:37:28.382 [2024-09-29 16:45:28.737004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.382 [2024-09-29 16:45:28.737040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.382 qpair failed and we were unable to recover it.
00:37:28.382 [2024-09-29 16:45:28.737158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.382 [2024-09-29 16:45:28.737192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.382 qpair failed and we were unable to recover it.
00:37:28.382 [2024-09-29 16:45:28.737311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.382 [2024-09-29 16:45:28.737345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.382 qpair failed and we were unable to recover it.
00:37:28.382 [2024-09-29 16:45:28.737482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.382 [2024-09-29 16:45:28.737530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.382 qpair failed and we were unable to recover it.
00:37:28.382 [2024-09-29 16:45:28.737666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.382 [2024-09-29 16:45:28.737720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.382 qpair failed and we were unable to recover it.
00:37:28.382 [2024-09-29 16:45:28.737869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.382 [2024-09-29 16:45:28.737926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.382 qpair failed and we were unable to recover it.
00:37:28.382 [2024-09-29 16:45:28.738091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.382 [2024-09-29 16:45:28.738144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.382 qpair failed and we were unable to recover it.
00:37:28.382 [2024-09-29 16:45:28.738337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.382 [2024-09-29 16:45:28.738391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.382 qpair failed and we were unable to recover it.
00:37:28.382 [2024-09-29 16:45:28.738562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.382 [2024-09-29 16:45:28.738598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.382 qpair failed and we were unable to recover it.
00:37:28.382 [2024-09-29 16:45:28.738803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.382 [2024-09-29 16:45:28.738859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.382 qpair failed and we were unable to recover it.
00:37:28.382 [2024-09-29 16:45:28.739019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.382 [2024-09-29 16:45:28.739078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.382 qpair failed and we were unable to recover it.
00:37:28.382 [2024-09-29 16:45:28.739212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.382 [2024-09-29 16:45:28.739249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.382 qpair failed and we were unable to recover it.
00:37:28.382 [2024-09-29 16:45:28.739381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.382 [2024-09-29 16:45:28.739414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.382 qpair failed and we were unable to recover it.
00:37:28.382 [2024-09-29 16:45:28.739532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.382 [2024-09-29 16:45:28.739565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.382 qpair failed and we were unable to recover it.
00:37:28.382 [2024-09-29 16:45:28.739705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.382 [2024-09-29 16:45:28.739754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.382 qpair failed and we were unable to recover it.
00:37:28.382 [2024-09-29 16:45:28.739945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.382 [2024-09-29 16:45:28.739992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.382 qpair failed and we were unable to recover it.
00:37:28.382 [2024-09-29 16:45:28.740116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.382 [2024-09-29 16:45:28.740154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.382 qpair failed and we were unable to recover it.
00:37:28.382 [2024-09-29 16:45:28.740313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.382 [2024-09-29 16:45:28.740348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.382 qpair failed and we were unable to recover it.
00:37:28.382 [2024-09-29 16:45:28.740519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.382 [2024-09-29 16:45:28.740554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.382 qpair failed and we were unable to recover it.
00:37:28.382 [2024-09-29 16:45:28.740727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.382 [2024-09-29 16:45:28.740767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.382 qpair failed and we were unable to recover it.
00:37:28.382 [2024-09-29 16:45:28.740923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.382 [2024-09-29 16:45:28.740984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.382 qpair failed and we were unable to recover it.
00:37:28.382 [2024-09-29 16:45:28.741130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.382 [2024-09-29 16:45:28.741184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.382 qpair failed and we were unable to recover it.
00:37:28.382 [2024-09-29 16:45:28.741295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.382 [2024-09-29 16:45:28.741329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.382 qpair failed and we were unable to recover it.
00:37:28.382 [2024-09-29 16:45:28.741472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.382 [2024-09-29 16:45:28.741505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.382 qpair failed and we were unable to recover it.
00:37:28.382 [2024-09-29 16:45:28.741615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.382 [2024-09-29 16:45:28.741648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.383 qpair failed and we were unable to recover it.
00:37:28.383 [2024-09-29 16:45:28.741805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.383 [2024-09-29 16:45:28.741838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.383 qpair failed and we were unable to recover it.
00:37:28.383 [2024-09-29 16:45:28.741949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.383 [2024-09-29 16:45:28.741982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.383 qpair failed and we were unable to recover it.
00:37:28.383 [2024-09-29 16:45:28.742101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.383 [2024-09-29 16:45:28.742135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.383 qpair failed and we were unable to recover it.
00:37:28.383 [2024-09-29 16:45:28.742244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.383 [2024-09-29 16:45:28.742278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.383 qpair failed and we were unable to recover it.
00:37:28.383 [2024-09-29 16:45:28.742387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.383 [2024-09-29 16:45:28.742420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.383 qpair failed and we were unable to recover it.
00:37:28.383 [2024-09-29 16:45:28.742572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.383 [2024-09-29 16:45:28.742606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.383 qpair failed and we were unable to recover it.
00:37:28.383 [2024-09-29 16:45:28.742750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.383 [2024-09-29 16:45:28.742798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.383 qpair failed and we were unable to recover it.
00:37:28.383 [2024-09-29 16:45:28.742921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.383 [2024-09-29 16:45:28.742957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.383 qpair failed and we were unable to recover it.
00:37:28.383 [2024-09-29 16:45:28.743104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.383 [2024-09-29 16:45:28.743137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.383 qpair failed and we were unable to recover it.
00:37:28.383 [2024-09-29 16:45:28.743294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.383 [2024-09-29 16:45:28.743328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.383 qpair failed and we were unable to recover it.
00:37:28.383 [2024-09-29 16:45:28.743447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.383 [2024-09-29 16:45:28.743480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.383 qpair failed and we were unable to recover it.
00:37:28.383 [2024-09-29 16:45:28.743639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.383 [2024-09-29 16:45:28.743707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.383 qpair failed and we were unable to recover it.
00:37:28.383 [2024-09-29 16:45:28.743836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.383 [2024-09-29 16:45:28.743872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.383 qpair failed and we were unable to recover it.
00:37:28.383 [2024-09-29 16:45:28.744007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.383 [2024-09-29 16:45:28.744044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.383 qpair failed and we were unable to recover it.
00:37:28.383 [2024-09-29 16:45:28.744251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.383 [2024-09-29 16:45:28.744309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.383 qpair failed and we were unable to recover it.
00:37:28.383 [2024-09-29 16:45:28.744420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.383 [2024-09-29 16:45:28.744454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.383 qpair failed and we were unable to recover it.
00:37:28.383 [2024-09-29 16:45:28.744579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.383 [2024-09-29 16:45:28.744627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.383 qpair failed and we were unable to recover it.
00:37:28.383 [2024-09-29 16:45:28.744831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.383 [2024-09-29 16:45:28.744866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.383 qpair failed and we were unable to recover it.
00:37:28.383 [2024-09-29 16:45:28.745037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.383 [2024-09-29 16:45:28.745083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.383 qpair failed and we were unable to recover it.
00:37:28.383 [2024-09-29 16:45:28.745257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.383 [2024-09-29 16:45:28.745317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.383 qpair failed and we were unable to recover it.
00:37:28.383 [2024-09-29 16:45:28.745511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.383 [2024-09-29 16:45:28.745571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.383 qpair failed and we were unable to recover it.
00:37:28.383 [2024-09-29 16:45:28.745713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.383 [2024-09-29 16:45:28.745766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.383 qpair failed and we were unable to recover it.
00:37:28.383 [2024-09-29 16:45:28.745937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.383 [2024-09-29 16:45:28.745975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.383 qpair failed and we were unable to recover it.
00:37:28.383 [2024-09-29 16:45:28.746162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.383 [2024-09-29 16:45:28.746220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.383 qpair failed and we were unable to recover it.
00:37:28.383 [2024-09-29 16:45:28.746464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.383 [2024-09-29 16:45:28.746523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.383 qpair failed and we were unable to recover it.
00:37:28.383 [2024-09-29 16:45:28.746700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.383 [2024-09-29 16:45:28.746736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.383 qpair failed and we were unable to recover it.
00:37:28.383 [2024-09-29 16:45:28.746909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.383 [2024-09-29 16:45:28.746957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.383 qpair failed and we were unable to recover it.
00:37:28.383 [2024-09-29 16:45:28.747121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.383 [2024-09-29 16:45:28.747170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.383 qpair failed and we were unable to recover it.
00:37:28.383 [2024-09-29 16:45:28.747293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.383 [2024-09-29 16:45:28.747328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.383 qpair failed and we were unable to recover it.
00:37:28.383 [2024-09-29 16:45:28.747470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.383 [2024-09-29 16:45:28.747510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.383 qpair failed and we were unable to recover it.
00:37:28.383 [2024-09-29 16:45:28.747688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.383 [2024-09-29 16:45:28.747756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.383 qpair failed and we were unable to recover it.
00:37:28.383 [2024-09-29 16:45:28.747903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.383 [2024-09-29 16:45:28.747967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.383 qpair failed and we were unable to recover it.
00:37:28.383 [2024-09-29 16:45:28.748203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.384 [2024-09-29 16:45:28.748275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.384 qpair failed and we were unable to recover it.
00:37:28.384 [2024-09-29 16:45:28.748485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.384 [2024-09-29 16:45:28.748541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.384 qpair failed and we were unable to recover it.
00:37:28.384 [2024-09-29 16:45:28.748718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.384 [2024-09-29 16:45:28.748753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.384 qpair failed and we were unable to recover it.
00:37:28.384 [2024-09-29 16:45:28.748875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.384 [2024-09-29 16:45:28.748913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.384 qpair failed and we were unable to recover it.
00:37:28.384 [2024-09-29 16:45:28.749061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.384 [2024-09-29 16:45:28.749096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.384 qpair failed and we were unable to recover it.
00:37:28.384 [2024-09-29 16:45:28.749289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.384 [2024-09-29 16:45:28.749322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.384 qpair failed and we were unable to recover it.
00:37:28.384 [2024-09-29 16:45:28.749502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.384 [2024-09-29 16:45:28.749535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.384 qpair failed and we were unable to recover it.
00:37:28.384 [2024-09-29 16:45:28.749662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.384 [2024-09-29 16:45:28.749702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.384 qpair failed and we were unable to recover it.
00:37:28.384 [2024-09-29 16:45:28.749842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.384 [2024-09-29 16:45:28.749875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.384 qpair failed and we were unable to recover it.
00:37:28.384 [2024-09-29 16:45:28.750035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.384 [2024-09-29 16:45:28.750071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.384 qpair failed and we were unable to recover it.
00:37:28.384 [2024-09-29 16:45:28.750223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.384 [2024-09-29 16:45:28.750286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.384 qpair failed and we were unable to recover it.
00:37:28.384 [2024-09-29 16:45:28.750416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.384 [2024-09-29 16:45:28.750454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.384 qpair failed and we were unable to recover it.
00:37:28.384 [2024-09-29 16:45:28.750630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.384 [2024-09-29 16:45:28.750688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.384 qpair failed and we were unable to recover it.
00:37:28.384 [2024-09-29 16:45:28.750874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.384 [2024-09-29 16:45:28.750921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.384 qpair failed and we were unable to recover it.
00:37:28.384 [2024-09-29 16:45:28.751071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.384 [2024-09-29 16:45:28.751107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.384 qpair failed and we were unable to recover it.
00:37:28.384 [2024-09-29 16:45:28.751217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.384 [2024-09-29 16:45:28.751252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.384 qpair failed and we were unable to recover it.
00:37:28.384 [2024-09-29 16:45:28.751444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.384 [2024-09-29 16:45:28.751481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.384 qpair failed and we were unable to recover it.
00:37:28.384 [2024-09-29 16:45:28.751649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.384 [2024-09-29 16:45:28.751712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.384 qpair failed and we were unable to recover it.
00:37:28.384 [2024-09-29 16:45:28.751835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.384 [2024-09-29 16:45:28.751870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.384 qpair failed and we were unable to recover it.
00:37:28.384 [2024-09-29 16:45:28.752031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.384 [2024-09-29 16:45:28.752068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.384 qpair failed and we were unable to recover it.
00:37:28.384 [2024-09-29 16:45:28.752197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.384 [2024-09-29 16:45:28.752235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.384 qpair failed and we were unable to recover it.
00:37:28.384 [2024-09-29 16:45:28.752399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.384 [2024-09-29 16:45:28.752438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.384 qpair failed and we were unable to recover it.
00:37:28.384 [2024-09-29 16:45:28.752590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.384 [2024-09-29 16:45:28.752630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.384 qpair failed and we were unable to recover it.
00:37:28.384 [2024-09-29 16:45:28.752771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.384 [2024-09-29 16:45:28.752806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.384 qpair failed and we were unable to recover it.
00:37:28.384 [2024-09-29 16:45:28.752951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.384 [2024-09-29 16:45:28.752990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.384 qpair failed and we were unable to recover it.
00:37:28.384 [2024-09-29 16:45:28.753157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.384 [2024-09-29 16:45:28.753194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.384 qpair failed and we were unable to recover it.
00:37:28.384 [2024-09-29 16:45:28.753446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.384 [2024-09-29 16:45:28.753483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.384 qpair failed and we were unable to recover it.
00:37:28.384 [2024-09-29 16:45:28.753647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.384 [2024-09-29 16:45:28.753688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.384 qpair failed and we were unable to recover it.
00:37:28.384 [2024-09-29 16:45:28.753807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.384 [2024-09-29 16:45:28.753840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.384 qpair failed and we were unable to recover it.
00:37:28.384 [2024-09-29 16:45:28.754004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.384 [2024-09-29 16:45:28.754069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.384 qpair failed and we were unable to recover it.
00:37:28.384 [2024-09-29 16:45:28.754227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.384 [2024-09-29 16:45:28.754284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.384 qpair failed and we were unable to recover it.
00:37:28.384 [2024-09-29 16:45:28.754447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.384 [2024-09-29 16:45:28.754485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.384 qpair failed and we were unable to recover it.
00:37:28.384 [2024-09-29 16:45:28.754652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.384 [2024-09-29 16:45:28.754694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.384 qpair failed and we were unable to recover it.
00:37:28.384 [2024-09-29 16:45:28.754820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.384 [2024-09-29 16:45:28.754856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.384 qpair failed and we were unable to recover it. 00:37:28.384 [2024-09-29 16:45:28.754983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.384 [2024-09-29 16:45:28.755032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.384 qpair failed and we were unable to recover it. 00:37:28.384 [2024-09-29 16:45:28.755186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.384 [2024-09-29 16:45:28.755223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.384 qpair failed and we were unable to recover it. 00:37:28.384 [2024-09-29 16:45:28.755441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.384 [2024-09-29 16:45:28.755516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.384 qpair failed and we were unable to recover it. 00:37:28.384 [2024-09-29 16:45:28.755636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.384 [2024-09-29 16:45:28.755682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.384 qpair failed and we were unable to recover it. 
00:37:28.384 [2024-09-29 16:45:28.755846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.384 [2024-09-29 16:45:28.755894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.385 qpair failed and we were unable to recover it. 00:37:28.385 [2024-09-29 16:45:28.756066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.385 [2024-09-29 16:45:28.756105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.385 qpair failed and we were unable to recover it. 00:37:28.385 [2024-09-29 16:45:28.756267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.385 [2024-09-29 16:45:28.756334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.385 qpair failed and we were unable to recover it. 00:37:28.385 [2024-09-29 16:45:28.756548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.385 [2024-09-29 16:45:28.756604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.385 qpair failed and we were unable to recover it. 00:37:28.385 [2024-09-29 16:45:28.756747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.385 [2024-09-29 16:45:28.756781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.385 qpair failed and we were unable to recover it. 
00:37:28.385 [2024-09-29 16:45:28.756910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.385 [2024-09-29 16:45:28.756948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.385 qpair failed and we were unable to recover it. 00:37:28.385 [2024-09-29 16:45:28.757126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.385 [2024-09-29 16:45:28.757195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.385 qpair failed and we were unable to recover it. 00:37:28.385 [2024-09-29 16:45:28.757345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.385 [2024-09-29 16:45:28.757382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.385 qpair failed and we were unable to recover it. 00:37:28.385 [2024-09-29 16:45:28.757509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.385 [2024-09-29 16:45:28.757545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.385 qpair failed and we were unable to recover it. 00:37:28.385 [2024-09-29 16:45:28.757707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.385 [2024-09-29 16:45:28.757740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.385 qpair failed and we were unable to recover it. 
00:37:28.385 [2024-09-29 16:45:28.757880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.385 [2024-09-29 16:45:28.757913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.385 qpair failed and we were unable to recover it. 00:37:28.385 [2024-09-29 16:45:28.758068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.385 [2024-09-29 16:45:28.758104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.385 qpair failed and we were unable to recover it. 00:37:28.385 [2024-09-29 16:45:28.758321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.385 [2024-09-29 16:45:28.758357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.385 qpair failed and we were unable to recover it. 00:37:28.385 [2024-09-29 16:45:28.758519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.385 [2024-09-29 16:45:28.758560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.385 qpair failed and we were unable to recover it. 00:37:28.385 [2024-09-29 16:45:28.758692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.385 [2024-09-29 16:45:28.758745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.385 qpair failed and we were unable to recover it. 
00:37:28.385 [2024-09-29 16:45:28.758907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.385 [2024-09-29 16:45:28.758955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.385 qpair failed and we were unable to recover it. 00:37:28.385 [2024-09-29 16:45:28.759122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.385 [2024-09-29 16:45:28.759159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.385 qpair failed and we were unable to recover it. 00:37:28.385 [2024-09-29 16:45:28.759334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.385 [2024-09-29 16:45:28.759389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.385 qpair failed and we were unable to recover it. 00:37:28.385 [2024-09-29 16:45:28.759539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.385 [2024-09-29 16:45:28.759576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.385 qpair failed and we were unable to recover it. 00:37:28.385 [2024-09-29 16:45:28.759738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.385 [2024-09-29 16:45:28.759771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.385 qpair failed and we were unable to recover it. 
00:37:28.385 [2024-09-29 16:45:28.759886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.385 [2024-09-29 16:45:28.759921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.385 qpair failed and we were unable to recover it. 00:37:28.385 [2024-09-29 16:45:28.760071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.385 [2024-09-29 16:45:28.760106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.385 qpair failed and we were unable to recover it. 00:37:28.385 [2024-09-29 16:45:28.760326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.385 [2024-09-29 16:45:28.760384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.385 qpair failed and we were unable to recover it. 00:37:28.385 [2024-09-29 16:45:28.760518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.385 [2024-09-29 16:45:28.760551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.385 qpair failed and we were unable to recover it. 00:37:28.385 [2024-09-29 16:45:28.760692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.385 [2024-09-29 16:45:28.760725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.385 qpair failed and we were unable to recover it. 
00:37:28.385 [2024-09-29 16:45:28.760883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.385 [2024-09-29 16:45:28.760917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.385 qpair failed and we were unable to recover it. 00:37:28.385 [2024-09-29 16:45:28.761037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.385 [2024-09-29 16:45:28.761090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.385 qpair failed and we were unable to recover it. 00:37:28.385 [2024-09-29 16:45:28.761224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.385 [2024-09-29 16:45:28.761261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.385 qpair failed and we were unable to recover it. 00:37:28.385 [2024-09-29 16:45:28.761385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.385 [2024-09-29 16:45:28.761435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.385 qpair failed and we were unable to recover it. 00:37:28.385 [2024-09-29 16:45:28.761609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.385 [2024-09-29 16:45:28.761642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.385 qpair failed and we were unable to recover it. 
00:37:28.385 [2024-09-29 16:45:28.761803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.385 [2024-09-29 16:45:28.761851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.385 qpair failed and we were unable to recover it. 00:37:28.385 [2024-09-29 16:45:28.761984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.385 [2024-09-29 16:45:28.762033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.385 qpair failed and we were unable to recover it. 00:37:28.385 [2024-09-29 16:45:28.762192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.385 [2024-09-29 16:45:28.762246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.385 qpair failed and we were unable to recover it. 00:37:28.385 [2024-09-29 16:45:28.762438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.385 [2024-09-29 16:45:28.762492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.385 qpair failed and we were unable to recover it. 00:37:28.385 [2024-09-29 16:45:28.762625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.385 [2024-09-29 16:45:28.762658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.385 qpair failed and we were unable to recover it. 
00:37:28.385 [2024-09-29 16:45:28.762784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.385 [2024-09-29 16:45:28.762817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.385 qpair failed and we were unable to recover it. 00:37:28.385 [2024-09-29 16:45:28.762964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.385 [2024-09-29 16:45:28.762999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.385 qpair failed and we were unable to recover it. 00:37:28.385 [2024-09-29 16:45:28.763144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.385 [2024-09-29 16:45:28.763177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.385 qpair failed and we were unable to recover it. 00:37:28.385 [2024-09-29 16:45:28.763290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.385 [2024-09-29 16:45:28.763323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.386 qpair failed and we were unable to recover it. 00:37:28.386 [2024-09-29 16:45:28.763469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.386 [2024-09-29 16:45:28.763503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.386 qpair failed and we were unable to recover it. 
00:37:28.386 [2024-09-29 16:45:28.763687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.386 [2024-09-29 16:45:28.763735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.386 qpair failed and we were unable to recover it. 00:37:28.386 [2024-09-29 16:45:28.763867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.386 [2024-09-29 16:45:28.763914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.386 qpair failed and we were unable to recover it. 00:37:28.386 [2024-09-29 16:45:28.764112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.386 [2024-09-29 16:45:28.764151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.386 qpair failed and we were unable to recover it. 00:37:28.386 [2024-09-29 16:45:28.764390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.386 [2024-09-29 16:45:28.764450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.386 qpair failed and we were unable to recover it. 00:37:28.386 [2024-09-29 16:45:28.764623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.386 [2024-09-29 16:45:28.764660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.386 qpair failed and we were unable to recover it. 
00:37:28.386 [2024-09-29 16:45:28.764830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.386 [2024-09-29 16:45:28.764871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.386 qpair failed and we were unable to recover it. 00:37:28.386 [2024-09-29 16:45:28.765008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.386 [2024-09-29 16:45:28.765045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.386 qpair failed and we were unable to recover it. 00:37:28.386 [2024-09-29 16:45:28.765178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.386 [2024-09-29 16:45:28.765215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.386 qpair failed and we were unable to recover it. 00:37:28.386 [2024-09-29 16:45:28.765449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.386 [2024-09-29 16:45:28.765507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.386 qpair failed and we were unable to recover it. 00:37:28.386 [2024-09-29 16:45:28.765645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.386 [2024-09-29 16:45:28.765685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.386 qpair failed and we were unable to recover it. 
00:37:28.386 [2024-09-29 16:45:28.765833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.386 [2024-09-29 16:45:28.765866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.386 qpair failed and we were unable to recover it. 00:37:28.386 [2024-09-29 16:45:28.766086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.386 [2024-09-29 16:45:28.766148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.386 qpair failed and we were unable to recover it. 00:37:28.386 [2024-09-29 16:45:28.766284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.386 [2024-09-29 16:45:28.766352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.386 qpair failed and we were unable to recover it. 00:37:28.386 [2024-09-29 16:45:28.766551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.386 [2024-09-29 16:45:28.766588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.386 qpair failed and we were unable to recover it. 00:37:28.386 [2024-09-29 16:45:28.766736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.386 [2024-09-29 16:45:28.766770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.386 qpair failed and we were unable to recover it. 
00:37:28.386 [2024-09-29 16:45:28.766891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.386 [2024-09-29 16:45:28.766932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.386 qpair failed and we were unable to recover it. 00:37:28.386 [2024-09-29 16:45:28.767071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.386 [2024-09-29 16:45:28.767105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.386 qpair failed and we were unable to recover it. 00:37:28.386 [2024-09-29 16:45:28.767316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.386 [2024-09-29 16:45:28.767378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.386 qpair failed and we were unable to recover it. 00:37:28.386 [2024-09-29 16:45:28.767556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.386 [2024-09-29 16:45:28.767593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.386 qpair failed and we were unable to recover it. 00:37:28.386 [2024-09-29 16:45:28.767762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.386 [2024-09-29 16:45:28.767796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.386 qpair failed and we were unable to recover it. 
00:37:28.386 [2024-09-29 16:45:28.767918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.386 [2024-09-29 16:45:28.767968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.386 qpair failed and we were unable to recover it. 00:37:28.386 [2024-09-29 16:45:28.768157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.386 [2024-09-29 16:45:28.768218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.386 qpair failed and we were unable to recover it. 00:37:28.386 [2024-09-29 16:45:28.768395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.386 [2024-09-29 16:45:28.768454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.386 qpair failed and we were unable to recover it. 00:37:28.386 [2024-09-29 16:45:28.768582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.386 [2024-09-29 16:45:28.768622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.386 qpair failed and we were unable to recover it. 00:37:28.386 [2024-09-29 16:45:28.768816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.386 [2024-09-29 16:45:28.768865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.386 qpair failed and we were unable to recover it. 
00:37:28.386 [2024-09-29 16:45:28.769036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.386 [2024-09-29 16:45:28.769084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.386 qpair failed and we were unable to recover it. 00:37:28.386 [2024-09-29 16:45:28.769220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.386 [2024-09-29 16:45:28.769276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.386 qpair failed and we were unable to recover it. 00:37:28.386 [2024-09-29 16:45:28.769489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.386 [2024-09-29 16:45:28.769552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.386 qpair failed and we were unable to recover it. 00:37:28.386 [2024-09-29 16:45:28.769748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.386 [2024-09-29 16:45:28.769783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.386 qpair failed and we were unable to recover it. 00:37:28.386 [2024-09-29 16:45:28.769906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.386 [2024-09-29 16:45:28.769942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.386 qpair failed and we were unable to recover it. 
00:37:28.386 [2024-09-29 16:45:28.770065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.386 [2024-09-29 16:45:28.770100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.386 qpair failed and we were unable to recover it. 00:37:28.386 [2024-09-29 16:45:28.770236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.386 [2024-09-29 16:45:28.770274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.386 qpair failed and we were unable to recover it. 00:37:28.386 [2024-09-29 16:45:28.770421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.386 [2024-09-29 16:45:28.770458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.386 qpair failed and we were unable to recover it. 00:37:28.386 [2024-09-29 16:45:28.770588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.386 [2024-09-29 16:45:28.770626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.386 qpair failed and we were unable to recover it. 00:37:28.386 [2024-09-29 16:45:28.770781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.386 [2024-09-29 16:45:28.770820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.386 qpair failed and we were unable to recover it. 
00:37:28.386 [2024-09-29 16:45:28.770984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.386 [2024-09-29 16:45:28.771031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.386 qpair failed and we were unable to recover it. 00:37:28.386 [2024-09-29 16:45:28.771175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.387 [2024-09-29 16:45:28.771215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.387 qpair failed and we were unable to recover it. 00:37:28.387 [2024-09-29 16:45:28.771438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.387 [2024-09-29 16:45:28.771475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.387 qpair failed and we were unable to recover it. 00:37:28.387 [2024-09-29 16:45:28.771645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.387 [2024-09-29 16:45:28.771694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.387 qpair failed and we were unable to recover it. 00:37:28.387 [2024-09-29 16:45:28.771820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.387 [2024-09-29 16:45:28.771855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.387 qpair failed and we were unable to recover it. 
00:37:28.387 [2024-09-29 16:45:28.772019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.387 [2024-09-29 16:45:28.772056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.387 qpair failed and we were unable to recover it. 00:37:28.387 [2024-09-29 16:45:28.772329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.387 [2024-09-29 16:45:28.772387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.387 qpair failed and we were unable to recover it. 00:37:28.387 [2024-09-29 16:45:28.772508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.387 [2024-09-29 16:45:28.772545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.387 qpair failed and we were unable to recover it. 00:37:28.387 [2024-09-29 16:45:28.772769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.387 [2024-09-29 16:45:28.772804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.387 qpair failed and we were unable to recover it. 00:37:28.387 [2024-09-29 16:45:28.772983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.387 [2024-09-29 16:45:28.773050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.387 qpair failed and we were unable to recover it. 
00:37:28.387 [2024-09-29 16:45:28.773226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.387 [2024-09-29 16:45:28.773279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.387 qpair failed and we were unable to recover it. 00:37:28.387 [2024-09-29 16:45:28.773408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.387 [2024-09-29 16:45:28.773444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.387 qpair failed and we were unable to recover it. 00:37:28.387 [2024-09-29 16:45:28.773607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.387 [2024-09-29 16:45:28.773642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.387 qpair failed and we were unable to recover it. 00:37:28.387 [2024-09-29 16:45:28.773792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.387 [2024-09-29 16:45:28.773826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.387 qpair failed and we were unable to recover it. 00:37:28.387 [2024-09-29 16:45:28.773963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.387 [2024-09-29 16:45:28.774001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.387 qpair failed and we were unable to recover it. 
00:37:28.387 [2024-09-29 16:45:28.774130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.387 [2024-09-29 16:45:28.774167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.387 qpair failed and we were unable to recover it. 00:37:28.387 [2024-09-29 16:45:28.774318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.387 [2024-09-29 16:45:28.774355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.387 qpair failed and we were unable to recover it. 00:37:28.387 [2024-09-29 16:45:28.774482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.387 [2024-09-29 16:45:28.774519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.387 qpair failed and we were unable to recover it. 00:37:28.387 [2024-09-29 16:45:28.774686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.387 [2024-09-29 16:45:28.774752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.387 qpair failed and we were unable to recover it. 00:37:28.387 [2024-09-29 16:45:28.774904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.387 [2024-09-29 16:45:28.774943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.387 qpair failed and we were unable to recover it. 
00:37:28.387 [2024-09-29 16:45:28.775082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.387 [2024-09-29 16:45:28.775145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.387 qpair failed and we were unable to recover it. 00:37:28.387 [2024-09-29 16:45:28.775288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.387 [2024-09-29 16:45:28.775342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.387 qpair failed and we were unable to recover it. 00:37:28.387 [2024-09-29 16:45:28.775459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.387 [2024-09-29 16:45:28.775502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.387 qpair failed and we were unable to recover it. 00:37:28.387 [2024-09-29 16:45:28.775678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.387 [2024-09-29 16:45:28.775727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.387 qpair failed and we were unable to recover it. 00:37:28.387 [2024-09-29 16:45:28.775862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.387 [2024-09-29 16:45:28.775899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.387 qpair failed and we were unable to recover it. 
00:37:28.387 [2024-09-29 16:45:28.776047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.387 [2024-09-29 16:45:28.776082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.387 qpair failed and we were unable to recover it. 00:37:28.387 [2024-09-29 16:45:28.776222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.387 [2024-09-29 16:45:28.776257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.387 qpair failed and we were unable to recover it. 00:37:28.387 [2024-09-29 16:45:28.776433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.387 [2024-09-29 16:45:28.776466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.387 qpair failed and we were unable to recover it. 00:37:28.387 [2024-09-29 16:45:28.776575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.387 [2024-09-29 16:45:28.776609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.387 qpair failed and we were unable to recover it. 00:37:28.387 [2024-09-29 16:45:28.776729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.387 [2024-09-29 16:45:28.776764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.387 qpair failed and we were unable to recover it. 
00:37:28.387 [2024-09-29 16:45:28.776890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.387 [2024-09-29 16:45:28.776923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.387 qpair failed and we were unable to recover it. 00:37:28.387 [2024-09-29 16:45:28.777084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.387 [2024-09-29 16:45:28.777122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.387 qpair failed and we were unable to recover it. 00:37:28.387 [2024-09-29 16:45:28.777315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.387 [2024-09-29 16:45:28.777369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.387 qpair failed and we were unable to recover it. 00:37:28.387 [2024-09-29 16:45:28.777503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.387 [2024-09-29 16:45:28.777555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.387 qpair failed and we were unable to recover it. 00:37:28.387 [2024-09-29 16:45:28.777693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.387 [2024-09-29 16:45:28.777727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.387 qpair failed and we were unable to recover it. 
00:37:28.387 [2024-09-29 16:45:28.777893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.388 [2024-09-29 16:45:28.777945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.388 qpair failed and we were unable to recover it. 00:37:28.388 [2024-09-29 16:45:28.778082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.388 [2024-09-29 16:45:28.778134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.388 qpair failed and we were unable to recover it. 00:37:28.388 [2024-09-29 16:45:28.778266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.388 [2024-09-29 16:45:28.778304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.388 qpair failed and we were unable to recover it. 00:37:28.388 [2024-09-29 16:45:28.778460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.388 [2024-09-29 16:45:28.778493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.388 qpair failed and we were unable to recover it. 00:37:28.388 [2024-09-29 16:45:28.778606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.388 [2024-09-29 16:45:28.778639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.388 qpair failed and we were unable to recover it. 
00:37:28.388 [2024-09-29 16:45:28.778783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.388 [2024-09-29 16:45:28.778831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.388 qpair failed and we were unable to recover it. 00:37:28.388 [2024-09-29 16:45:28.778978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.388 [2024-09-29 16:45:28.779018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.388 qpair failed and we were unable to recover it. 00:37:28.388 [2024-09-29 16:45:28.779168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.388 [2024-09-29 16:45:28.779206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.388 qpair failed and we were unable to recover it. 00:37:28.388 [2024-09-29 16:45:28.779343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.388 [2024-09-29 16:45:28.779378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.388 qpair failed and we were unable to recover it. 00:37:28.388 [2024-09-29 16:45:28.779500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.388 [2024-09-29 16:45:28.779535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.388 qpair failed and we were unable to recover it. 
00:37:28.388 [2024-09-29 16:45:28.779643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.388 [2024-09-29 16:45:28.779691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.388 qpair failed and we were unable to recover it. 00:37:28.388 [2024-09-29 16:45:28.779843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.388 [2024-09-29 16:45:28.779881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.388 qpair failed and we were unable to recover it. 00:37:28.388 [2024-09-29 16:45:28.780092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.388 [2024-09-29 16:45:28.780145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.388 qpair failed and we were unable to recover it. 00:37:28.388 [2024-09-29 16:45:28.780323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.388 [2024-09-29 16:45:28.780359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.388 qpair failed and we were unable to recover it. 00:37:28.388 [2024-09-29 16:45:28.780506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.388 [2024-09-29 16:45:28.780545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.388 qpair failed and we were unable to recover it. 
00:37:28.388 [2024-09-29 16:45:28.780732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.388 [2024-09-29 16:45:28.780772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.388 qpair failed and we were unable to recover it. 00:37:28.388 [2024-09-29 16:45:28.780928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.388 [2024-09-29 16:45:28.780996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.388 qpair failed and we were unable to recover it. 00:37:28.388 [2024-09-29 16:45:28.781139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.388 [2024-09-29 16:45:28.781194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.388 qpair failed and we were unable to recover it. 00:37:28.388 [2024-09-29 16:45:28.781334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.388 [2024-09-29 16:45:28.781401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.388 qpair failed and we were unable to recover it. 00:37:28.388 [2024-09-29 16:45:28.781559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.388 [2024-09-29 16:45:28.781597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.388 qpair failed and we were unable to recover it. 
00:37:28.388 [2024-09-29 16:45:28.781782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.388 [2024-09-29 16:45:28.781816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.388 qpair failed and we were unable to recover it. 00:37:28.388 [2024-09-29 16:45:28.781924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.388 [2024-09-29 16:45:28.781960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.388 qpair failed and we were unable to recover it. 00:37:28.388 [2024-09-29 16:45:28.782115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.388 [2024-09-29 16:45:28.782148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.388 qpair failed and we were unable to recover it. 00:37:28.388 [2024-09-29 16:45:28.782346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.388 [2024-09-29 16:45:28.782383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.388 qpair failed and we were unable to recover it. 00:37:28.388 [2024-09-29 16:45:28.782538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.388 [2024-09-29 16:45:28.782575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.388 qpair failed and we were unable to recover it. 
00:37:28.388 [2024-09-29 16:45:28.782699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.388 [2024-09-29 16:45:28.782749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.388 qpair failed and we were unable to recover it. 00:37:28.388 [2024-09-29 16:45:28.782872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.388 [2024-09-29 16:45:28.782906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.388 qpair failed and we were unable to recover it. 00:37:28.388 [2024-09-29 16:45:28.783015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.388 [2024-09-29 16:45:28.783048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.388 qpair failed and we were unable to recover it. 00:37:28.388 [2024-09-29 16:45:28.783161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.388 [2024-09-29 16:45:28.783195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.388 qpair failed and we were unable to recover it. 00:37:28.388 [2024-09-29 16:45:28.783389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.388 [2024-09-29 16:45:28.783454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.388 qpair failed and we were unable to recover it. 
00:37:28.388 [2024-09-29 16:45:28.783629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.388 [2024-09-29 16:45:28.783696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.388 qpair failed and we were unable to recover it. 00:37:28.388 [2024-09-29 16:45:28.783861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.388 [2024-09-29 16:45:28.783916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.388 qpair failed and we were unable to recover it. 00:37:28.388 [2024-09-29 16:45:28.784082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.388 [2024-09-29 16:45:28.784122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.388 qpair failed and we were unable to recover it. 00:37:28.388 [2024-09-29 16:45:28.784304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.388 [2024-09-29 16:45:28.784357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.388 qpair failed and we were unable to recover it. 00:37:28.388 [2024-09-29 16:45:28.784513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.388 [2024-09-29 16:45:28.784553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.388 qpair failed and we were unable to recover it. 
00:37:28.388 [2024-09-29 16:45:28.784755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.388 [2024-09-29 16:45:28.784789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.388 qpair failed and we were unable to recover it. 00:37:28.388 [2024-09-29 16:45:28.784897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.388 [2024-09-29 16:45:28.784931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.388 qpair failed and we were unable to recover it. 00:37:28.388 [2024-09-29 16:45:28.785073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.388 [2024-09-29 16:45:28.785111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.388 qpair failed and we were unable to recover it. 00:37:28.388 [2024-09-29 16:45:28.785309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.388 [2024-09-29 16:45:28.785370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.389 qpair failed and we were unable to recover it. 00:37:28.389 [2024-09-29 16:45:28.785525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.389 [2024-09-29 16:45:28.785563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.389 qpair failed and we were unable to recover it. 
00:37:28.389 [2024-09-29 16:45:28.785718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.389 [2024-09-29 16:45:28.785755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.389 qpair failed and we were unable to recover it. 00:37:28.389 [2024-09-29 16:45:28.785918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.389 [2024-09-29 16:45:28.785965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.389 qpair failed and we were unable to recover it. 00:37:28.389 [2024-09-29 16:45:28.786133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.389 [2024-09-29 16:45:28.786173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.389 qpair failed and we were unable to recover it. 00:37:28.389 [2024-09-29 16:45:28.786300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.389 [2024-09-29 16:45:28.786337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.389 qpair failed and we were unable to recover it. 00:37:28.389 [2024-09-29 16:45:28.786531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.389 [2024-09-29 16:45:28.786570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.389 qpair failed and we were unable to recover it. 
00:37:28.389 [2024-09-29 16:45:28.786741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.389 [2024-09-29 16:45:28.786775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.389 qpair failed and we were unable to recover it. 00:37:28.389 [2024-09-29 16:45:28.786901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.389 [2024-09-29 16:45:28.786934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.389 qpair failed and we were unable to recover it. 00:37:28.389 [2024-09-29 16:45:28.787099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.389 [2024-09-29 16:45:28.787136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.389 qpair failed and we were unable to recover it. 00:37:28.389 [2024-09-29 16:45:28.787258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.389 [2024-09-29 16:45:28.787295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.389 qpair failed and we were unable to recover it. 00:37:28.389 [2024-09-29 16:45:28.787460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.389 [2024-09-29 16:45:28.787499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.389 qpair failed and we were unable to recover it. 
00:37:28.389 [2024-09-29 16:45:28.787649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.389 [2024-09-29 16:45:28.787704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.389 qpair failed and we were unable to recover it. 00:37:28.389 [2024-09-29 16:45:28.787843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.389 [2024-09-29 16:45:28.787878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.389 qpair failed and we were unable to recover it. 00:37:28.389 [2024-09-29 16:45:28.788072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.389 [2024-09-29 16:45:28.788125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.389 qpair failed and we were unable to recover it. 00:37:28.389 [2024-09-29 16:45:28.788298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.389 [2024-09-29 16:45:28.788350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.389 qpair failed and we were unable to recover it. 00:37:28.389 [2024-09-29 16:45:28.788466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.389 [2024-09-29 16:45:28.788500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.389 qpair failed and we were unable to recover it. 
00:37:28.389 [2024-09-29 16:45:28.788693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.389 [2024-09-29 16:45:28.788738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.389 qpair failed and we were unable to recover it. 00:37:28.389 [2024-09-29 16:45:28.788870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.389 [2024-09-29 16:45:28.788907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.389 qpair failed and we were unable to recover it. 00:37:28.389 [2024-09-29 16:45:28.789096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.389 [2024-09-29 16:45:28.789150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.389 qpair failed and we were unable to recover it. 00:37:28.389 [2024-09-29 16:45:28.789311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.389 [2024-09-29 16:45:28.789368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.389 qpair failed and we were unable to recover it. 00:37:28.389 [2024-09-29 16:45:28.789551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.389 [2024-09-29 16:45:28.789604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.389 qpair failed and we were unable to recover it. 
00:37:28.389 [2024-09-29 16:45:28.789795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.389 [2024-09-29 16:45:28.789842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.389 qpair failed and we were unable to recover it. 00:37:28.389 [2024-09-29 16:45:28.790011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.389 [2024-09-29 16:45:28.790067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.389 qpair failed and we were unable to recover it. 00:37:28.389 [2024-09-29 16:45:28.790248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.389 [2024-09-29 16:45:28.790300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.389 qpair failed and we were unable to recover it. 00:37:28.389 [2024-09-29 16:45:28.790488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.389 [2024-09-29 16:45:28.790540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.389 qpair failed and we were unable to recover it. 00:37:28.389 [2024-09-29 16:45:28.790656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.389 [2024-09-29 16:45:28.790701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.389 qpair failed and we were unable to recover it. 
00:37:28.389 [2024-09-29 16:45:28.790872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.389 [2024-09-29 16:45:28.790935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.389 qpair failed and we were unable to recover it. 00:37:28.389 [2024-09-29 16:45:28.791206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.389 [2024-09-29 16:45:28.791270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.389 qpair failed and we were unable to recover it. 00:37:28.389 [2024-09-29 16:45:28.791463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.389 [2024-09-29 16:45:28.791524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.389 qpair failed and we were unable to recover it. 00:37:28.389 [2024-09-29 16:45:28.791663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.389 [2024-09-29 16:45:28.791704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.389 qpair failed and we were unable to recover it. 00:37:28.389 [2024-09-29 16:45:28.791881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.389 [2024-09-29 16:45:28.791916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.389 qpair failed and we were unable to recover it. 
00:37:28.389 [2024-09-29 16:45:28.792040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.389 [2024-09-29 16:45:28.792084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.389 qpair failed and we were unable to recover it. 00:37:28.389 [2024-09-29 16:45:28.792219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.389 [2024-09-29 16:45:28.792256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.389 qpair failed and we were unable to recover it. 00:37:28.389 [2024-09-29 16:45:28.792410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.389 [2024-09-29 16:45:28.792447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.389 qpair failed and we were unable to recover it. 00:37:28.389 [2024-09-29 16:45:28.792575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.389 [2024-09-29 16:45:28.792613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.389 qpair failed and we were unable to recover it. 00:37:28.389 [2024-09-29 16:45:28.792783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.389 [2024-09-29 16:45:28.792818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.389 qpair failed and we were unable to recover it. 
00:37:28.389 [2024-09-29 16:45:28.792958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.389 [2024-09-29 16:45:28.792992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.389 qpair failed and we were unable to recover it.
00:37:28.389 [2024-09-29 16:45:28.793107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.389 [2024-09-29 16:45:28.793155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.389 qpair failed and we were unable to recover it.
00:37:28.389 [2024-09-29 16:45:28.793389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.390 [2024-09-29 16:45:28.793426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.390 qpair failed and we were unable to recover it.
00:37:28.390 [2024-09-29 16:45:28.793603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.390 [2024-09-29 16:45:28.793635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.390 qpair failed and we were unable to recover it.
00:37:28.390 [2024-09-29 16:45:28.793761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.390 [2024-09-29 16:45:28.793794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.390 qpair failed and we were unable to recover it.
00:37:28.390 [2024-09-29 16:45:28.793963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.390 [2024-09-29 16:45:28.794000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.390 qpair failed and we were unable to recover it.
00:37:28.390 [2024-09-29 16:45:28.794172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.390 [2024-09-29 16:45:28.794204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.390 qpair failed and we were unable to recover it.
00:37:28.390 [2024-09-29 16:45:28.794382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.390 [2024-09-29 16:45:28.794418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.390 qpair failed and we were unable to recover it.
00:37:28.390 [2024-09-29 16:45:28.794650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.390 [2024-09-29 16:45:28.794693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.390 qpair failed and we were unable to recover it.
00:37:28.390 [2024-09-29 16:45:28.794880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.390 [2024-09-29 16:45:28.794927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.390 qpair failed and we were unable to recover it.
00:37:28.390 [2024-09-29 16:45:28.795105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.390 [2024-09-29 16:45:28.795144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.390 qpair failed and we were unable to recover it.
00:37:28.390 [2024-09-29 16:45:28.795436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.390 [2024-09-29 16:45:28.795494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.390 qpair failed and we were unable to recover it.
00:37:28.390 [2024-09-29 16:45:28.795681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.390 [2024-09-29 16:45:28.795745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.390 qpair failed and we were unable to recover it.
00:37:28.390 [2024-09-29 16:45:28.795872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.390 [2024-09-29 16:45:28.795906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.390 qpair failed and we were unable to recover it.
00:37:28.390 [2024-09-29 16:45:28.796015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.390 [2024-09-29 16:45:28.796047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.390 qpair failed and we were unable to recover it.
00:37:28.390 [2024-09-29 16:45:28.796187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.390 [2024-09-29 16:45:28.796236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.390 qpair failed and we were unable to recover it.
00:37:28.390 [2024-09-29 16:45:28.796370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.390 [2024-09-29 16:45:28.796407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.390 qpair failed and we were unable to recover it.
00:37:28.390 [2024-09-29 16:45:28.796587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.390 [2024-09-29 16:45:28.796623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.390 qpair failed and we were unable to recover it.
00:37:28.390 [2024-09-29 16:45:28.796820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.390 [2024-09-29 16:45:28.796868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.390 qpair failed and we were unable to recover it.
00:37:28.390 [2024-09-29 16:45:28.797011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.390 [2024-09-29 16:45:28.797059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.390 qpair failed and we were unable to recover it.
00:37:28.390 [2024-09-29 16:45:28.797297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.390 [2024-09-29 16:45:28.797363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.390 qpair failed and we were unable to recover it.
00:37:28.390 [2024-09-29 16:45:28.797599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.390 [2024-09-29 16:45:28.797658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.390 qpair failed and we were unable to recover it.
00:37:28.390 [2024-09-29 16:45:28.797845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.390 [2024-09-29 16:45:28.797879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.390 qpair failed and we were unable to recover it.
00:37:28.390 [2024-09-29 16:45:28.797983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.390 [2024-09-29 16:45:28.798017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.390 qpair failed and we were unable to recover it.
00:37:28.390 [2024-09-29 16:45:28.798151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.390 [2024-09-29 16:45:28.798218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.390 qpair failed and we were unable to recover it.
00:37:28.390 [2024-09-29 16:45:28.798371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.390 [2024-09-29 16:45:28.798442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.390 qpair failed and we were unable to recover it.
00:37:28.390 [2024-09-29 16:45:28.798632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.390 [2024-09-29 16:45:28.798666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.390 qpair failed and we were unable to recover it.
00:37:28.390 [2024-09-29 16:45:28.798848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.390 [2024-09-29 16:45:28.798882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.390 qpair failed and we were unable to recover it.
00:37:28.390 [2024-09-29 16:45:28.799061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.390 [2024-09-29 16:45:28.799120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.390 qpair failed and we were unable to recover it.
00:37:28.390 [2024-09-29 16:45:28.799259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.390 [2024-09-29 16:45:28.799311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.390 qpair failed and we were unable to recover it.
00:37:28.390 [2024-09-29 16:45:28.799461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.390 [2024-09-29 16:45:28.799560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.390 qpair failed and we were unable to recover it.
00:37:28.390 [2024-09-29 16:45:28.799722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.390 [2024-09-29 16:45:28.799755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.390 qpair failed and we were unable to recover it.
00:37:28.390 [2024-09-29 16:45:28.799924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.390 [2024-09-29 16:45:28.799979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.390 qpair failed and we were unable to recover it.
00:37:28.390 [2024-09-29 16:45:28.800123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.390 [2024-09-29 16:45:28.800181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.390 qpair failed and we were unable to recover it.
00:37:28.390 [2024-09-29 16:45:28.800379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.390 [2024-09-29 16:45:28.800432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.390 qpair failed and we were unable to recover it.
00:37:28.390 [2024-09-29 16:45:28.800597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.390 [2024-09-29 16:45:28.800632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.390 qpair failed and we were unable to recover it.
00:37:28.390 [2024-09-29 16:45:28.800822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.390 [2024-09-29 16:45:28.800874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.390 qpair failed and we were unable to recover it.
00:37:28.390 [2024-09-29 16:45:28.801032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.390 [2024-09-29 16:45:28.801080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.390 qpair failed and we were unable to recover it.
00:37:28.390 [2024-09-29 16:45:28.801232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.390 [2024-09-29 16:45:28.801266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.390 qpair failed and we were unable to recover it.
00:37:28.390 [2024-09-29 16:45:28.801400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.390 [2024-09-29 16:45:28.801434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.390 qpair failed and we were unable to recover it.
00:37:28.390 [2024-09-29 16:45:28.801545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.391 [2024-09-29 16:45:28.801578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.391 qpair failed and we were unable to recover it.
00:37:28.391 [2024-09-29 16:45:28.801698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.391 [2024-09-29 16:45:28.801732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.391 qpair failed and we were unable to recover it.
00:37:28.391 [2024-09-29 16:45:28.801892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.391 [2024-09-29 16:45:28.801929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.391 qpair failed and we were unable to recover it.
00:37:28.391 [2024-09-29 16:45:28.802091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.391 [2024-09-29 16:45:28.802142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.391 qpair failed and we were unable to recover it.
00:37:28.391 [2024-09-29 16:45:28.802296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.391 [2024-09-29 16:45:28.802349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.391 qpair failed and we were unable to recover it.
00:37:28.391 [2024-09-29 16:45:28.802491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.391 [2024-09-29 16:45:28.802525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.391 qpair failed and we were unable to recover it.
00:37:28.391 [2024-09-29 16:45:28.802644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.391 [2024-09-29 16:45:28.802702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.391 qpair failed and we were unable to recover it.
00:37:28.391 [2024-09-29 16:45:28.802841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.391 [2024-09-29 16:45:28.802889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.391 qpair failed and we were unable to recover it.
00:37:28.391 [2024-09-29 16:45:28.803062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.391 [2024-09-29 16:45:28.803099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.391 qpair failed and we were unable to recover it.
00:37:28.391 [2024-09-29 16:45:28.803244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.391 [2024-09-29 16:45:28.803278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.391 qpair failed and we were unable to recover it.
00:37:28.391 [2024-09-29 16:45:28.803427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.391 [2024-09-29 16:45:28.803460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.391 qpair failed and we were unable to recover it.
00:37:28.391 [2024-09-29 16:45:28.803618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.391 [2024-09-29 16:45:28.803653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.391 qpair failed and we were unable to recover it.
00:37:28.391 [2024-09-29 16:45:28.803839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.391 [2024-09-29 16:45:28.803894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.391 qpair failed and we were unable to recover it.
00:37:28.391 [2024-09-29 16:45:28.804058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.391 [2024-09-29 16:45:28.804110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.391 qpair failed and we were unable to recover it.
00:37:28.391 [2024-09-29 16:45:28.804297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.391 [2024-09-29 16:45:28.804350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.391 qpair failed and we were unable to recover it.
00:37:28.391 [2024-09-29 16:45:28.804490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.391 [2024-09-29 16:45:28.804524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.391 qpair failed and we were unable to recover it.
00:37:28.391 [2024-09-29 16:45:28.804682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.391 [2024-09-29 16:45:28.804716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.391 qpair failed and we were unable to recover it.
00:37:28.391 [2024-09-29 16:45:28.804844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.391 [2024-09-29 16:45:28.804897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.391 qpair failed and we were unable to recover it.
00:37:28.391 [2024-09-29 16:45:28.805022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.391 [2024-09-29 16:45:28.805059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.391 qpair failed and we were unable to recover it.
00:37:28.391 [2024-09-29 16:45:28.805259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.391 [2024-09-29 16:45:28.805306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.391 qpair failed and we were unable to recover it.
00:37:28.391 [2024-09-29 16:45:28.805458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.391 [2024-09-29 16:45:28.805502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.391 qpair failed and we were unable to recover it.
00:37:28.391 [2024-09-29 16:45:28.805654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.391 [2024-09-29 16:45:28.805697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.391 qpair failed and we were unable to recover it.
00:37:28.391 [2024-09-29 16:45:28.805821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.391 [2024-09-29 16:45:28.805857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.391 qpair failed and we were unable to recover it.
00:37:28.391 [2024-09-29 16:45:28.805996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.391 [2024-09-29 16:45:28.806043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.391 qpair failed and we were unable to recover it.
00:37:28.391 [2024-09-29 16:45:28.806197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.391 [2024-09-29 16:45:28.806234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.391 qpair failed and we were unable to recover it.
00:37:28.391 [2024-09-29 16:45:28.806379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.391 [2024-09-29 16:45:28.806413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.391 qpair failed and we were unable to recover it.
00:37:28.391 [2024-09-29 16:45:28.806568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.391 [2024-09-29 16:45:28.806601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.391 qpair failed and we were unable to recover it.
00:37:28.391 [2024-09-29 16:45:28.806744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.391 [2024-09-29 16:45:28.806792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.391 qpair failed and we were unable to recover it.
00:37:28.391 [2024-09-29 16:45:28.806974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.391 [2024-09-29 16:45:28.807027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.391 qpair failed and we were unable to recover it.
00:37:28.391 [2024-09-29 16:45:28.807239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.391 [2024-09-29 16:45:28.807294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.391 qpair failed and we were unable to recover it.
00:37:28.391 [2024-09-29 16:45:28.807472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.391 [2024-09-29 16:45:28.807544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.391 qpair failed and we were unable to recover it.
00:37:28.391 [2024-09-29 16:45:28.807659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.391 [2024-09-29 16:45:28.807702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.391 qpair failed and we were unable to recover it.
00:37:28.391 [2024-09-29 16:45:28.807858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.391 [2024-09-29 16:45:28.807911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.391 qpair failed and we were unable to recover it.
00:37:28.391 [2024-09-29 16:45:28.808045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.391 [2024-09-29 16:45:28.808100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.391 qpair failed and we were unable to recover it.
00:37:28.391 [2024-09-29 16:45:28.808343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.391 [2024-09-29 16:45:28.808413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.391 qpair failed and we were unable to recover it.
00:37:28.391 [2024-09-29 16:45:28.808580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.391 [2024-09-29 16:45:28.808620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.391 qpair failed and we were unable to recover it.
00:37:28.391 [2024-09-29 16:45:28.808801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.391 [2024-09-29 16:45:28.808848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.391 qpair failed and we were unable to recover it.
00:37:28.391 [2024-09-29 16:45:28.809022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.391 [2024-09-29 16:45:28.809061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.391 qpair failed and we were unable to recover it.
00:37:28.391 [2024-09-29 16:45:28.809262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.391 [2024-09-29 16:45:28.809322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.391 qpair failed and we were unable to recover it.
00:37:28.392 [2024-09-29 16:45:28.809559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.392 [2024-09-29 16:45:28.809616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.392 qpair failed and we were unable to recover it.
00:37:28.392 [2024-09-29 16:45:28.809793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.392 [2024-09-29 16:45:28.809829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.392 qpair failed and we were unable to recover it.
00:37:28.392 [2024-09-29 16:45:28.809940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.392 [2024-09-29 16:45:28.809976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.392 qpair failed and we were unable to recover it.
00:37:28.392 [2024-09-29 16:45:28.810167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.392 [2024-09-29 16:45:28.810219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.392 qpair failed and we were unable to recover it.
00:37:28.392 [2024-09-29 16:45:28.810389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.392 [2024-09-29 16:45:28.810444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.392 qpair failed and we were unable to recover it.
00:37:28.392 [2024-09-29 16:45:28.810604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.392 [2024-09-29 16:45:28.810651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.392 qpair failed and we were unable to recover it.
00:37:28.392 [2024-09-29 16:45:28.810816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.392 [2024-09-29 16:45:28.810852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.392 qpair failed and we were unable to recover it.
00:37:28.392 [2024-09-29 16:45:28.811016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.392 [2024-09-29 16:45:28.811053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.392 qpair failed and we were unable to recover it.
00:37:28.392 [2024-09-29 16:45:28.811256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.392 [2024-09-29 16:45:28.811314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.392 qpair failed and we were unable to recover it.
00:37:28.392 [2024-09-29 16:45:28.811423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.392 [2024-09-29 16:45:28.811458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.392 qpair failed and we were unable to recover it.
00:37:28.392 [2024-09-29 16:45:28.811623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.392 [2024-09-29 16:45:28.811657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.392 qpair failed and we were unable to recover it.
00:37:28.392 [2024-09-29 16:45:28.811837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.392 [2024-09-29 16:45:28.811876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.392 qpair failed and we were unable to recover it.
00:37:28.392 [2024-09-29 16:45:28.812018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.392 [2024-09-29 16:45:28.812071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.392 qpair failed and we were unable to recover it.
00:37:28.392 [2024-09-29 16:45:28.812207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.392 [2024-09-29 16:45:28.812247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.392 qpair failed and we were unable to recover it.
00:37:28.392 [2024-09-29 16:45:28.812405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.392 [2024-09-29 16:45:28.812443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.392 qpair failed and we were unable to recover it.
00:37:28.392 [2024-09-29 16:45:28.812597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.392 [2024-09-29 16:45:28.812634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.392 qpair failed and we were unable to recover it.
00:37:28.392 [2024-09-29 16:45:28.812851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.392 [2024-09-29 16:45:28.812899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.392 qpair failed and we were unable to recover it.
00:37:28.392 [2024-09-29 16:45:28.813045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.392 [2024-09-29 16:45:28.813084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.392 qpair failed and we were unable to recover it.
00:37:28.392 [2024-09-29 16:45:28.813210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.392 [2024-09-29 16:45:28.813246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.392 qpair failed and we were unable to recover it.
00:37:28.392 [2024-09-29 16:45:28.813394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.392 [2024-09-29 16:45:28.813446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.392 qpair failed and we were unable to recover it.
00:37:28.392 [2024-09-29 16:45:28.813596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.392 [2024-09-29 16:45:28.813632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.392 qpair failed and we were unable to recover it.
00:37:28.392 [2024-09-29 16:45:28.813757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.392 [2024-09-29 16:45:28.813797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.392 qpair failed and we were unable to recover it.
00:37:28.392 [2024-09-29 16:45:28.813959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.392 [2024-09-29 16:45:28.814011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.392 qpair failed and we were unable to recover it.
00:37:28.392 [2024-09-29 16:45:28.814139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.392 [2024-09-29 16:45:28.814192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.392 qpair failed and we were unable to recover it.
00:37:28.392 [2024-09-29 16:45:28.814349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.392 [2024-09-29 16:45:28.814401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.392 qpair failed and we were unable to recover it.
00:37:28.392 [2024-09-29 16:45:28.814584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.392 [2024-09-29 16:45:28.814631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.392 qpair failed and we were unable to recover it.
00:37:28.392 [2024-09-29 16:45:28.814769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.392 [2024-09-29 16:45:28.814816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.392 qpair failed and we were unable to recover it.
00:37:28.392 [2024-09-29 16:45:28.814967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.392 [2024-09-29 16:45:28.815015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.392 qpair failed and we were unable to recover it.
00:37:28.392 [2024-09-29 16:45:28.815253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.392 [2024-09-29 16:45:28.815313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.392 qpair failed and we were unable to recover it.
00:37:28.392 [2024-09-29 16:45:28.815587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.392 [2024-09-29 16:45:28.815643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.392 qpair failed and we were unable to recover it.
00:37:28.392 [2024-09-29 16:45:28.815817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.392 [2024-09-29 16:45:28.815852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.392 qpair failed and we were unable to recover it.
00:37:28.392 [2024-09-29 16:45:28.816050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.392 [2024-09-29 16:45:28.816088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.392 qpair failed and we were unable to recover it.
00:37:28.392 [2024-09-29 16:45:28.816347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.392 [2024-09-29 16:45:28.816403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.392 qpair failed and we were unable to recover it.
00:37:28.392 [2024-09-29 16:45:28.816564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.392 [2024-09-29 16:45:28.816604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.393 qpair failed and we were unable to recover it.
00:37:28.393 [2024-09-29 16:45:28.816780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.393 [2024-09-29 16:45:28.816817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.393 qpair failed and we were unable to recover it.
00:37:28.393 [2024-09-29 16:45:28.816960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.393 [2024-09-29 16:45:28.817015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.393 qpair failed and we were unable to recover it.
00:37:28.393 [2024-09-29 16:45:28.817208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.393 [2024-09-29 16:45:28.817260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.393 qpair failed and we were unable to recover it.
00:37:28.393 [2024-09-29 16:45:28.817452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.393 [2024-09-29 16:45:28.817514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.393 qpair failed and we were unable to recover it.
00:37:28.393 [2024-09-29 16:45:28.817653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.393 [2024-09-29 16:45:28.817699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.393 qpair failed and we were unable to recover it.
00:37:28.393 [2024-09-29 16:45:28.817840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.393 [2024-09-29 16:45:28.817892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.393 qpair failed and we were unable to recover it.
00:37:28.393 [2024-09-29 16:45:28.818085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.393 [2024-09-29 16:45:28.818136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.393 qpair failed and we were unable to recover it.
00:37:28.393 [2024-09-29 16:45:28.818373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.393 [2024-09-29 16:45:28.818432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.393 qpair failed and we were unable to recover it.
00:37:28.393 [2024-09-29 16:45:28.818598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.393 [2024-09-29 16:45:28.818643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.393 qpair failed and we were unable to recover it.
00:37:28.393 [2024-09-29 16:45:28.818804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.393 [2024-09-29 16:45:28.818858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.393 qpair failed and we were unable to recover it.
00:37:28.393 [2024-09-29 16:45:28.819049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.393 [2024-09-29 16:45:28.819102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.393 qpair failed and we were unable to recover it.
00:37:28.393 [2024-09-29 16:45:28.819314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.393 [2024-09-29 16:45:28.819375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.393 qpair failed and we were unable to recover it.
00:37:28.393 [2024-09-29 16:45:28.819561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.393 [2024-09-29 16:45:28.819600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.393 qpair failed and we were unable to recover it.
00:37:28.393 [2024-09-29 16:45:28.819770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.393 [2024-09-29 16:45:28.819805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.393 qpair failed and we were unable to recover it.
00:37:28.393 [2024-09-29 16:45:28.819974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.393 [2024-09-29 16:45:28.820042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.393 qpair failed and we were unable to recover it.
00:37:28.393 [2024-09-29 16:45:28.820297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.393 [2024-09-29 16:45:28.820335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.393 qpair failed and we were unable to recover it.
00:37:28.393 [2024-09-29 16:45:28.820484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.393 [2024-09-29 16:45:28.820522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.393 qpair failed and we were unable to recover it.
00:37:28.393 [2024-09-29 16:45:28.820685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.393 [2024-09-29 16:45:28.820739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.393 qpair failed and we were unable to recover it.
00:37:28.393 [2024-09-29 16:45:28.820910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.393 [2024-09-29 16:45:28.820958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.393 qpair failed and we were unable to recover it.
00:37:28.393 [2024-09-29 16:45:28.821166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.393 [2024-09-29 16:45:28.821220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.393 qpair failed and we were unable to recover it.
00:37:28.393 [2024-09-29 16:45:28.821454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.393 [2024-09-29 16:45:28.821493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.393 qpair failed and we were unable to recover it.
00:37:28.393 [2024-09-29 16:45:28.821664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.393 [2024-09-29 16:45:28.821709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.393 qpair failed and we were unable to recover it.
00:37:28.393 [2024-09-29 16:45:28.821857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.393 [2024-09-29 16:45:28.821891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.393 qpair failed and we were unable to recover it.
00:37:28.393 [2024-09-29 16:45:28.822025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.393 [2024-09-29 16:45:28.822062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.393 qpair failed and we were unable to recover it.
00:37:28.393 [2024-09-29 16:45:28.822276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.393 [2024-09-29 16:45:28.822333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.393 qpair failed and we were unable to recover it.
00:37:28.393 [2024-09-29 16:45:28.822534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.393 [2024-09-29 16:45:28.822595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.393 qpair failed and we were unable to recover it.
00:37:28.393 [2024-09-29 16:45:28.822735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.393 [2024-09-29 16:45:28.822770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.393 qpair failed and we were unable to recover it.
00:37:28.393 [2024-09-29 16:45:28.822985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.393 [2024-09-29 16:45:28.823038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.393 qpair failed and we were unable to recover it.
00:37:28.393 [2024-09-29 16:45:28.823228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.393 [2024-09-29 16:45:28.823290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.393 qpair failed and we were unable to recover it.
00:37:28.393 [2024-09-29 16:45:28.823450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.393 [2024-09-29 16:45:28.823488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.393 qpair failed and we were unable to recover it.
00:37:28.393 [2024-09-29 16:45:28.823667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.393 [2024-09-29 16:45:28.823725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.393 qpair failed and we were unable to recover it.
00:37:28.393 [2024-09-29 16:45:28.823870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.393 [2024-09-29 16:45:28.823918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.393 qpair failed and we were unable to recover it.
00:37:28.393 [2024-09-29 16:45:28.824086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.393 [2024-09-29 16:45:28.824155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.393 qpair failed and we were unable to recover it.
00:37:28.393 [2024-09-29 16:45:28.824427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.393 [2024-09-29 16:45:28.824485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.393 qpair failed and we were unable to recover it.
00:37:28.393 [2024-09-29 16:45:28.824642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.393 [2024-09-29 16:45:28.824689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.393 qpair failed and we were unable to recover it.
00:37:28.393 [2024-09-29 16:45:28.824850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.393 [2024-09-29 16:45:28.824884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.393 qpair failed and we were unable to recover it.
00:37:28.393 [2024-09-29 16:45:28.825018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.393 [2024-09-29 16:45:28.825055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.393 qpair failed and we were unable to recover it.
00:37:28.393 [2024-09-29 16:45:28.825266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.393 [2024-09-29 16:45:28.825330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.393 qpair failed and we were unable to recover it.
00:37:28.393 [2024-09-29 16:45:28.825555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.394 [2024-09-29 16:45:28.825612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.394 qpair failed and we were unable to recover it.
00:37:28.394 [2024-09-29 16:45:28.825752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.394 [2024-09-29 16:45:28.825804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.394 qpair failed and we were unable to recover it.
00:37:28.394 [2024-09-29 16:45:28.826005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.394 [2024-09-29 16:45:28.826057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.394 qpair failed and we were unable to recover it.
00:37:28.394 [2024-09-29 16:45:28.826280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.394 [2024-09-29 16:45:28.826339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.394 qpair failed and we were unable to recover it.
00:37:28.394 [2024-09-29 16:45:28.826514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.394 [2024-09-29 16:45:28.826578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.394 qpair failed and we were unable to recover it.
00:37:28.394 [2024-09-29 16:45:28.826726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.394 [2024-09-29 16:45:28.826760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.394 qpair failed and we were unable to recover it.
00:37:28.394 [2024-09-29 16:45:28.826869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.394 [2024-09-29 16:45:28.826901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.394 qpair failed and we were unable to recover it.
00:37:28.394 [2024-09-29 16:45:28.827054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.394 [2024-09-29 16:45:28.827088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.394 qpair failed and we were unable to recover it.
00:37:28.394 [2024-09-29 16:45:28.827264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.394 [2024-09-29 16:45:28.827301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.394 qpair failed and we were unable to recover it.
00:37:28.394 [2024-09-29 16:45:28.827460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.394 [2024-09-29 16:45:28.827498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.394 qpair failed and we were unable to recover it.
00:37:28.394 [2024-09-29 16:45:28.827654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.394 [2024-09-29 16:45:28.827692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.394 qpair failed and we were unable to recover it.
00:37:28.394 [2024-09-29 16:45:28.827810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.394 [2024-09-29 16:45:28.827842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.394 qpair failed and we were unable to recover it.
00:37:28.394 [2024-09-29 16:45:28.828042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.394 [2024-09-29 16:45:28.828090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.394 qpair failed and we were unable to recover it.
00:37:28.394 [2024-09-29 16:45:28.828241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.394 [2024-09-29 16:45:28.828299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.394 qpair failed and we were unable to recover it.
00:37:28.394 [2024-09-29 16:45:28.828463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.394 [2024-09-29 16:45:28.828516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.394 qpair failed and we were unable to recover it.
00:37:28.394 [2024-09-29 16:45:28.828667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.394 [2024-09-29 16:45:28.828714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.394 qpair failed and we were unable to recover it.
00:37:28.394 [2024-09-29 16:45:28.828866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.394 [2024-09-29 16:45:28.828905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.394 qpair failed and we were unable to recover it.
00:37:28.394 [2024-09-29 16:45:28.829095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.394 [2024-09-29 16:45:28.829146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.394 qpair failed and we were unable to recover it.
00:37:28.394 [2024-09-29 16:45:28.829363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.394 [2024-09-29 16:45:28.829423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.394 qpair failed and we were unable to recover it.
00:37:28.394 [2024-09-29 16:45:28.829577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.394 [2024-09-29 16:45:28.829614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.394 qpair failed and we were unable to recover it.
00:37:28.394 [2024-09-29 16:45:28.829780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.394 [2024-09-29 16:45:28.829813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.394 qpair failed and we were unable to recover it.
00:37:28.394 [2024-09-29 16:45:28.829951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.394 [2024-09-29 16:45:28.829988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.394 qpair failed and we were unable to recover it.
00:37:28.394 [2024-09-29 16:45:28.830247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.394 [2024-09-29 16:45:28.830305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.394 qpair failed and we were unable to recover it.
00:37:28.394 [2024-09-29 16:45:28.830476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.394 [2024-09-29 16:45:28.830536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.394 qpair failed and we were unable to recover it.
00:37:28.394 [2024-09-29 16:45:28.830678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.394 [2024-09-29 16:45:28.830715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.394 qpair failed and we were unable to recover it.
00:37:28.394 [2024-09-29 16:45:28.830849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.394 [2024-09-29 16:45:28.830896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.394 qpair failed and we were unable to recover it.
00:37:28.394 [2024-09-29 16:45:28.831021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.394 [2024-09-29 16:45:28.831055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.394 qpair failed and we were unable to recover it.
00:37:28.394 [2024-09-29 16:45:28.831224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.394 [2024-09-29 16:45:28.831259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.394 qpair failed and we were unable to recover it.
00:37:28.394 [2024-09-29 16:45:28.831500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.394 [2024-09-29 16:45:28.831533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.394 qpair failed and we were unable to recover it.
00:37:28.394 [2024-09-29 16:45:28.831666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.394 [2024-09-29 16:45:28.831704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.394 qpair failed and we were unable to recover it.
00:37:28.394 [2024-09-29 16:45:28.831834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.394 [2024-09-29 16:45:28.831869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.394 qpair failed and we were unable to recover it.
00:37:28.394 [2024-09-29 16:45:28.832009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.394 [2024-09-29 16:45:28.832045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.394 qpair failed and we were unable to recover it.
00:37:28.394 [2024-09-29 16:45:28.832212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.394 [2024-09-29 16:45:28.832250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.394 qpair failed and we were unable to recover it.
00:37:28.394 [2024-09-29 16:45:28.832398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.394 [2024-09-29 16:45:28.832434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.394 qpair failed and we were unable to recover it.
00:37:28.394 [2024-09-29 16:45:28.832609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.394 [2024-09-29 16:45:28.832657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.394 qpair failed and we were unable to recover it.
00:37:28.394 [2024-09-29 16:45:28.832797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.394 [2024-09-29 16:45:28.832833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.394 qpair failed and we were unable to recover it.
00:37:28.394 [2024-09-29 16:45:28.832995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.394 [2024-09-29 16:45:28.833048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.394 qpair failed and we were unable to recover it.
00:37:28.394 [2024-09-29 16:45:28.833209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.394 [2024-09-29 16:45:28.833269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.394 qpair failed and we were unable to recover it.
00:37:28.394 [2024-09-29 16:45:28.833397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.394 [2024-09-29 16:45:28.833435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.395 qpair failed and we were unable to recover it.
00:37:28.395 [2024-09-29 16:45:28.833600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.395 [2024-09-29 16:45:28.833634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.395 qpair failed and we were unable to recover it.
00:37:28.395 [2024-09-29 16:45:28.833806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.395 [2024-09-29 16:45:28.833844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.395 qpair failed and we were unable to recover it.
00:37:28.395 [2024-09-29 16:45:28.833997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.395 [2024-09-29 16:45:28.834033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.395 qpair failed and we were unable to recover it.
00:37:28.395 [2024-09-29 16:45:28.834226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.395 [2024-09-29 16:45:28.834284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.395 qpair failed and we were unable to recover it.
00:37:28.395 [2024-09-29 16:45:28.834420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.395 [2024-09-29 16:45:28.834455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.395 qpair failed and we were unable to recover it.
00:37:28.395 [2024-09-29 16:45:28.834625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.395 [2024-09-29 16:45:28.834657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.395 qpair failed and we were unable to recover it.
00:37:28.395 [2024-09-29 16:45:28.834814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.395 [2024-09-29 16:45:28.834850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.395 qpair failed and we were unable to recover it.
00:37:28.395 [2024-09-29 16:45:28.835112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.395 [2024-09-29 16:45:28.835150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.395 qpair failed and we were unable to recover it.
00:37:28.395 [2024-09-29 16:45:28.835306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.395 [2024-09-29 16:45:28.835346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.395 qpair failed and we were unable to recover it.
00:37:28.395 [2024-09-29 16:45:28.835559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.395 [2024-09-29 16:45:28.835593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.395 qpair failed and we were unable to recover it.
00:37:28.395 [2024-09-29 16:45:28.835734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.395 [2024-09-29 16:45:28.835770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.395 qpair failed and we were unable to recover it.
00:37:28.395 [2024-09-29 16:45:28.835890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.395 [2024-09-29 16:45:28.835924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.395 qpair failed and we were unable to recover it.
00:37:28.395 [2024-09-29 16:45:28.836096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.395 [2024-09-29 16:45:28.836148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.395 qpair failed and we were unable to recover it.
00:37:28.395 [2024-09-29 16:45:28.836315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.395 [2024-09-29 16:45:28.836365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.395 qpair failed and we were unable to recover it.
00:37:28.395 [2024-09-29 16:45:28.836483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.395 [2024-09-29 16:45:28.836517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.395 qpair failed and we were unable to recover it.
00:37:28.395 [2024-09-29 16:45:28.836686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.395 [2024-09-29 16:45:28.836734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.395 qpair failed and we were unable to recover it.
00:37:28.395 [2024-09-29 16:45:28.836867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.395 [2024-09-29 16:45:28.836904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.395 qpair failed and we were unable to recover it.
00:37:28.395 [2024-09-29 16:45:28.837069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.395 [2024-09-29 16:45:28.837113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.395 qpair failed and we were unable to recover it.
00:37:28.395 [2024-09-29 16:45:28.837247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.395 [2024-09-29 16:45:28.837301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.395 qpair failed and we were unable to recover it.
00:37:28.395 [2024-09-29 16:45:28.837487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.395 [2024-09-29 16:45:28.837546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.395 qpair failed and we were unable to recover it.
00:37:28.395 [2024-09-29 16:45:28.837690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.395 [2024-09-29 16:45:28.837724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.395 qpair failed and we were unable to recover it.
00:37:28.395 [2024-09-29 16:45:28.837868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.395 [2024-09-29 16:45:28.837901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.395 qpair failed and we were unable to recover it.
00:37:28.395 [2024-09-29 16:45:28.838038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.395 [2024-09-29 16:45:28.838071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.395 qpair failed and we were unable to recover it.
00:37:28.395 [2024-09-29 16:45:28.838190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.395 [2024-09-29 16:45:28.838225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.395 qpair failed and we were unable to recover it.
00:37:28.395 [2024-09-29 16:45:28.838412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.395 [2024-09-29 16:45:28.838450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.395 qpair failed and we were unable to recover it.
00:37:28.395 [2024-09-29 16:45:28.838608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.395 [2024-09-29 16:45:28.838657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.395 qpair failed and we were unable to recover it.
00:37:28.395 [2024-09-29 16:45:28.838796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.395 [2024-09-29 16:45:28.838833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.395 qpair failed and we were unable to recover it.
00:37:28.395 [2024-09-29 16:45:28.838968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.395 [2024-09-29 16:45:28.839005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.395 qpair failed and we were unable to recover it.
00:37:28.395 [2024-09-29 16:45:28.839260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.395 [2024-09-29 16:45:28.839317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.395 qpair failed and we were unable to recover it.
00:37:28.395 [2024-09-29 16:45:28.839477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.395 [2024-09-29 16:45:28.839521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.395 qpair failed and we were unable to recover it.
00:37:28.395 [2024-09-29 16:45:28.839701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.395 [2024-09-29 16:45:28.839748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.395 qpair failed and we were unable to recover it.
00:37:28.395 [2024-09-29 16:45:28.839930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.395 [2024-09-29 16:45:28.839980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.395 qpair failed and we were unable to recover it.
00:37:28.395 [2024-09-29 16:45:28.840144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.395 [2024-09-29 16:45:28.840184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.395 qpair failed and we were unable to recover it.
00:37:28.395 [2024-09-29 16:45:28.840314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.395 [2024-09-29 16:45:28.840351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.395 qpair failed and we were unable to recover it.
00:37:28.395 [2024-09-29 16:45:28.840499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.395 [2024-09-29 16:45:28.840552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.395 qpair failed and we were unable to recover it.
00:37:28.395 [2024-09-29 16:45:28.840725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.395 [2024-09-29 16:45:28.840759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.395 qpair failed and we were unable to recover it.
00:37:28.395 [2024-09-29 16:45:28.840895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.395 [2024-09-29 16:45:28.840927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.395 qpair failed and we were unable to recover it.
00:37:28.395 [2024-09-29 16:45:28.841109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.395 [2024-09-29 16:45:28.841142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.395 qpair failed and we were unable to recover it.
00:37:28.396 [2024-09-29 16:45:28.841310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.396 [2024-09-29 16:45:28.841348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.396 qpair failed and we were unable to recover it.
00:37:28.396 [2024-09-29 16:45:28.841472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.396 [2024-09-29 16:45:28.841508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.396 qpair failed and we were unable to recover it.
00:37:28.396 [2024-09-29 16:45:28.841665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.396 [2024-09-29 16:45:28.841706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.396 qpair failed and we were unable to recover it.
00:37:28.396 [2024-09-29 16:45:28.841815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.396 [2024-09-29 16:45:28.841849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.396 qpair failed and we were unable to recover it.
00:37:28.396 [2024-09-29 16:45:28.841993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.396 [2024-09-29 16:45:28.842041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.396 qpair failed and we were unable to recover it.
00:37:28.396 [2024-09-29 16:45:28.842316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.396 [2024-09-29 16:45:28.842385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.396 qpair failed and we were unable to recover it.
00:37:28.396 [2024-09-29 16:45:28.842535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.396 [2024-09-29 16:45:28.842571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.396 qpair failed and we were unable to recover it.
00:37:28.396 [2024-09-29 16:45:28.842712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.396 [2024-09-29 16:45:28.842747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.396 qpair failed and we were unable to recover it.
00:37:28.396 [2024-09-29 16:45:28.842858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.396 [2024-09-29 16:45:28.842893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.396 qpair failed and we were unable to recover it.
00:37:28.396 [2024-09-29 16:45:28.843061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.396 [2024-09-29 16:45:28.843095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.396 qpair failed and we were unable to recover it.
00:37:28.396 [2024-09-29 16:45:28.843332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.396 [2024-09-29 16:45:28.843390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.396 qpair failed and we were unable to recover it.
00:37:28.396 [2024-09-29 16:45:28.843541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.396 [2024-09-29 16:45:28.843578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.396 qpair failed and we were unable to recover it.
00:37:28.396 [2024-09-29 16:45:28.843750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.396 [2024-09-29 16:45:28.843786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.396 qpair failed and we were unable to recover it.
00:37:28.396 [2024-09-29 16:45:28.843944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.396 [2024-09-29 16:45:28.844014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.396 qpair failed and we were unable to recover it.
00:37:28.396 [2024-09-29 16:45:28.844163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.396 [2024-09-29 16:45:28.844231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.396 qpair failed and we were unable to recover it.
00:37:28.396 [2024-09-29 16:45:28.844385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.396 [2024-09-29 16:45:28.844421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.396 qpair failed and we were unable to recover it.
00:37:28.396 [2024-09-29 16:45:28.844590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.396 [2024-09-29 16:45:28.844623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.396 qpair failed and we were unable to recover it.
00:37:28.396 [2024-09-29 16:45:28.844775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.396 [2024-09-29 16:45:28.844811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.396 qpair failed and we were unable to recover it.
00:37:28.396 [2024-09-29 16:45:28.845000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.396 [2024-09-29 16:45:28.845052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.396 qpair failed and we were unable to recover it.
00:37:28.396 [2024-09-29 16:45:28.845164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.396 [2024-09-29 16:45:28.845203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.396 qpair failed and we were unable to recover it.
00:37:28.396 [2024-09-29 16:45:28.845371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.396 [2024-09-29 16:45:28.845423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.396 qpair failed and we were unable to recover it.
00:37:28.396 [2024-09-29 16:45:28.845536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.396 [2024-09-29 16:45:28.845570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.396 qpair failed and we were unable to recover it.
00:37:28.396 [2024-09-29 16:45:28.845692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.396 [2024-09-29 16:45:28.845748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.396 qpair failed and we were unable to recover it.
00:37:28.396 [2024-09-29 16:45:28.845946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.396 [2024-09-29 16:45:28.845984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.396 qpair failed and we were unable to recover it.
00:37:28.396 [2024-09-29 16:45:28.846107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.396 [2024-09-29 16:45:28.846144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.396 qpair failed and we were unable to recover it.
00:37:28.396 [2024-09-29 16:45:28.846360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.396 [2024-09-29 16:45:28.846418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.396 qpair failed and we were unable to recover it.
00:37:28.396 [2024-09-29 16:45:28.846571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.396 [2024-09-29 16:45:28.846608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.396 qpair failed and we were unable to recover it.
00:37:28.396 [2024-09-29 16:45:28.846786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.396 [2024-09-29 16:45:28.846820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.396 qpair failed and we were unable to recover it.
00:37:28.396 [2024-09-29 16:45:28.846987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.396 [2024-09-29 16:45:28.847040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.396 qpair failed and we were unable to recover it.
00:37:28.396 [2024-09-29 16:45:28.847154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.396 [2024-09-29 16:45:28.847189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.396 qpair failed and we were unable to recover it.
00:37:28.396 [2024-09-29 16:45:28.847380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.396 [2024-09-29 16:45:28.847438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.396 qpair failed and we were unable to recover it.
00:37:28.396 [2024-09-29 16:45:28.847575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.396 [2024-09-29 16:45:28.847609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.396 qpair failed and we were unable to recover it.
00:37:28.396 [2024-09-29 16:45:28.847763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.396 [2024-09-29 16:45:28.847814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.396 qpair failed and we were unable to recover it.
00:37:28.396 [2024-09-29 16:45:28.847971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.396 [2024-09-29 16:45:28.848025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.396 qpair failed and we were unable to recover it.
00:37:28.396 [2024-09-29 16:45:28.848181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.397 [2024-09-29 16:45:28.848219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.397 qpair failed and we were unable to recover it.
00:37:28.397 [2024-09-29 16:45:28.848470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.397 [2024-09-29 16:45:28.848529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.397 qpair failed and we were unable to recover it.
00:37:28.397 [2024-09-29 16:45:28.848684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.397 [2024-09-29 16:45:28.848718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.397 qpair failed and we were unable to recover it.
00:37:28.397 [2024-09-29 16:45:28.848862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.397 [2024-09-29 16:45:28.848898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.397 qpair failed and we were unable to recover it.
00:37:28.397 [2024-09-29 16:45:28.849039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.397 [2024-09-29 16:45:28.849073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.397 qpair failed and we were unable to recover it.
00:37:28.397 [2024-09-29 16:45:28.849201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.397 [2024-09-29 16:45:28.849236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.397 qpair failed and we were unable to recover it.
00:37:28.397 [2024-09-29 16:45:28.849417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.397 [2024-09-29 16:45:28.849456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.397 qpair failed and we were unable to recover it.
00:37:28.397 [2024-09-29 16:45:28.849611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.397 [2024-09-29 16:45:28.849649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.397 qpair failed and we were unable to recover it.
00:37:28.397 [2024-09-29 16:45:28.849802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.397 [2024-09-29 16:45:28.849836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.397 qpair failed and we were unable to recover it.
00:37:28.397 [2024-09-29 16:45:28.850001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.397 [2024-09-29 16:45:28.850034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.397 qpair failed and we were unable to recover it.
00:37:28.397 [2024-09-29 16:45:28.850161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.397 [2024-09-29 16:45:28.850198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.397 qpair failed and we were unable to recover it.
00:37:28.397 [2024-09-29 16:45:28.850369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.397 [2024-09-29 16:45:28.850407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.397 qpair failed and we were unable to recover it.
00:37:28.397 [2024-09-29 16:45:28.850592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.397 [2024-09-29 16:45:28.850628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.397 qpair failed and we were unable to recover it.
00:37:28.397 [2024-09-29 16:45:28.850808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.397 [2024-09-29 16:45:28.850842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.397 qpair failed and we were unable to recover it.
00:37:28.397 [2024-09-29 16:45:28.851027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.397 [2024-09-29 16:45:28.851064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.397 qpair failed and we were unable to recover it.
00:37:28.397 [2024-09-29 16:45:28.851212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.397 [2024-09-29 16:45:28.851248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.397 qpair failed and we were unable to recover it.
00:37:28.397 [2024-09-29 16:45:28.851399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.397 [2024-09-29 16:45:28.851436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.397 qpair failed and we were unable to recover it.
00:37:28.397 [2024-09-29 16:45:28.851625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.397 [2024-09-29 16:45:28.851662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.397 qpair failed and we were unable to recover it.
00:37:28.397 [2024-09-29 16:45:28.851861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.397 [2024-09-29 16:45:28.851908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.397 qpair failed and we were unable to recover it.
00:37:28.397 [2024-09-29 16:45:28.852102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.397 [2024-09-29 16:45:28.852170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.397 qpair failed and we were unable to recover it.
00:37:28.397 [2024-09-29 16:45:28.852301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.397 [2024-09-29 16:45:28.852353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.397 qpair failed and we were unable to recover it.
00:37:28.397 [2024-09-29 16:45:28.852508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.397 [2024-09-29 16:45:28.852542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.397 qpair failed and we were unable to recover it.
00:37:28.397 [2024-09-29 16:45:28.852683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.397 [2024-09-29 16:45:28.852745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.397 qpair failed and we were unable to recover it.
00:37:28.397 [2024-09-29 16:45:28.852861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.397 [2024-09-29 16:45:28.852898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.397 qpair failed and we were unable to recover it. 00:37:28.397 [2024-09-29 16:45:28.853069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.397 [2024-09-29 16:45:28.853114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.397 qpair failed and we were unable to recover it. 00:37:28.397 [2024-09-29 16:45:28.853264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.397 [2024-09-29 16:45:28.853345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.397 qpair failed and we were unable to recover it. 00:37:28.397 [2024-09-29 16:45:28.853517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.397 [2024-09-29 16:45:28.853556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.397 qpair failed and we were unable to recover it. 00:37:28.397 [2024-09-29 16:45:28.853732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.397 [2024-09-29 16:45:28.853766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.397 qpair failed and we were unable to recover it. 
00:37:28.397 [2024-09-29 16:45:28.853931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.397 [2024-09-29 16:45:28.853969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.397 qpair failed and we were unable to recover it. 00:37:28.397 [2024-09-29 16:45:28.854125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.397 [2024-09-29 16:45:28.854162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.397 qpair failed and we were unable to recover it. 00:37:28.397 [2024-09-29 16:45:28.854397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.397 [2024-09-29 16:45:28.854460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.397 qpair failed and we were unable to recover it. 00:37:28.397 [2024-09-29 16:45:28.854595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.397 [2024-09-29 16:45:28.854628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.397 qpair failed and we were unable to recover it. 00:37:28.397 [2024-09-29 16:45:28.854801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.397 [2024-09-29 16:45:28.854839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.397 qpair failed and we were unable to recover it. 
00:37:28.397 [2024-09-29 16:45:28.854963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.397 [2024-09-29 16:45:28.855000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.397 qpair failed and we were unable to recover it. 00:37:28.397 [2024-09-29 16:45:28.855165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.397 [2024-09-29 16:45:28.855202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.397 qpair failed and we were unable to recover it. 00:37:28.397 [2024-09-29 16:45:28.855325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.397 [2024-09-29 16:45:28.855362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.397 qpair failed and we were unable to recover it. 00:37:28.397 [2024-09-29 16:45:28.855525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.397 [2024-09-29 16:45:28.855564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.397 qpair failed and we were unable to recover it. 00:37:28.397 [2024-09-29 16:45:28.855736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.397 [2024-09-29 16:45:28.855774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.397 qpair failed and we were unable to recover it. 
00:37:28.397 [2024-09-29 16:45:28.855973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.397 [2024-09-29 16:45:28.856026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.397 qpair failed and we were unable to recover it. 00:37:28.398 [2024-09-29 16:45:28.856272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.398 [2024-09-29 16:45:28.856332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.398 qpair failed and we were unable to recover it. 00:37:28.398 [2024-09-29 16:45:28.856509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.398 [2024-09-29 16:45:28.856567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.398 qpair failed and we were unable to recover it. 00:37:28.398 [2024-09-29 16:45:28.856743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.398 [2024-09-29 16:45:28.856777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.398 qpair failed and we were unable to recover it. 00:37:28.398 [2024-09-29 16:45:28.856912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.398 [2024-09-29 16:45:28.856949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.398 qpair failed and we were unable to recover it. 
00:37:28.398 [2024-09-29 16:45:28.857155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.398 [2024-09-29 16:45:28.857221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.398 qpair failed and we were unable to recover it. 00:37:28.398 [2024-09-29 16:45:28.857352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.398 [2024-09-29 16:45:28.857389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.398 qpair failed and we were unable to recover it. 00:37:28.398 [2024-09-29 16:45:28.857527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.398 [2024-09-29 16:45:28.857563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.398 qpair failed and we were unable to recover it. 00:37:28.398 [2024-09-29 16:45:28.857722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.398 [2024-09-29 16:45:28.857756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.398 qpair failed and we were unable to recover it. 00:37:28.398 [2024-09-29 16:45:28.857896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.398 [2024-09-29 16:45:28.857948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.398 qpair failed and we were unable to recover it. 
00:37:28.398 [2024-09-29 16:45:28.858107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.398 [2024-09-29 16:45:28.858157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.398 qpair failed and we were unable to recover it. 00:37:28.398 [2024-09-29 16:45:28.858365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.398 [2024-09-29 16:45:28.858427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.398 qpair failed and we were unable to recover it. 00:37:28.398 [2024-09-29 16:45:28.858582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.398 [2024-09-29 16:45:28.858617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.398 qpair failed and we were unable to recover it. 00:37:28.398 [2024-09-29 16:45:28.858789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.398 [2024-09-29 16:45:28.858837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.398 qpair failed and we were unable to recover it. 00:37:28.398 [2024-09-29 16:45:28.859013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.398 [2024-09-29 16:45:28.859061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.398 qpair failed and we were unable to recover it. 
00:37:28.398 [2024-09-29 16:45:28.859247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.398 [2024-09-29 16:45:28.859288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.398 qpair failed and we were unable to recover it. 00:37:28.398 [2024-09-29 16:45:28.859411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.398 [2024-09-29 16:45:28.859449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.398 qpair failed and we were unable to recover it. 00:37:28.398 [2024-09-29 16:45:28.859603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.398 [2024-09-29 16:45:28.859641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.398 qpair failed and we were unable to recover it. 00:37:28.398 [2024-09-29 16:45:28.859809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.398 [2024-09-29 16:45:28.859845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.398 qpair failed and we were unable to recover it. 00:37:28.398 [2024-09-29 16:45:28.859985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.398 [2024-09-29 16:45:28.860022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.398 qpair failed and we were unable to recover it. 
00:37:28.398 [2024-09-29 16:45:28.860225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.398 [2024-09-29 16:45:28.860281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.398 qpair failed and we were unable to recover it. 00:37:28.398 [2024-09-29 16:45:28.860406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.398 [2024-09-29 16:45:28.860444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.398 qpair failed and we were unable to recover it. 00:37:28.398 [2024-09-29 16:45:28.860609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.398 [2024-09-29 16:45:28.860644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.398 qpair failed and we were unable to recover it. 00:37:28.398 [2024-09-29 16:45:28.860820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.398 [2024-09-29 16:45:28.860868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.398 qpair failed and we were unable to recover it. 00:37:28.398 [2024-09-29 16:45:28.861006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.398 [2024-09-29 16:45:28.861045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.398 qpair failed and we were unable to recover it. 
00:37:28.398 [2024-09-29 16:45:28.861227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.398 [2024-09-29 16:45:28.861295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.398 qpair failed and we were unable to recover it. 00:37:28.398 [2024-09-29 16:45:28.861631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.398 [2024-09-29 16:45:28.861701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.398 qpair failed and we were unable to recover it. 00:37:28.398 [2024-09-29 16:45:28.861865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.398 [2024-09-29 16:45:28.861904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.398 qpair failed and we were unable to recover it. 00:37:28.398 [2024-09-29 16:45:28.862136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.398 [2024-09-29 16:45:28.862185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.398 qpair failed and we were unable to recover it. 00:37:28.398 [2024-09-29 16:45:28.862399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.398 [2024-09-29 16:45:28.862471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.398 qpair failed and we were unable to recover it. 
00:37:28.398 [2024-09-29 16:45:28.862667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.398 [2024-09-29 16:45:28.862730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.398 qpair failed and we were unable to recover it. 00:37:28.398 [2024-09-29 16:45:28.862874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.398 [2024-09-29 16:45:28.862910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.398 qpair failed and we were unable to recover it. 00:37:28.398 [2024-09-29 16:45:28.863060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.398 [2024-09-29 16:45:28.863111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.398 qpair failed and we were unable to recover it. 00:37:28.398 [2024-09-29 16:45:28.863266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.398 [2024-09-29 16:45:28.863304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.398 qpair failed and we were unable to recover it. 00:37:28.398 [2024-09-29 16:45:28.863461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.398 [2024-09-29 16:45:28.863499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.398 qpair failed and we were unable to recover it. 
00:37:28.398 [2024-09-29 16:45:28.863695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.398 [2024-09-29 16:45:28.863734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.398 qpair failed and we were unable to recover it. 00:37:28.398 [2024-09-29 16:45:28.863850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.398 [2024-09-29 16:45:28.863884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.398 qpair failed and we were unable to recover it. 00:37:28.398 [2024-09-29 16:45:28.864061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.398 [2024-09-29 16:45:28.864095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.398 qpair failed and we were unable to recover it. 00:37:28.398 [2024-09-29 16:45:28.864247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.398 [2024-09-29 16:45:28.864326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.398 qpair failed and we were unable to recover it. 00:37:28.398 [2024-09-29 16:45:28.864469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.399 [2024-09-29 16:45:28.864507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.399 qpair failed and we were unable to recover it. 
00:37:28.399 [2024-09-29 16:45:28.864645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.399 [2024-09-29 16:45:28.864691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.399 qpair failed and we were unable to recover it. 00:37:28.399 [2024-09-29 16:45:28.864912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.399 [2024-09-29 16:45:28.864959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.399 qpair failed and we were unable to recover it. 00:37:28.399 [2024-09-29 16:45:28.865133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.399 [2024-09-29 16:45:28.865186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.399 qpair failed and we were unable to recover it. 00:37:28.399 [2024-09-29 16:45:28.865351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.399 [2024-09-29 16:45:28.865409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.399 qpair failed and we were unable to recover it. 00:37:28.399 [2024-09-29 16:45:28.865550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.399 [2024-09-29 16:45:28.865584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.399 qpair failed and we were unable to recover it. 
00:37:28.399 [2024-09-29 16:45:28.865749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.399 [2024-09-29 16:45:28.865784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.399 qpair failed and we were unable to recover it. 00:37:28.399 [2024-09-29 16:45:28.865937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.399 [2024-09-29 16:45:28.866006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.399 qpair failed and we were unable to recover it. 00:37:28.399 [2024-09-29 16:45:28.866263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.399 [2024-09-29 16:45:28.866302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.399 qpair failed and we were unable to recover it. 00:37:28.399 [2024-09-29 16:45:28.866428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.399 [2024-09-29 16:45:28.866466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.399 qpair failed and we were unable to recover it. 00:37:28.399 [2024-09-29 16:45:28.866633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.399 [2024-09-29 16:45:28.866666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.399 qpair failed and we were unable to recover it. 
00:37:28.399 [2024-09-29 16:45:28.866844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.399 [2024-09-29 16:45:28.866877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.399 qpair failed and we were unable to recover it. 00:37:28.399 [2024-09-29 16:45:28.866988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.399 [2024-09-29 16:45:28.867040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.399 qpair failed and we were unable to recover it. 00:37:28.399 [2024-09-29 16:45:28.867178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.399 [2024-09-29 16:45:28.867216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.399 qpair failed and we were unable to recover it. 00:37:28.399 [2024-09-29 16:45:28.867399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.399 [2024-09-29 16:45:28.867436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.399 qpair failed and we were unable to recover it. 00:37:28.399 [2024-09-29 16:45:28.867623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.399 [2024-09-29 16:45:28.867660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.399 qpair failed and we were unable to recover it. 
00:37:28.399 [2024-09-29 16:45:28.867796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.399 [2024-09-29 16:45:28.867830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.399 qpair failed and we were unable to recover it. 00:37:28.399 [2024-09-29 16:45:28.868032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.399 [2024-09-29 16:45:28.868069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.399 qpair failed and we were unable to recover it. 00:37:28.399 [2024-09-29 16:45:28.868226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.399 [2024-09-29 16:45:28.868263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.399 qpair failed and we were unable to recover it. 00:37:28.399 [2024-09-29 16:45:28.868422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.399 [2024-09-29 16:45:28.868459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.399 qpair failed and we were unable to recover it. 00:37:28.399 [2024-09-29 16:45:28.868595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.399 [2024-09-29 16:45:28.868628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.399 qpair failed and we were unable to recover it. 
00:37:28.399 [2024-09-29 16:45:28.868761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.399 [2024-09-29 16:45:28.868794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.399 qpair failed and we were unable to recover it. 00:37:28.399 [2024-09-29 16:45:28.868948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.399 [2024-09-29 16:45:28.868985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.399 qpair failed and we were unable to recover it. 00:37:28.399 [2024-09-29 16:45:28.869115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.399 [2024-09-29 16:45:28.869152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.399 qpair failed and we were unable to recover it. 00:37:28.399 [2024-09-29 16:45:28.869285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.399 [2024-09-29 16:45:28.869321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.399 qpair failed and we were unable to recover it. 00:37:28.399 [2024-09-29 16:45:28.869444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.399 [2024-09-29 16:45:28.869481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.399 qpair failed and we were unable to recover it. 
00:37:28.399 [2024-09-29 16:45:28.869662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.399 [2024-09-29 16:45:28.869719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.399 qpair failed and we were unable to recover it. 00:37:28.399 [2024-09-29 16:45:28.869875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.399 [2024-09-29 16:45:28.869913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.399 qpair failed and we were unable to recover it. 00:37:28.399 [2024-09-29 16:45:28.870024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.399 [2024-09-29 16:45:28.870065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.399 qpair failed and we were unable to recover it. 00:37:28.399 [2024-09-29 16:45:28.870227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.399 [2024-09-29 16:45:28.870279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.399 qpair failed and we were unable to recover it. 00:37:28.399 [2024-09-29 16:45:28.870434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.399 [2024-09-29 16:45:28.870487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.399 qpair failed and we were unable to recover it. 
00:37:28.399 [2024-09-29 16:45:28.870632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.399 [2024-09-29 16:45:28.870667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.399 qpair failed and we were unable to recover it. 00:37:28.399 [2024-09-29 16:45:28.870876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.399 [2024-09-29 16:45:28.870927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.399 qpair failed and we were unable to recover it. 00:37:28.399 [2024-09-29 16:45:28.871062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.399 [2024-09-29 16:45:28.871112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.399 qpair failed and we were unable to recover it. 00:37:28.399 [2024-09-29 16:45:28.871224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.399 [2024-09-29 16:45:28.871258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.399 qpair failed and we were unable to recover it. 00:37:28.399 [2024-09-29 16:45:28.871396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.399 [2024-09-29 16:45:28.871430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.399 qpair failed and we were unable to recover it. 
00:37:28.399 [2024-09-29 16:45:28.871545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.399 [2024-09-29 16:45:28.871578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.399 qpair failed and we were unable to recover it. 00:37:28.399 [2024-09-29 16:45:28.871710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.399 [2024-09-29 16:45:28.871745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.399 qpair failed and we were unable to recover it. 00:37:28.399 [2024-09-29 16:45:28.871906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.400 [2024-09-29 16:45:28.871953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.400 qpair failed and we were unable to recover it. 00:37:28.400 [2024-09-29 16:45:28.872131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.400 [2024-09-29 16:45:28.872166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.400 qpair failed and we were unable to recover it. 00:37:28.400 [2024-09-29 16:45:28.872284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.400 [2024-09-29 16:45:28.872317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.400 qpair failed and we were unable to recover it. 
00:37:28.400 [2024-09-29 16:45:28.872453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.400 [2024-09-29 16:45:28.872486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.400 qpair failed and we were unable to recover it. 00:37:28.400 [2024-09-29 16:45:28.872608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.400 [2024-09-29 16:45:28.872640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.400 qpair failed and we were unable to recover it. 00:37:28.400 [2024-09-29 16:45:28.872796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.400 [2024-09-29 16:45:28.872845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.400 qpair failed and we were unable to recover it. 00:37:28.400 [2024-09-29 16:45:28.873053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.400 [2024-09-29 16:45:28.873115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.400 qpair failed and we were unable to recover it. 00:37:28.400 [2024-09-29 16:45:28.873251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.400 [2024-09-29 16:45:28.873289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.400 qpair failed and we were unable to recover it. 
00:37:28.400 [2024-09-29 16:45:28.873448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.400 [2024-09-29 16:45:28.873482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.400 qpair failed and we were unable to recover it. 00:37:28.400 [2024-09-29 16:45:28.873620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.400 [2024-09-29 16:45:28.873655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.400 qpair failed and we were unable to recover it. 00:37:28.400 [2024-09-29 16:45:28.873855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.400 [2024-09-29 16:45:28.873889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.400 qpair failed and we were unable to recover it. 00:37:28.400 [2024-09-29 16:45:28.874027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.400 [2024-09-29 16:45:28.874065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.400 qpair failed and we were unable to recover it. 00:37:28.400 [2024-09-29 16:45:28.874318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.400 [2024-09-29 16:45:28.874376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.400 qpair failed and we were unable to recover it. 
00:37:28.400 [2024-09-29 16:45:28.874488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.400 [2024-09-29 16:45:28.874525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.400 qpair failed and we were unable to recover it. 00:37:28.400 [2024-09-29 16:45:28.874715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.400 [2024-09-29 16:45:28.874749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.400 qpair failed and we were unable to recover it. 00:37:28.400 [2024-09-29 16:45:28.874944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.400 [2024-09-29 16:45:28.874993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.400 qpair failed and we were unable to recover it. 00:37:28.400 [2024-09-29 16:45:28.875119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.400 [2024-09-29 16:45:28.875156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.400 qpair failed and we were unable to recover it. 00:37:28.400 [2024-09-29 16:45:28.875378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.400 [2024-09-29 16:45:28.875452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.400 qpair failed and we were unable to recover it. 
00:37:28.400 [2024-09-29 16:45:28.875566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.400 [2024-09-29 16:45:28.875604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.400 qpair failed and we were unable to recover it. 00:37:28.400 [2024-09-29 16:45:28.875766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.400 [2024-09-29 16:45:28.875801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.400 qpair failed and we were unable to recover it. 00:37:28.400 [2024-09-29 16:45:28.875955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.400 [2024-09-29 16:45:28.875990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.400 qpair failed and we were unable to recover it. 00:37:28.400 [2024-09-29 16:45:28.876160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.400 [2024-09-29 16:45:28.876213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.400 qpair failed and we were unable to recover it. 00:37:28.400 [2024-09-29 16:45:28.876378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.400 [2024-09-29 16:45:28.876431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.400 qpair failed and we were unable to recover it. 
00:37:28.400 [2024-09-29 16:45:28.876576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.400 [2024-09-29 16:45:28.876610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.400 qpair failed and we were unable to recover it. 00:37:28.400 [2024-09-29 16:45:28.876768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.400 [2024-09-29 16:45:28.876815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.400 qpair failed and we were unable to recover it. 00:37:28.400 [2024-09-29 16:45:28.876935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.400 [2024-09-29 16:45:28.876970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.400 qpair failed and we were unable to recover it. 00:37:28.400 [2024-09-29 16:45:28.877092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.400 [2024-09-29 16:45:28.877127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.400 qpair failed and we were unable to recover it. 00:37:28.400 [2024-09-29 16:45:28.877294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.400 [2024-09-29 16:45:28.877328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.400 qpair failed and we were unable to recover it. 
00:37:28.400 [2024-09-29 16:45:28.877468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.400 [2024-09-29 16:45:28.877501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.400 qpair failed and we were unable to recover it. 00:37:28.400 [2024-09-29 16:45:28.877722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.400 [2024-09-29 16:45:28.877758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.400 qpair failed and we were unable to recover it. 00:37:28.400 [2024-09-29 16:45:28.877951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.400 [2024-09-29 16:45:28.877993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.400 qpair failed and we were unable to recover it. 00:37:28.400 [2024-09-29 16:45:28.878119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.400 [2024-09-29 16:45:28.878156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.400 qpair failed and we were unable to recover it. 00:37:28.400 [2024-09-29 16:45:28.878279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.400 [2024-09-29 16:45:28.878317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.400 qpair failed and we were unable to recover it. 
00:37:28.400 [2024-09-29 16:45:28.878489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.400 [2024-09-29 16:45:28.878525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.400 qpair failed and we were unable to recover it. 00:37:28.400 [2024-09-29 16:45:28.878694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.400 [2024-09-29 16:45:28.878729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.400 qpair failed and we were unable to recover it. 00:37:28.400 [2024-09-29 16:45:28.878888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.400 [2024-09-29 16:45:28.878946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.400 qpair failed and we were unable to recover it. 00:37:28.400 [2024-09-29 16:45:28.879084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.400 [2024-09-29 16:45:28.879136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.400 qpair failed and we were unable to recover it. 00:37:28.400 [2024-09-29 16:45:28.879298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.400 [2024-09-29 16:45:28.879349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.400 qpair failed and we were unable to recover it. 
00:37:28.400 [2024-09-29 16:45:28.879458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.400 [2024-09-29 16:45:28.879492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.400 qpair failed and we were unable to recover it. 00:37:28.401 [2024-09-29 16:45:28.879632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.401 [2024-09-29 16:45:28.879666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.401 qpair failed and we were unable to recover it. 00:37:28.401 [2024-09-29 16:45:28.879813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.401 [2024-09-29 16:45:28.879850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.401 qpair failed and we were unable to recover it. 00:37:28.401 [2024-09-29 16:45:28.880025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.401 [2024-09-29 16:45:28.880078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.401 qpair failed and we were unable to recover it. 00:37:28.401 [2024-09-29 16:45:28.880314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.401 [2024-09-29 16:45:28.880375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.401 qpair failed and we were unable to recover it. 
00:37:28.401 [2024-09-29 16:45:28.880559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.401 [2024-09-29 16:45:28.880597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.401 qpair failed and we were unable to recover it. 00:37:28.401 [2024-09-29 16:45:28.880760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.401 [2024-09-29 16:45:28.880796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.401 qpair failed and we were unable to recover it. 00:37:28.401 [2024-09-29 16:45:28.881038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.401 [2024-09-29 16:45:28.881094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.401 qpair failed and we were unable to recover it. 00:37:28.401 [2024-09-29 16:45:28.881206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.401 [2024-09-29 16:45:28.881243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.401 qpair failed and we were unable to recover it. 00:37:28.401 [2024-09-29 16:45:28.881427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.401 [2024-09-29 16:45:28.881465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.401 qpair failed and we were unable to recover it. 
00:37:28.401 [2024-09-29 16:45:28.881640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.401 [2024-09-29 16:45:28.881684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.401 qpair failed and we were unable to recover it. 00:37:28.401 [2024-09-29 16:45:28.881797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.401 [2024-09-29 16:45:28.881831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.401 qpair failed and we were unable to recover it. 00:37:28.401 [2024-09-29 16:45:28.882023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.401 [2024-09-29 16:45:28.882074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.401 qpair failed and we were unable to recover it. 00:37:28.401 [2024-09-29 16:45:28.882209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.401 [2024-09-29 16:45:28.882269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.401 qpair failed and we were unable to recover it. 00:37:28.401 [2024-09-29 16:45:28.882399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.401 [2024-09-29 16:45:28.882451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.401 qpair failed and we were unable to recover it. 
00:37:28.401 [2024-09-29 16:45:28.882602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.401 [2024-09-29 16:45:28.882636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.401 qpair failed and we were unable to recover it. 00:37:28.401 [2024-09-29 16:45:28.882803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.401 [2024-09-29 16:45:28.882857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.401 qpair failed and we were unable to recover it. 00:37:28.401 [2024-09-29 16:45:28.883019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.401 [2024-09-29 16:45:28.883067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.401 qpair failed and we were unable to recover it. 00:37:28.401 [2024-09-29 16:45:28.883189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.401 [2024-09-29 16:45:28.883225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.401 qpair failed and we were unable to recover it. 00:37:28.401 [2024-09-29 16:45:28.883397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.401 [2024-09-29 16:45:28.883445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.401 qpair failed and we were unable to recover it. 
00:37:28.401 [2024-09-29 16:45:28.883620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.401 [2024-09-29 16:45:28.883655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.401 qpair failed and we were unable to recover it. 00:37:28.401 [2024-09-29 16:45:28.883828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.401 [2024-09-29 16:45:28.883875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.401 qpair failed and we were unable to recover it. 00:37:28.401 [2024-09-29 16:45:28.884055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.401 [2024-09-29 16:45:28.884095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.401 qpair failed and we were unable to recover it. 00:37:28.401 [2024-09-29 16:45:28.884254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.401 [2024-09-29 16:45:28.884292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.401 qpair failed and we were unable to recover it. 00:37:28.401 [2024-09-29 16:45:28.884434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.401 [2024-09-29 16:45:28.884488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.401 qpair failed and we were unable to recover it. 
00:37:28.401 [2024-09-29 16:45:28.884626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.401 [2024-09-29 16:45:28.884660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.401 qpair failed and we were unable to recover it. 00:37:28.401 [2024-09-29 16:45:28.884831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.401 [2024-09-29 16:45:28.884885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.401 qpair failed and we were unable to recover it. 00:37:28.401 [2024-09-29 16:45:28.885012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.401 [2024-09-29 16:45:28.885049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.401 qpair failed and we were unable to recover it. 00:37:28.401 [2024-09-29 16:45:28.885207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.401 [2024-09-29 16:45:28.885248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.401 qpair failed and we were unable to recover it. 00:37:28.401 [2024-09-29 16:45:28.885442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.401 [2024-09-29 16:45:28.885479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.401 qpair failed and we were unable to recover it. 
00:37:28.401 [2024-09-29 16:45:28.885666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.401 [2024-09-29 16:45:28.885715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.401 qpair failed and we were unable to recover it. 00:37:28.401 [2024-09-29 16:45:28.885883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.401 [2024-09-29 16:45:28.885917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.401 qpair failed and we were unable to recover it. 00:37:28.401 [2024-09-29 16:45:28.886063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.401 [2024-09-29 16:45:28.886107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.401 qpair failed and we were unable to recover it. 00:37:28.401 [2024-09-29 16:45:28.886296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.401 [2024-09-29 16:45:28.886332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.401 qpair failed and we were unable to recover it. 00:37:28.401 [2024-09-29 16:45:28.886514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.401 [2024-09-29 16:45:28.886552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.401 qpair failed and we were unable to recover it. 
00:37:28.401 [2024-09-29 16:45:28.886724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.401 [2024-09-29 16:45:28.886759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.401 qpair failed and we were unable to recover it. 00:37:28.401 [2024-09-29 16:45:28.886891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.401 [2024-09-29 16:45:28.886929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.401 qpair failed and we were unable to recover it. 00:37:28.402 [2024-09-29 16:45:28.887057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.402 [2024-09-29 16:45:28.887095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.402 qpair failed and we were unable to recover it. 00:37:28.402 [2024-09-29 16:45:28.887307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.402 [2024-09-29 16:45:28.887366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.402 qpair failed and we were unable to recover it. 00:37:28.402 [2024-09-29 16:45:28.887489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.402 [2024-09-29 16:45:28.887526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.402 qpair failed and we were unable to recover it. 
00:37:28.402 [2024-09-29 16:45:28.887657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.402 [2024-09-29 16:45:28.887704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.402 qpair failed and we were unable to recover it. 00:37:28.402 [2024-09-29 16:45:28.887910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.402 [2024-09-29 16:45:28.887957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.402 qpair failed and we were unable to recover it. 00:37:28.402 [2024-09-29 16:45:28.888117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.402 [2024-09-29 16:45:28.888197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.402 qpair failed and we were unable to recover it. 00:37:28.402 [2024-09-29 16:45:28.888448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.402 [2024-09-29 16:45:28.888508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.402 qpair failed and we were unable to recover it. 00:37:28.402 [2024-09-29 16:45:28.888649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.402 [2024-09-29 16:45:28.888716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.402 qpair failed and we were unable to recover it. 
00:37:28.402 [2024-09-29 16:45:28.888881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.402 [2024-09-29 16:45:28.888935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.402 qpair failed and we were unable to recover it. 00:37:28.402 [2024-09-29 16:45:28.889135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.402 [2024-09-29 16:45:28.889188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.402 qpair failed and we were unable to recover it. 00:37:28.402 [2024-09-29 16:45:28.889395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.402 [2024-09-29 16:45:28.889450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.402 qpair failed and we were unable to recover it. 00:37:28.402 [2024-09-29 16:45:28.889620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.402 [2024-09-29 16:45:28.889654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.402 qpair failed and we were unable to recover it. 00:37:28.402 [2024-09-29 16:45:28.889834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.402 [2024-09-29 16:45:28.889887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.402 qpair failed and we were unable to recover it. 
00:37:28.402 [2024-09-29 16:45:28.890094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.402 [2024-09-29 16:45:28.890146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.402 qpair failed and we were unable to recover it. 00:37:28.402 [2024-09-29 16:45:28.890364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.402 [2024-09-29 16:45:28.890422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.402 qpair failed and we were unable to recover it. 00:37:28.402 [2024-09-29 16:45:28.890593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.402 [2024-09-29 16:45:28.890626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.402 qpair failed and we were unable to recover it. 00:37:28.402 [2024-09-29 16:45:28.890754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.402 [2024-09-29 16:45:28.890788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.402 qpair failed and we were unable to recover it. 00:37:28.402 [2024-09-29 16:45:28.890907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.402 [2024-09-29 16:45:28.890940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.402 qpair failed and we were unable to recover it. 
00:37:28.402 [2024-09-29 16:45:28.891132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.402 [2024-09-29 16:45:28.891169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.402 qpair failed and we were unable to recover it. 00:37:28.402 [2024-09-29 16:45:28.891335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.402 [2024-09-29 16:45:28.891393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.402 qpair failed and we were unable to recover it. 00:37:28.402 [2024-09-29 16:45:28.891592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.402 [2024-09-29 16:45:28.891629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.402 qpair failed and we were unable to recover it. 00:37:28.402 [2024-09-29 16:45:28.891793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.402 [2024-09-29 16:45:28.891841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.402 qpair failed and we were unable to recover it. 00:37:28.402 [2024-09-29 16:45:28.891980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.402 [2024-09-29 16:45:28.892028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.402 qpair failed and we were unable to recover it. 
00:37:28.402 [2024-09-29 16:45:28.892208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.402 [2024-09-29 16:45:28.892247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.402 qpair failed and we were unable to recover it.
00:37:28.402 [2024-09-29 16:45:28.892463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.402 [2024-09-29 16:45:28.892522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.402 qpair failed and we were unable to recover it.
00:37:28.402 [2024-09-29 16:45:28.892690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.402 [2024-09-29 16:45:28.892734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.402 qpair failed and we were unable to recover it.
00:37:28.402 [2024-09-29 16:45:28.892849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.402 [2024-09-29 16:45:28.892883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.402 qpair failed and we were unable to recover it.
00:37:28.402 [2024-09-29 16:45:28.893083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.402 [2024-09-29 16:45:28.893120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.402 qpair failed and we were unable to recover it.
00:37:28.402 [2024-09-29 16:45:28.893321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.402 [2024-09-29 16:45:28.893391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.402 qpair failed and we were unable to recover it.
00:37:28.402 [2024-09-29 16:45:28.893541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.402 [2024-09-29 16:45:28.893591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.402 qpair failed and we were unable to recover it.
00:37:28.402 [2024-09-29 16:45:28.893739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.402 [2024-09-29 16:45:28.893774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.402 qpair failed and we were unable to recover it.
00:37:28.402 [2024-09-29 16:45:28.893922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.402 [2024-09-29 16:45:28.893955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.402 qpair failed and we were unable to recover it.
00:37:28.402 [2024-09-29 16:45:28.894092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.402 [2024-09-29 16:45:28.894136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.402 qpair failed and we were unable to recover it.
00:37:28.402 [2024-09-29 16:45:28.894288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.402 [2024-09-29 16:45:28.894362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.402 qpair failed and we were unable to recover it.
00:37:28.402 [2024-09-29 16:45:28.894580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.402 [2024-09-29 16:45:28.894634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.402 qpair failed and we were unable to recover it.
00:37:28.402 [2024-09-29 16:45:28.894833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.402 [2024-09-29 16:45:28.894885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.402 qpair failed and we were unable to recover it.
00:37:28.402 [2024-09-29 16:45:28.895044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.402 [2024-09-29 16:45:28.895081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.402 qpair failed and we were unable to recover it.
00:37:28.402 [2024-09-29 16:45:28.895274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.402 [2024-09-29 16:45:28.895333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.402 qpair failed and we were unable to recover it.
00:37:28.402 [2024-09-29 16:45:28.895492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.402 [2024-09-29 16:45:28.895531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.402 qpair failed and we were unable to recover it.
00:37:28.402 [2024-09-29 16:45:28.895689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.402 [2024-09-29 16:45:28.895742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.402 qpair failed and we were unable to recover it.
00:37:28.402 [2024-09-29 16:45:28.895908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.402 [2024-09-29 16:45:28.895965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.402 qpair failed and we were unable to recover it.
00:37:28.402 [2024-09-29 16:45:28.896206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.403 [2024-09-29 16:45:28.896269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.403 qpair failed and we were unable to recover it.
00:37:28.403 [2024-09-29 16:45:28.896494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.403 [2024-09-29 16:45:28.896553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.403 qpair failed and we were unable to recover it.
00:37:28.403 [2024-09-29 16:45:28.896690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.403 [2024-09-29 16:45:28.896762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.403 qpair failed and we were unable to recover it.
00:37:28.403 [2024-09-29 16:45:28.896927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.403 [2024-09-29 16:45:28.896964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.403 qpair failed and we were unable to recover it.
00:37:28.403 [2024-09-29 16:45:28.897130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.403 [2024-09-29 16:45:28.897182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.403 qpair failed and we were unable to recover it.
00:37:28.403 [2024-09-29 16:45:28.897306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.403 [2024-09-29 16:45:28.897344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.403 qpair failed and we were unable to recover it.
00:37:28.403 [2024-09-29 16:45:28.897509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.403 [2024-09-29 16:45:28.897543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.403 qpair failed and we were unable to recover it.
00:37:28.403 [2024-09-29 16:45:28.897654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.403 [2024-09-29 16:45:28.897696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.403 qpair failed and we were unable to recover it.
00:37:28.403 [2024-09-29 16:45:28.897839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.403 [2024-09-29 16:45:28.897873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.403 qpair failed and we were unable to recover it.
00:37:28.403 [2024-09-29 16:45:28.897998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.403 [2024-09-29 16:45:28.898032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.403 qpair failed and we were unable to recover it.
00:37:28.403 [2024-09-29 16:45:28.898174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.403 [2024-09-29 16:45:28.898210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.403 qpair failed and we were unable to recover it.
00:37:28.403 [2024-09-29 16:45:28.898327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.403 [2024-09-29 16:45:28.898361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.403 qpair failed and we were unable to recover it.
00:37:28.403 [2024-09-29 16:45:28.898509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.403 [2024-09-29 16:45:28.898547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.403 qpair failed and we were unable to recover it.
00:37:28.403 [2024-09-29 16:45:28.898666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.403 [2024-09-29 16:45:28.898735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.403 qpair failed and we were unable to recover it.
00:37:28.403 [2024-09-29 16:45:28.898884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.403 [2024-09-29 16:45:28.898931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.403 qpair failed and we were unable to recover it.
00:37:28.720 [2024-09-29 16:45:28.899153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.720 [2024-09-29 16:45:28.899194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.720 qpair failed and we were unable to recover it.
00:37:28.720 [2024-09-29 16:45:28.899330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.720 [2024-09-29 16:45:28.899369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.720 qpair failed and we were unable to recover it.
00:37:28.720 [2024-09-29 16:45:28.899518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.720 [2024-09-29 16:45:28.899556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.720 qpair failed and we were unable to recover it.
00:37:28.720 [2024-09-29 16:45:28.899755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.720 [2024-09-29 16:45:28.899811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.720 qpair failed and we were unable to recover it.
00:37:28.720 [2024-09-29 16:45:28.899928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.720 [2024-09-29 16:45:28.899962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.720 qpair failed and we were unable to recover it.
00:37:28.720 [2024-09-29 16:45:28.900093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.720 [2024-09-29 16:45:28.900147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.720 qpair failed and we were unable to recover it.
00:37:28.720 [2024-09-29 16:45:28.900298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.720 [2024-09-29 16:45:28.900332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.720 qpair failed and we were unable to recover it.
00:37:28.720 [2024-09-29 16:45:28.900483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.720 [2024-09-29 16:45:28.900518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.720 qpair failed and we were unable to recover it.
00:37:28.720 [2024-09-29 16:45:28.900679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.720 [2024-09-29 16:45:28.900725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.720 qpair failed and we were unable to recover it.
00:37:28.720 [2024-09-29 16:45:28.900895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.720 [2024-09-29 16:45:28.900929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.720 qpair failed and we were unable to recover it.
00:37:28.720 [2024-09-29 16:45:28.901081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.720 [2024-09-29 16:45:28.901115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.720 qpair failed and we were unable to recover it.
00:37:28.720 [2024-09-29 16:45:28.901278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.720 [2024-09-29 16:45:28.901315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.720 qpair failed and we were unable to recover it.
00:37:28.720 [2024-09-29 16:45:28.901462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.720 [2024-09-29 16:45:28.901497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.720 qpair failed and we were unable to recover it.
00:37:28.720 [2024-09-29 16:45:28.901641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.720 [2024-09-29 16:45:28.901685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.720 qpair failed and we were unable to recover it.
00:37:28.720 [2024-09-29 16:45:28.901845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.720 [2024-09-29 16:45:28.901884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.720 qpair failed and we were unable to recover it.
00:37:28.720 [2024-09-29 16:45:28.902037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.720 [2024-09-29 16:45:28.902074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.720 qpair failed and we were unable to recover it.
00:37:28.720 [2024-09-29 16:45:28.902261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.720 [2024-09-29 16:45:28.902326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.720 qpair failed and we were unable to recover it.
00:37:28.721 [2024-09-29 16:45:28.902482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.721 [2024-09-29 16:45:28.902519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.721 qpair failed and we were unable to recover it.
00:37:28.721 [2024-09-29 16:45:28.902738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.721 [2024-09-29 16:45:28.902774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.721 qpair failed and we were unable to recover it.
00:37:28.721 [2024-09-29 16:45:28.902911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.721 [2024-09-29 16:45:28.902975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.721 qpair failed and we were unable to recover it.
00:37:28.721 [2024-09-29 16:45:28.903158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.721 [2024-09-29 16:45:28.903194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.721 qpair failed and we were unable to recover it.
00:37:28.721 [2024-09-29 16:45:28.903364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.721 [2024-09-29 16:45:28.903401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.721 qpair failed and we were unable to recover it.
00:37:28.721 [2024-09-29 16:45:28.903554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.721 [2024-09-29 16:45:28.903589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.721 qpair failed and we were unable to recover it.
00:37:28.721 [2024-09-29 16:45:28.903722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.721 [2024-09-29 16:45:28.903771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.721 qpair failed and we were unable to recover it.
00:37:28.721 [2024-09-29 16:45:28.903917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.721 [2024-09-29 16:45:28.903953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.721 qpair failed and we were unable to recover it.
00:37:28.721 [2024-09-29 16:45:28.904098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.721 [2024-09-29 16:45:28.904151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.721 qpair failed and we were unable to recover it.
00:37:28.721 [2024-09-29 16:45:28.904306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.721 [2024-09-29 16:45:28.904358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.721 qpair failed and we were unable to recover it.
00:37:28.721 [2024-09-29 16:45:28.904495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.721 [2024-09-29 16:45:28.904530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.721 qpair failed and we were unable to recover it.
00:37:28.721 [2024-09-29 16:45:28.904666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.721 [2024-09-29 16:45:28.904708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.721 qpair failed and we were unable to recover it.
00:37:28.721 [2024-09-29 16:45:28.904853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.721 [2024-09-29 16:45:28.904887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.721 qpair failed and we were unable to recover it.
00:37:28.721 [2024-09-29 16:45:28.904997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.721 [2024-09-29 16:45:28.905031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.721 qpair failed and we were unable to recover it.
00:37:28.721 [2024-09-29 16:45:28.905176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.721 [2024-09-29 16:45:28.905210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.721 qpair failed and we were unable to recover it.
00:37:28.721 [2024-09-29 16:45:28.905353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.721 [2024-09-29 16:45:28.905386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.721 qpair failed and we were unable to recover it.
00:37:28.721 [2024-09-29 16:45:28.905551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.721 [2024-09-29 16:45:28.905598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.721 qpair failed and we were unable to recover it.
00:37:28.721 [2024-09-29 16:45:28.905747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.721 [2024-09-29 16:45:28.905787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.721 qpair failed and we were unable to recover it.
00:37:28.721 [2024-09-29 16:45:28.905946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.721 [2024-09-29 16:45:28.905986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.721 qpair failed and we were unable to recover it.
00:37:28.721 [2024-09-29 16:45:28.906211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.721 [2024-09-29 16:45:28.906268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.721 qpair failed and we were unable to recover it.
00:37:28.721 [2024-09-29 16:45:28.906479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.721 [2024-09-29 16:45:28.906513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.721 qpair failed and we were unable to recover it.
00:37:28.721 [2024-09-29 16:45:28.906681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.721 [2024-09-29 16:45:28.906716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.721 qpair failed and we were unable to recover it.
00:37:28.721 [2024-09-29 16:45:28.906833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.721 [2024-09-29 16:45:28.906869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.721 qpair failed and we were unable to recover it.
00:37:28.721 [2024-09-29 16:45:28.907017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.721 [2024-09-29 16:45:28.907070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.721 qpair failed and we were unable to recover it.
00:37:28.721 [2024-09-29 16:45:28.907231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.721 [2024-09-29 16:45:28.907296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.721 qpair failed and we were unable to recover it.
00:37:28.721 [2024-09-29 16:45:28.907549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.721 [2024-09-29 16:45:28.907583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.721 qpair failed and we were unable to recover it.
00:37:28.721 [2024-09-29 16:45:28.907736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.721 [2024-09-29 16:45:28.907772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.721 qpair failed and we were unable to recover it.
00:37:28.721 [2024-09-29 16:45:28.907891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.721 [2024-09-29 16:45:28.907936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.721 qpair failed and we were unable to recover it.
00:37:28.721 [2024-09-29 16:45:28.908078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.721 [2024-09-29 16:45:28.908113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.721 qpair failed and we were unable to recover it.
00:37:28.721 [2024-09-29 16:45:28.908300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.721 [2024-09-29 16:45:28.908373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.721 qpair failed and we were unable to recover it.
00:37:28.721 [2024-09-29 16:45:28.908616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.721 [2024-09-29 16:45:28.908683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.721 qpair failed and we were unable to recover it.
00:37:28.721 [2024-09-29 16:45:28.908834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.721 [2024-09-29 16:45:28.908867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.721 qpair failed and we were unable to recover it.
00:37:28.721 [2024-09-29 16:45:28.909066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.721 [2024-09-29 16:45:28.909118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.721 qpair failed and we were unable to recover it.
00:37:28.721 [2024-09-29 16:45:28.909312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.721 [2024-09-29 16:45:28.909373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.721 qpair failed and we were unable to recover it.
00:37:28.721 [2024-09-29 16:45:28.909512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.721 [2024-09-29 16:45:28.909545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.721 qpair failed and we were unable to recover it.
00:37:28.721 [2024-09-29 16:45:28.909665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.721 [2024-09-29 16:45:28.909723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.721 qpair failed and we were unable to recover it.
00:37:28.721 [2024-09-29 16:45:28.909886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.721 [2024-09-29 16:45:28.909924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.721 qpair failed and we were unable to recover it.
00:37:28.721 [2024-09-29 16:45:28.910047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.721 [2024-09-29 16:45:28.910086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.721 qpair failed and we were unable to recover it.
00:37:28.721 [2024-09-29 16:45:28.910298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.721 [2024-09-29 16:45:28.910356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.721 qpair failed and we were unable to recover it.
00:37:28.721 [2024-09-29 16:45:28.910527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.721 [2024-09-29 16:45:28.910561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.721 qpair failed and we were unable to recover it.
00:37:28.721 [2024-09-29 16:45:28.910729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.722 [2024-09-29 16:45:28.910763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.722 qpair failed and we were unable to recover it.
00:37:28.722 [2024-09-29 16:45:28.910922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.722 [2024-09-29 16:45:28.910957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.722 qpair failed and we were unable to recover it.
00:37:28.722 [2024-09-29 16:45:28.911126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.722 [2024-09-29 16:45:28.911192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.722 qpair failed and we were unable to recover it.
00:37:28.722 [2024-09-29 16:45:28.911351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.722 [2024-09-29 16:45:28.911403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.722 qpair failed and we were unable to recover it.
00:37:28.722 [2024-09-29 16:45:28.911542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.722 [2024-09-29 16:45:28.911575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.722 qpair failed and we were unable to recover it.
00:37:28.722 [2024-09-29 16:45:28.911753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.722 [2024-09-29 16:45:28.911787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.722 qpair failed and we were unable to recover it. 00:37:28.722 [2024-09-29 16:45:28.911935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.722 [2024-09-29 16:45:28.911971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.722 qpair failed and we were unable to recover it. 00:37:28.722 [2024-09-29 16:45:28.912097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.722 [2024-09-29 16:45:28.912132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.722 qpair failed and we were unable to recover it. 00:37:28.722 [2024-09-29 16:45:28.912241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.722 [2024-09-29 16:45:28.912275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.722 qpair failed and we were unable to recover it. 00:37:28.722 [2024-09-29 16:45:28.912429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.722 [2024-09-29 16:45:28.912462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.722 qpair failed and we were unable to recover it. 
00:37:28.722 [2024-09-29 16:45:28.912629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.722 [2024-09-29 16:45:28.912663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.722 qpair failed and we were unable to recover it. 00:37:28.722 [2024-09-29 16:45:28.912832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.722 [2024-09-29 16:45:28.912884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.722 qpair failed and we were unable to recover it. 00:37:28.722 [2024-09-29 16:45:28.913051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.722 [2024-09-29 16:45:28.913091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.722 qpair failed and we were unable to recover it. 00:37:28.722 [2024-09-29 16:45:28.913291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.722 [2024-09-29 16:45:28.913369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.722 qpair failed and we were unable to recover it. 00:37:28.722 [2024-09-29 16:45:28.913545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.722 [2024-09-29 16:45:28.913581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.722 qpair failed and we were unable to recover it. 
00:37:28.722 [2024-09-29 16:45:28.913731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.722 [2024-09-29 16:45:28.913766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.722 qpair failed and we were unable to recover it. 00:37:28.722 [2024-09-29 16:45:28.913921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.722 [2024-09-29 16:45:28.913956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.722 qpair failed and we were unable to recover it. 00:37:28.722 [2024-09-29 16:45:28.914206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.722 [2024-09-29 16:45:28.914263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.722 qpair failed and we were unable to recover it. 00:37:28.722 [2024-09-29 16:45:28.914486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.722 [2024-09-29 16:45:28.914546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.722 qpair failed and we were unable to recover it. 00:37:28.722 [2024-09-29 16:45:28.914731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.722 [2024-09-29 16:45:28.914766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.722 qpair failed and we were unable to recover it. 
00:37:28.722 [2024-09-29 16:45:28.914886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.722 [2024-09-29 16:45:28.914922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.722 qpair failed and we were unable to recover it. 00:37:28.722 [2024-09-29 16:45:28.915092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.722 [2024-09-29 16:45:28.915144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.722 qpair failed and we were unable to recover it. 00:37:28.722 [2024-09-29 16:45:28.915335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.722 [2024-09-29 16:45:28.915394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.722 qpair failed and we were unable to recover it. 00:37:28.722 [2024-09-29 16:45:28.915567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.722 [2024-09-29 16:45:28.915602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.722 qpair failed and we were unable to recover it. 00:37:28.722 [2024-09-29 16:45:28.915759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.722 [2024-09-29 16:45:28.915812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.722 qpair failed and we were unable to recover it. 
00:37:28.722 [2024-09-29 16:45:28.915977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.722 [2024-09-29 16:45:28.916034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.722 qpair failed and we were unable to recover it. 00:37:28.722 [2024-09-29 16:45:28.916178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.722 [2024-09-29 16:45:28.916218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.722 qpair failed and we were unable to recover it. 00:37:28.722 [2024-09-29 16:45:28.916479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.722 [2024-09-29 16:45:28.916552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.722 qpair failed and we were unable to recover it. 00:37:28.722 [2024-09-29 16:45:28.916728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.722 [2024-09-29 16:45:28.916763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.722 qpair failed and we were unable to recover it. 00:37:28.722 [2024-09-29 16:45:28.916955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.722 [2024-09-29 16:45:28.917002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.722 qpair failed and we were unable to recover it. 
00:37:28.722 [2024-09-29 16:45:28.917144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.722 [2024-09-29 16:45:28.917180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.722 qpair failed and we were unable to recover it. 00:37:28.722 [2024-09-29 16:45:28.917321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.722 [2024-09-29 16:45:28.917355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.722 qpair failed and we were unable to recover it. 00:37:28.722 [2024-09-29 16:45:28.917546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.722 [2024-09-29 16:45:28.917583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.722 qpair failed and we were unable to recover it. 00:37:28.722 [2024-09-29 16:45:28.917748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.722 [2024-09-29 16:45:28.917796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.722 qpair failed and we were unable to recover it. 00:37:28.722 [2024-09-29 16:45:28.917938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.722 [2024-09-29 16:45:28.918005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.722 qpair failed and we were unable to recover it. 
00:37:28.722 [2024-09-29 16:45:28.918193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.722 [2024-09-29 16:45:28.918231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.722 qpair failed and we were unable to recover it. 00:37:28.722 [2024-09-29 16:45:28.918357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.722 [2024-09-29 16:45:28.918394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.722 qpair failed and we were unable to recover it. 00:37:28.722 [2024-09-29 16:45:28.918547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.722 [2024-09-29 16:45:28.918584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.722 qpair failed and we were unable to recover it. 00:37:28.722 [2024-09-29 16:45:28.918722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.722 [2024-09-29 16:45:28.918758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.722 qpair failed and we were unable to recover it. 00:37:28.722 [2024-09-29 16:45:28.918924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.722 [2024-09-29 16:45:28.918958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.722 qpair failed and we were unable to recover it. 
00:37:28.722 [2024-09-29 16:45:28.919100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.722 [2024-09-29 16:45:28.919151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.722 qpair failed and we were unable to recover it. 00:37:28.722 [2024-09-29 16:45:28.919348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.722 [2024-09-29 16:45:28.919407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.722 qpair failed and we were unable to recover it. 00:37:28.722 [2024-09-29 16:45:28.919549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.722 [2024-09-29 16:45:28.919588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.722 qpair failed and we were unable to recover it. 00:37:28.723 [2024-09-29 16:45:28.919754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.723 [2024-09-29 16:45:28.919809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.723 qpair failed and we were unable to recover it. 00:37:28.723 [2024-09-29 16:45:28.920036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.723 [2024-09-29 16:45:28.920108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.723 qpair failed and we were unable to recover it. 
00:37:28.723 [2024-09-29 16:45:28.920308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.723 [2024-09-29 16:45:28.920370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.723 qpair failed and we were unable to recover it. 00:37:28.723 [2024-09-29 16:45:28.920500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.723 [2024-09-29 16:45:28.920538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.723 qpair failed and we were unable to recover it. 00:37:28.723 [2024-09-29 16:45:28.920658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.723 [2024-09-29 16:45:28.920705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.723 qpair failed and we were unable to recover it. 00:37:28.723 [2024-09-29 16:45:28.920894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.723 [2024-09-29 16:45:28.920947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.723 qpair failed and we were unable to recover it. 00:37:28.723 [2024-09-29 16:45:28.921148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.723 [2024-09-29 16:45:28.921189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.723 qpair failed and we were unable to recover it. 
00:37:28.723 [2024-09-29 16:45:28.921330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.723 [2024-09-29 16:45:28.921406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.723 qpair failed and we were unable to recover it. 00:37:28.723 [2024-09-29 16:45:28.921564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.723 [2024-09-29 16:45:28.921604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.723 qpair failed and we were unable to recover it. 00:37:28.723 [2024-09-29 16:45:28.921761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.723 [2024-09-29 16:45:28.921796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.723 qpair failed and we were unable to recover it. 00:37:28.723 [2024-09-29 16:45:28.921972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.723 [2024-09-29 16:45:28.922026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.723 qpair failed and we were unable to recover it. 00:37:28.723 [2024-09-29 16:45:28.922193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.723 [2024-09-29 16:45:28.922245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.723 qpair failed and we were unable to recover it. 
00:37:28.723 [2024-09-29 16:45:28.922435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.723 [2024-09-29 16:45:28.922488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.723 qpair failed and we were unable to recover it. 00:37:28.723 [2024-09-29 16:45:28.922633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.723 [2024-09-29 16:45:28.922667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.723 qpair failed and we were unable to recover it. 00:37:28.723 [2024-09-29 16:45:28.922810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.723 [2024-09-29 16:45:28.922856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.723 qpair failed and we were unable to recover it. 00:37:28.723 [2024-09-29 16:45:28.922995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.723 [2024-09-29 16:45:28.923043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.723 qpair failed and we were unable to recover it. 00:37:28.723 [2024-09-29 16:45:28.923199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.723 [2024-09-29 16:45:28.923235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.723 qpair failed and we were unable to recover it. 
00:37:28.723 [2024-09-29 16:45:28.923451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.723 [2024-09-29 16:45:28.923514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.723 qpair failed and we were unable to recover it. 00:37:28.723 [2024-09-29 16:45:28.923663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.723 [2024-09-29 16:45:28.923707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.723 qpair failed and we were unable to recover it. 00:37:28.723 [2024-09-29 16:45:28.923862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.723 [2024-09-29 16:45:28.923896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.723 qpair failed and we were unable to recover it. 00:37:28.723 [2024-09-29 16:45:28.924131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.723 [2024-09-29 16:45:28.924170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.723 qpair failed and we were unable to recover it. 00:37:28.723 [2024-09-29 16:45:28.924359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.723 [2024-09-29 16:45:28.924421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.723 qpair failed and we were unable to recover it. 
00:37:28.723 [2024-09-29 16:45:28.924601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.723 [2024-09-29 16:45:28.924639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.723 qpair failed and we were unable to recover it. 00:37:28.723 [2024-09-29 16:45:28.924811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.723 [2024-09-29 16:45:28.924847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.723 qpair failed and we were unable to recover it. 00:37:28.723 [2024-09-29 16:45:28.925007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.723 [2024-09-29 16:45:28.925054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.723 qpair failed and we were unable to recover it. 00:37:28.723 [2024-09-29 16:45:28.925216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.723 [2024-09-29 16:45:28.925256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.723 qpair failed and we were unable to recover it. 00:37:28.723 [2024-09-29 16:45:28.925495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.723 [2024-09-29 16:45:28.925555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.723 qpair failed and we were unable to recover it. 
00:37:28.723 [2024-09-29 16:45:28.925746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.723 [2024-09-29 16:45:28.925780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.723 qpair failed and we were unable to recover it. 00:37:28.723 [2024-09-29 16:45:28.925920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.723 [2024-09-29 16:45:28.925955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.723 qpair failed and we were unable to recover it. 00:37:28.723 [2024-09-29 16:45:28.926092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.723 [2024-09-29 16:45:28.926129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.723 qpair failed and we were unable to recover it. 00:37:28.723 [2024-09-29 16:45:28.926291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.723 [2024-09-29 16:45:28.926328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.723 qpair failed and we were unable to recover it. 00:37:28.723 [2024-09-29 16:45:28.926455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.723 [2024-09-29 16:45:28.926492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.723 qpair failed and we were unable to recover it. 
00:37:28.723 [2024-09-29 16:45:28.926662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.723 [2024-09-29 16:45:28.926741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.723 qpair failed and we were unable to recover it. 00:37:28.723 [2024-09-29 16:45:28.926902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.723 [2024-09-29 16:45:28.926951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.723 qpair failed and we were unable to recover it. 00:37:28.723 [2024-09-29 16:45:28.927127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.723 [2024-09-29 16:45:28.927187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.723 qpair failed and we were unable to recover it. 00:37:28.723 [2024-09-29 16:45:28.927360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.723 [2024-09-29 16:45:28.927414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.723 qpair failed and we were unable to recover it. 00:37:28.723 [2024-09-29 16:45:28.927526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.723 [2024-09-29 16:45:28.927561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.723 qpair failed and we were unable to recover it. 
00:37:28.723 [2024-09-29 16:45:28.927756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.723 [2024-09-29 16:45:28.927805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.723 qpair failed and we were unable to recover it. 00:37:28.723 [2024-09-29 16:45:28.927968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.723 [2024-09-29 16:45:28.928004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.723 qpair failed and we were unable to recover it. 00:37:28.723 [2024-09-29 16:45:28.928146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.723 [2024-09-29 16:45:28.928186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.723 qpair failed and we were unable to recover it. 00:37:28.723 [2024-09-29 16:45:28.928328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.723 [2024-09-29 16:45:28.928362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.723 qpair failed and we were unable to recover it. 00:37:28.723 [2024-09-29 16:45:28.928503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.724 [2024-09-29 16:45:28.928537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.724 qpair failed and we were unable to recover it. 
00:37:28.724 [2024-09-29 16:45:28.928690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.724 [2024-09-29 16:45:28.928734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.724 qpair failed and we were unable to recover it. 00:37:28.724 [2024-09-29 16:45:28.928879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.724 [2024-09-29 16:45:28.928913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.724 qpair failed and we were unable to recover it. 00:37:28.724 [2024-09-29 16:45:28.929032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.724 [2024-09-29 16:45:28.929066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.724 qpair failed and we were unable to recover it. 00:37:28.724 [2024-09-29 16:45:28.929201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.724 [2024-09-29 16:45:28.929247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.724 qpair failed and we were unable to recover it. 00:37:28.724 [2024-09-29 16:45:28.929429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.724 [2024-09-29 16:45:28.929490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.724 qpair failed and we were unable to recover it. 
00:37:28.724 [2024-09-29 16:45:28.929600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.724 [2024-09-29 16:45:28.929635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.724 qpair failed and we were unable to recover it.
00:37:28.724 [2024-09-29 16:45:28.929784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.724 [2024-09-29 16:45:28.929819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.724 qpair failed and we were unable to recover it.
00:37:28.724 [2024-09-29 16:45:28.930012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.724 [2024-09-29 16:45:28.930062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.724 qpair failed and we were unable to recover it.
00:37:28.724 [2024-09-29 16:45:28.930223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.724 [2024-09-29 16:45:28.930281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.724 qpair failed and we were unable to recover it.
00:37:28.724 [2024-09-29 16:45:28.930424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.724 [2024-09-29 16:45:28.930458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.724 qpair failed and we were unable to recover it.
00:37:28.724 [2024-09-29 16:45:28.930601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.724 [2024-09-29 16:45:28.930637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.724 qpair failed and we were unable to recover it.
00:37:28.724 [2024-09-29 16:45:28.930843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.724 [2024-09-29 16:45:28.930896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.724 qpair failed and we were unable to recover it.
00:37:28.724 [2024-09-29 16:45:28.931047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.724 [2024-09-29 16:45:28.931115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.724 qpair failed and we were unable to recover it.
00:37:28.724 [2024-09-29 16:45:28.931338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.724 [2024-09-29 16:45:28.931397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.724 qpair failed and we were unable to recover it.
00:37:28.724 [2024-09-29 16:45:28.931525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.724 [2024-09-29 16:45:28.931559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.724 qpair failed and we were unable to recover it.
00:37:28.724 [2024-09-29 16:45:28.931729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.724 [2024-09-29 16:45:28.931777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.724 qpair failed and we were unable to recover it.
00:37:28.724 [2024-09-29 16:45:28.931941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.724 [2024-09-29 16:45:28.932005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.724 qpair failed and we were unable to recover it.
00:37:28.724 [2024-09-29 16:45:28.932230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.724 [2024-09-29 16:45:28.932270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.724 qpair failed and we were unable to recover it.
00:37:28.724 [2024-09-29 16:45:28.932410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.724 [2024-09-29 16:45:28.932459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.724 qpair failed and we were unable to recover it.
00:37:28.724 [2024-09-29 16:45:28.932607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.724 [2024-09-29 16:45:28.932644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.724 qpair failed and we were unable to recover it.
00:37:28.724 [2024-09-29 16:45:28.932833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.724 [2024-09-29 16:45:28.932902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.724 qpair failed and we were unable to recover it.
00:37:28.724 [2024-09-29 16:45:28.933112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.724 [2024-09-29 16:45:28.933166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.724 qpair failed and we were unable to recover it.
00:37:28.724 [2024-09-29 16:45:28.933318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.724 [2024-09-29 16:45:28.933375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.724 qpair failed and we were unable to recover it.
00:37:28.724 [2024-09-29 16:45:28.933488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.724 [2024-09-29 16:45:28.933524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.724 qpair failed and we were unable to recover it.
00:37:28.724 [2024-09-29 16:45:28.933690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.724 [2024-09-29 16:45:28.933740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.724 qpair failed and we were unable to recover it.
00:37:28.724 [2024-09-29 16:45:28.933976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.724 [2024-09-29 16:45:28.934011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.724 qpair failed and we were unable to recover it.
00:37:28.724 [2024-09-29 16:45:28.934199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.724 [2024-09-29 16:45:28.934251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.724 qpair failed and we were unable to recover it.
00:37:28.724 [2024-09-29 16:45:28.934363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.724 [2024-09-29 16:45:28.934398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.724 qpair failed and we were unable to recover it.
00:37:28.724 [2024-09-29 16:45:28.934511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.724 [2024-09-29 16:45:28.934546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.724 qpair failed and we were unable to recover it.
00:37:28.724 [2024-09-29 16:45:28.934729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.724 [2024-09-29 16:45:28.934782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.724 qpair failed and we were unable to recover it.
00:37:28.724 [2024-09-29 16:45:28.934921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.724 [2024-09-29 16:45:28.934974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.724 qpair failed and we were unable to recover it.
00:37:28.724 [2024-09-29 16:45:28.935196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.724 [2024-09-29 16:45:28.935257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.724 qpair failed and we were unable to recover it.
00:37:28.724 [2024-09-29 16:45:28.935425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.724 [2024-09-29 16:45:28.935461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.724 qpair failed and we were unable to recover it.
00:37:28.724 [2024-09-29 16:45:28.935575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.724 [2024-09-29 16:45:28.935609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.724 qpair failed and we were unable to recover it.
00:37:28.724 [2024-09-29 16:45:28.935799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.724 [2024-09-29 16:45:28.935838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.724 qpair failed and we were unable to recover it.
00:37:28.724 [2024-09-29 16:45:28.935982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.724 [2024-09-29 16:45:28.936020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.724 qpair failed and we were unable to recover it.
00:37:28.724 [2024-09-29 16:45:28.936216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.724 [2024-09-29 16:45:28.936276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.724 qpair failed and we were unable to recover it.
00:37:28.724 [2024-09-29 16:45:28.936411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.725 [2024-09-29 16:45:28.936446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.725 qpair failed and we were unable to recover it.
00:37:28.725 [2024-09-29 16:45:28.936597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.725 [2024-09-29 16:45:28.936632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.725 qpair failed and we were unable to recover it.
00:37:28.725 [2024-09-29 16:45:28.936811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.725 [2024-09-29 16:45:28.936859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.725 qpair failed and we were unable to recover it.
00:37:28.725 [2024-09-29 16:45:28.937007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.725 [2024-09-29 16:45:28.937068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.725 qpair failed and we were unable to recover it.
00:37:28.725 [2024-09-29 16:45:28.937260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.725 [2024-09-29 16:45:28.937312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.725 qpair failed and we were unable to recover it.
00:37:28.725 [2024-09-29 16:45:28.937446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.725 [2024-09-29 16:45:28.937481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.725 qpair failed and we were unable to recover it.
00:37:28.725 [2024-09-29 16:45:28.937614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.725 [2024-09-29 16:45:28.937647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.725 qpair failed and we were unable to recover it.
00:37:28.725 [2024-09-29 16:45:28.937838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.725 [2024-09-29 16:45:28.937887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.725 qpair failed and we were unable to recover it.
00:37:28.725 [2024-09-29 16:45:28.938048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.725 [2024-09-29 16:45:28.938083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.725 qpair failed and we were unable to recover it.
00:37:28.725 [2024-09-29 16:45:28.938197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.725 [2024-09-29 16:45:28.938231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.725 qpair failed and we were unable to recover it.
00:37:28.725 [2024-09-29 16:45:28.938340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.725 [2024-09-29 16:45:28.938374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.725 qpair failed and we were unable to recover it.
00:37:28.725 [2024-09-29 16:45:28.938502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.725 [2024-09-29 16:45:28.938536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.725 qpair failed and we were unable to recover it.
00:37:28.725 [2024-09-29 16:45:28.938654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.725 [2024-09-29 16:45:28.938714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.725 qpair failed and we were unable to recover it.
00:37:28.725 [2024-09-29 16:45:28.938841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.725 [2024-09-29 16:45:28.938877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.725 qpair failed and we were unable to recover it.
00:37:28.725 [2024-09-29 16:45:28.939064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.725 [2024-09-29 16:45:28.939128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.725 qpair failed and we were unable to recover it.
00:37:28.725 [2024-09-29 16:45:28.939370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.725 [2024-09-29 16:45:28.939429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.725 qpair failed and we were unable to recover it.
00:37:28.725 [2024-09-29 16:45:28.939566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.725 [2024-09-29 16:45:28.939600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.725 qpair failed and we were unable to recover it.
00:37:28.725 [2024-09-29 16:45:28.939789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.725 [2024-09-29 16:45:28.939843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.725 qpair failed and we were unable to recover it.
00:37:28.725 [2024-09-29 16:45:28.940012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.725 [2024-09-29 16:45:28.940046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.725 qpair failed and we were unable to recover it.
00:37:28.725 [2024-09-29 16:45:28.940182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.725 [2024-09-29 16:45:28.940216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.725 qpair failed and we were unable to recover it.
00:37:28.725 [2024-09-29 16:45:28.940383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.725 [2024-09-29 16:45:28.940417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.725 qpair failed and we were unable to recover it.
00:37:28.725 [2024-09-29 16:45:28.940533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.725 [2024-09-29 16:45:28.940568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.725 qpair failed and we were unable to recover it.
00:37:28.725 [2024-09-29 16:45:28.940720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.725 [2024-09-29 16:45:28.940754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.725 qpair failed and we were unable to recover it.
00:37:28.725 [2024-09-29 16:45:28.940911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.725 [2024-09-29 16:45:28.940964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.725 qpair failed and we were unable to recover it.
00:37:28.725 [2024-09-29 16:45:28.941141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.725 [2024-09-29 16:45:28.941178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.725 qpair failed and we were unable to recover it.
00:37:28.725 [2024-09-29 16:45:28.941370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.725 [2024-09-29 16:45:28.941433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.725 qpair failed and we were unable to recover it.
00:37:28.725 [2024-09-29 16:45:28.941576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.725 [2024-09-29 16:45:28.941610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.725 qpair failed and we were unable to recover it.
00:37:28.725 [2024-09-29 16:45:28.941773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.725 [2024-09-29 16:45:28.941812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.725 qpair failed and we were unable to recover it.
00:37:28.725 [2024-09-29 16:45:28.941930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.725 [2024-09-29 16:45:28.941982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.725 qpair failed and we were unable to recover it.
00:37:28.725 [2024-09-29 16:45:28.942145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.725 [2024-09-29 16:45:28.942183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.725 qpair failed and we were unable to recover it.
00:37:28.725 [2024-09-29 16:45:28.942363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.725 [2024-09-29 16:45:28.942400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.725 qpair failed and we were unable to recover it.
00:37:28.725 [2024-09-29 16:45:28.942574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.725 [2024-09-29 16:45:28.942626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.725 qpair failed and we were unable to recover it.
00:37:28.725 [2024-09-29 16:45:28.942783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.725 [2024-09-29 16:45:28.942819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.725 qpair failed and we were unable to recover it.
00:37:28.725 [2024-09-29 16:45:28.942968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.725 [2024-09-29 16:45:28.943016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.725 qpair failed and we were unable to recover it.
00:37:28.725 [2024-09-29 16:45:28.943232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.725 [2024-09-29 16:45:28.943288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.725 qpair failed and we were unable to recover it.
00:37:28.725 [2024-09-29 16:45:28.943503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.725 [2024-09-29 16:45:28.943559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.725 qpair failed and we were unable to recover it.
00:37:28.725 [2024-09-29 16:45:28.943699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.725 [2024-09-29 16:45:28.943742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.725 qpair failed and we were unable to recover it.
00:37:28.725 [2024-09-29 16:45:28.943897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.725 [2024-09-29 16:45:28.944073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.725 qpair failed and we were unable to recover it.
00:37:28.725 [2024-09-29 16:45:28.944315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.725 [2024-09-29 16:45:28.944368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.725 qpair failed and we were unable to recover it.
00:37:28.725 [2024-09-29 16:45:28.944505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.725 [2024-09-29 16:45:28.944539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.725 qpair failed and we were unable to recover it.
00:37:28.725 [2024-09-29 16:45:28.944723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.725 [2024-09-29 16:45:28.944762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.725 qpair failed and we were unable to recover it.
00:37:28.725 [2024-09-29 16:45:28.944905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.725 [2024-09-29 16:45:28.944959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.725 qpair failed and we were unable to recover it.
00:37:28.725 [2024-09-29 16:45:28.945112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.725 [2024-09-29 16:45:28.945146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.725 qpair failed and we were unable to recover it.
00:37:28.725 [2024-09-29 16:45:28.945291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.726 [2024-09-29 16:45:28.945325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.726 qpair failed and we were unable to recover it.
00:37:28.726 [2024-09-29 16:45:28.945460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.726 [2024-09-29 16:45:28.945510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.726 qpair failed and we were unable to recover it.
00:37:28.726 [2024-09-29 16:45:28.945720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.726 [2024-09-29 16:45:28.945767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.726 qpair failed and we were unable to recover it.
00:37:28.726 [2024-09-29 16:45:28.945909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.726 [2024-09-29 16:45:28.945968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.726 qpair failed and we were unable to recover it.
00:37:28.726 [2024-09-29 16:45:28.946159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.726 [2024-09-29 16:45:28.946211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.726 qpair failed and we were unable to recover it.
00:37:28.726 [2024-09-29 16:45:28.946369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.726 [2024-09-29 16:45:28.946426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.726 qpair failed and we were unable to recover it.
00:37:28.726 [2024-09-29 16:45:28.946568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.726 [2024-09-29 16:45:28.946602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.726 qpair failed and we were unable to recover it.
00:37:28.726 [2024-09-29 16:45:28.946741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.726 [2024-09-29 16:45:28.946794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.726 qpair failed and we were unable to recover it.
00:37:28.726 [2024-09-29 16:45:28.946964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.726 [2024-09-29 16:45:28.947015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.726 qpair failed and we were unable to recover it.
00:37:28.726 [2024-09-29 16:45:28.947149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.726 [2024-09-29 16:45:28.947200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.726 qpair failed and we were unable to recover it.
00:37:28.726 [2024-09-29 16:45:28.947449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.726 [2024-09-29 16:45:28.947518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.726 qpair failed and we were unable to recover it.
00:37:28.726 [2024-09-29 16:45:28.947698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.726 [2024-09-29 16:45:28.947742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.726 qpair failed and we were unable to recover it.
00:37:28.726 [2024-09-29 16:45:28.947905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.726 [2024-09-29 16:45:28.947957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.726 qpair failed and we were unable to recover it.
00:37:28.726 [2024-09-29 16:45:28.948152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.726 [2024-09-29 16:45:28.948209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.726 qpair failed and we were unable to recover it.
00:37:28.726 [2024-09-29 16:45:28.948352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.726 [2024-09-29 16:45:28.948386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.726 qpair failed and we were unable to recover it.
00:37:28.726 [2024-09-29 16:45:28.948552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.726 [2024-09-29 16:45:28.948586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.726 qpair failed and we were unable to recover it.
00:37:28.726 [2024-09-29 16:45:28.948753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.726 [2024-09-29 16:45:28.948791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.726 qpair failed and we were unable to recover it.
00:37:28.726 [2024-09-29 16:45:28.948909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.726 [2024-09-29 16:45:28.948957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.726 qpair failed and we were unable to recover it.
00:37:28.726 [2024-09-29 16:45:28.949170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.726 [2024-09-29 16:45:28.949242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.726 qpair failed and we were unable to recover it.
00:37:28.726 [2024-09-29 16:45:28.949465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.726 [2024-09-29 16:45:28.949524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.726 qpair failed and we were unable to recover it.
00:37:28.726 [2024-09-29 16:45:28.949694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.726 [2024-09-29 16:45:28.949733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.726 qpair failed and we were unable to recover it.
00:37:28.726 [2024-09-29 16:45:28.949892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.726 [2024-09-29 16:45:28.949944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.726 qpair failed and we were unable to recover it.
00:37:28.726 [2024-09-29 16:45:28.950102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.726 [2024-09-29 16:45:28.950155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.726 qpair failed and we were unable to recover it.
00:37:28.726 [2024-09-29 16:45:28.950347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.726 [2024-09-29 16:45:28.950414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.726 qpair failed and we were unable to recover it.
00:37:28.726 [2024-09-29 16:45:28.950573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.726 [2024-09-29 16:45:28.950612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.726 qpair failed and we were unable to recover it.
00:37:28.726 [2024-09-29 16:45:28.950795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.726 [2024-09-29 16:45:28.950849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.726 qpair failed and we were unable to recover it.
00:37:28.726 [2024-09-29 16:45:28.951001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.726 [2024-09-29 16:45:28.951055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.726 qpair failed and we were unable to recover it.
00:37:28.726 [2024-09-29 16:45:28.951270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.726 [2024-09-29 16:45:28.951339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.726 qpair failed and we were unable to recover it.
00:37:28.726 [2024-09-29 16:45:28.951607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.726 [2024-09-29 16:45:28.951648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.726 qpair failed and we were unable to recover it.
00:37:28.726 [2024-09-29 16:45:28.951798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.726 [2024-09-29 16:45:28.951838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.726 qpair failed and we were unable to recover it.
00:37:28.726 [2024-09-29 16:45:28.951990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.726 [2024-09-29 16:45:28.952043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.726 qpair failed and we were unable to recover it.
00:37:28.726 [2024-09-29 16:45:28.952237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.726 [2024-09-29 16:45:28.952277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.726 qpair failed and we were unable to recover it.
00:37:28.726 [2024-09-29 16:45:28.952507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.726 [2024-09-29 16:45:28.952564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.726 qpair failed and we were unable to recover it.
00:37:28.726 [2024-09-29 16:45:28.952727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.726 [2024-09-29 16:45:28.952781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.726 qpair failed and we were unable to recover it.
00:37:28.726 [2024-09-29 16:45:28.952972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.726 [2024-09-29 16:45:28.953023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.726 qpair failed and we were unable to recover it.
00:37:28.726 [2024-09-29 16:45:28.953203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.726 [2024-09-29 16:45:28.953260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.726 qpair failed and we were unable to recover it.
00:37:28.726 [2024-09-29 16:45:28.953376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.726 [2024-09-29 16:45:28.953409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.726 qpair failed and we were unable to recover it.
00:37:28.726 [2024-09-29 16:45:28.953569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.726 [2024-09-29 16:45:28.953603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.726 qpair failed and we were unable to recover it.
00:37:28.726 [2024-09-29 16:45:28.953771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.726 [2024-09-29 16:45:28.953822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.726 qpair failed and we were unable to recover it.
00:37:28.726 [2024-09-29 16:45:28.954013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.726 [2024-09-29 16:45:28.954065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.726 qpair failed and we were unable to recover it.
00:37:28.726 [2024-09-29 16:45:28.954229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.726 [2024-09-29 16:45:28.954285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.726 qpair failed and we were unable to recover it. 00:37:28.726 [2024-09-29 16:45:28.954395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.726 [2024-09-29 16:45:28.954430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.726 qpair failed and we were unable to recover it. 00:37:28.726 [2024-09-29 16:45:28.954566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.726 [2024-09-29 16:45:28.954600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.726 qpair failed and we were unable to recover it. 00:37:28.726 [2024-09-29 16:45:28.954749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.726 [2024-09-29 16:45:28.954784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.726 qpair failed and we were unable to recover it. 00:37:28.726 [2024-09-29 16:45:28.954915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.726 [2024-09-29 16:45:28.954952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.726 qpair failed and we were unable to recover it. 
00:37:28.726 [2024-09-29 16:45:28.955134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.726 [2024-09-29 16:45:28.955168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.726 qpair failed and we were unable to recover it. 00:37:28.726 [2024-09-29 16:45:28.955304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.726 [2024-09-29 16:45:28.955337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.726 qpair failed and we were unable to recover it. 00:37:28.727 [2024-09-29 16:45:28.955455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.955489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 00:37:28.727 [2024-09-29 16:45:28.955629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.955663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 00:37:28.727 [2024-09-29 16:45:28.955838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.955886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 
00:37:28.727 [2024-09-29 16:45:28.956003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.956038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 00:37:28.727 [2024-09-29 16:45:28.956280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.956335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 00:37:28.727 [2024-09-29 16:45:28.956446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.956480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 00:37:28.727 [2024-09-29 16:45:28.956649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.956692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 00:37:28.727 [2024-09-29 16:45:28.956835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.956887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 
00:37:28.727 [2024-09-29 16:45:28.957039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.957096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 00:37:28.727 [2024-09-29 16:45:28.957226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.957279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 00:37:28.727 [2024-09-29 16:45:28.957435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.957470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 00:37:28.727 [2024-09-29 16:45:28.957632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.957687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 00:37:28.727 [2024-09-29 16:45:28.957884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.957931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 
00:37:28.727 [2024-09-29 16:45:28.958051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.958086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 00:37:28.727 [2024-09-29 16:45:28.958246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.958316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 00:37:28.727 [2024-09-29 16:45:28.958470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.958507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 00:37:28.727 [2024-09-29 16:45:28.958686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.958739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 00:37:28.727 [2024-09-29 16:45:28.958872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.958931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 
00:37:28.727 [2024-09-29 16:45:28.959086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.959139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 00:37:28.727 [2024-09-29 16:45:28.959395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.959455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 00:37:28.727 [2024-09-29 16:45:28.959626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.959661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 00:37:28.727 [2024-09-29 16:45:28.959810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.959844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 00:37:28.727 [2024-09-29 16:45:28.959968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.960005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 
00:37:28.727 [2024-09-29 16:45:28.960240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.960301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 00:37:28.727 [2024-09-29 16:45:28.960527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.960584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 00:37:28.727 [2024-09-29 16:45:28.960727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.960760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 00:37:28.727 [2024-09-29 16:45:28.960904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.960937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 00:37:28.727 [2024-09-29 16:45:28.961170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.961244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 
00:37:28.727 [2024-09-29 16:45:28.961434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.961493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 00:37:28.727 [2024-09-29 16:45:28.961658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.961713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 00:37:28.727 [2024-09-29 16:45:28.961868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.961901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 00:37:28.727 [2024-09-29 16:45:28.962034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.962072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 00:37:28.727 [2024-09-29 16:45:28.962201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.962238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 
00:37:28.727 [2024-09-29 16:45:28.962449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.962486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 00:37:28.727 [2024-09-29 16:45:28.962636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.962698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 00:37:28.727 [2024-09-29 16:45:28.962852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.962886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 00:37:28.727 [2024-09-29 16:45:28.963043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.963081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 00:37:28.727 [2024-09-29 16:45:28.963258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.963295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 
00:37:28.727 [2024-09-29 16:45:28.963450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.963487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 00:37:28.727 [2024-09-29 16:45:28.963649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.963690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 00:37:28.727 [2024-09-29 16:45:28.963843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.963876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 00:37:28.727 [2024-09-29 16:45:28.964006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.964044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 00:37:28.727 [2024-09-29 16:45:28.964330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.964368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 
00:37:28.727 [2024-09-29 16:45:28.964538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.964571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 00:37:28.727 [2024-09-29 16:45:28.964727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.964761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 00:37:28.727 [2024-09-29 16:45:28.964920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.964957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 00:37:28.727 [2024-09-29 16:45:28.965099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.727 [2024-09-29 16:45:28.965136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.727 qpair failed and we were unable to recover it. 00:37:28.727 [2024-09-29 16:45:28.965292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.728 [2024-09-29 16:45:28.965328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.728 qpair failed and we were unable to recover it. 
00:37:28.728 [2024-09-29 16:45:28.965478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.728 [2024-09-29 16:45:28.965514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.728 qpair failed and we were unable to recover it. 00:37:28.728 [2024-09-29 16:45:28.965654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.728 [2024-09-29 16:45:28.965694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.728 qpair failed and we were unable to recover it. 00:37:28.728 [2024-09-29 16:45:28.965821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.728 [2024-09-29 16:45:28.965855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.728 qpair failed and we were unable to recover it. 00:37:28.728 [2024-09-29 16:45:28.965994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.728 [2024-09-29 16:45:28.966030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.728 qpair failed and we were unable to recover it. 00:37:28.728 [2024-09-29 16:45:28.966147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.728 [2024-09-29 16:45:28.966183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.728 qpair failed and we were unable to recover it. 
00:37:28.728 [2024-09-29 16:45:28.966335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.728 [2024-09-29 16:45:28.966372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.728 qpair failed and we were unable to recover it. 00:37:28.728 [2024-09-29 16:45:28.966552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.728 [2024-09-29 16:45:28.966600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.728 qpair failed and we were unable to recover it. 00:37:28.728 [2024-09-29 16:45:28.966775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.728 [2024-09-29 16:45:28.966829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.728 qpair failed and we were unable to recover it. 00:37:28.728 [2024-09-29 16:45:28.967016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.728 [2024-09-29 16:45:28.967083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.728 qpair failed and we were unable to recover it. 00:37:28.728 [2024-09-29 16:45:28.967258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.728 [2024-09-29 16:45:28.967307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.728 qpair failed and we were unable to recover it. 
00:37:28.728 [2024-09-29 16:45:28.967472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.728 [2024-09-29 16:45:28.967510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.728 qpair failed and we were unable to recover it. 00:37:28.728 [2024-09-29 16:45:28.967634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.728 [2024-09-29 16:45:28.967679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.728 qpair failed and we were unable to recover it. 00:37:28.728 [2024-09-29 16:45:28.967853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.728 [2024-09-29 16:45:28.967889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.728 qpair failed and we were unable to recover it. 00:37:28.728 [2024-09-29 16:45:28.968021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.728 [2024-09-29 16:45:28.968060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.728 qpair failed and we were unable to recover it. 00:37:28.728 [2024-09-29 16:45:28.968261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.728 [2024-09-29 16:45:28.968316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.728 qpair failed and we were unable to recover it. 
00:37:28.728 [2024-09-29 16:45:28.968435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.728 [2024-09-29 16:45:28.968471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.728 qpair failed and we were unable to recover it. 00:37:28.728 [2024-09-29 16:45:28.968590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.728 [2024-09-29 16:45:28.968624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.728 qpair failed and we were unable to recover it. 00:37:28.728 [2024-09-29 16:45:28.968787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.728 [2024-09-29 16:45:28.968840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.728 qpair failed and we were unable to recover it. 00:37:28.728 [2024-09-29 16:45:28.969008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.728 [2024-09-29 16:45:28.969048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.728 qpair failed and we were unable to recover it. 00:37:28.728 [2024-09-29 16:45:28.969202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.728 [2024-09-29 16:45:28.969255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.728 qpair failed and we were unable to recover it. 
00:37:28.728 [2024-09-29 16:45:28.969506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.728 [2024-09-29 16:45:28.969565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.728 qpair failed and we were unable to recover it. 00:37:28.728 [2024-09-29 16:45:28.969738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.728 [2024-09-29 16:45:28.969773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.728 qpair failed and we were unable to recover it. 00:37:28.728 [2024-09-29 16:45:28.969927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.728 [2024-09-29 16:45:28.969980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.728 qpair failed and we were unable to recover it. 00:37:28.728 [2024-09-29 16:45:28.970181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.728 [2024-09-29 16:45:28.970273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.728 qpair failed and we were unable to recover it. 00:37:28.728 [2024-09-29 16:45:28.970391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.728 [2024-09-29 16:45:28.970425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.728 qpair failed and we were unable to recover it. 
00:37:28.728 [2024-09-29 16:45:28.970569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.728 [2024-09-29 16:45:28.970604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.728 qpair failed and we were unable to recover it. 00:37:28.728 [2024-09-29 16:45:28.970758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.728 [2024-09-29 16:45:28.970810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.728 qpair failed and we were unable to recover it. 00:37:28.728 [2024-09-29 16:45:28.971005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.728 [2024-09-29 16:45:28.971058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.728 qpair failed and we were unable to recover it. 00:37:28.728 [2024-09-29 16:45:28.971306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.728 [2024-09-29 16:45:28.971347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.728 qpair failed and we were unable to recover it. 00:37:28.728 [2024-09-29 16:45:28.971561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.728 [2024-09-29 16:45:28.971620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.728 qpair failed and we were unable to recover it. 
00:37:28.728 [2024-09-29 16:45:28.971824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.728 [2024-09-29 16:45:28.971858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.728 qpair failed and we were unable to recover it. 00:37:28.728 [2024-09-29 16:45:28.972006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.728 [2024-09-29 16:45:28.972043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.728 qpair failed and we were unable to recover it. 00:37:28.728 [2024-09-29 16:45:28.972218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.728 [2024-09-29 16:45:28.972276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.728 qpair failed and we were unable to recover it. 00:37:28.728 [2024-09-29 16:45:28.972475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.728 [2024-09-29 16:45:28.972535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.728 qpair failed and we were unable to recover it. 00:37:28.728 [2024-09-29 16:45:28.972689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.728 [2024-09-29 16:45:28.972742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.728 qpair failed and we were unable to recover it. 
00:37:28.728 [2024-09-29 16:45:28.972874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.728 [2024-09-29 16:45:28.972922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.728 qpair failed and we were unable to recover it. 00:37:28.728 [2024-09-29 16:45:28.973157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.728 [2024-09-29 16:45:28.973196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.728 qpair failed and we were unable to recover it. 00:37:28.728 [2024-09-29 16:45:28.973355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.728 [2024-09-29 16:45:28.973392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.728 qpair failed and we were unable to recover it. 00:37:28.728 [2024-09-29 16:45:28.973569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.729 [2024-09-29 16:45:28.973607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.729 qpair failed and we were unable to recover it. 00:37:28.729 [2024-09-29 16:45:28.973769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.729 [2024-09-29 16:45:28.973813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.729 qpair failed and we were unable to recover it. 
00:37:28.729 [2024-09-29 16:45:28.973949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.729 [2024-09-29 16:45:28.973989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.729 qpair failed and we were unable to recover it. 00:37:28.729 [2024-09-29 16:45:28.974143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.729 [2024-09-29 16:45:28.974180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.729 qpair failed and we were unable to recover it. 00:37:28.729 [2024-09-29 16:45:28.974340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.729 [2024-09-29 16:45:28.974377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.729 qpair failed and we were unable to recover it. 00:37:28.729 [2024-09-29 16:45:28.974554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.729 [2024-09-29 16:45:28.974607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.729 qpair failed and we were unable to recover it. 00:37:28.729 [2024-09-29 16:45:28.974788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.729 [2024-09-29 16:45:28.974823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.729 qpair failed and we were unable to recover it. 
00:37:28.729 [2024-09-29 16:45:28.974996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.729 [2024-09-29 16:45:28.975032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.729 qpair failed and we were unable to recover it. 00:37:28.729 [2024-09-29 16:45:28.975161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.729 [2024-09-29 16:45:28.975199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.729 qpair failed and we were unable to recover it. 00:37:28.729 [2024-09-29 16:45:28.975357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.729 [2024-09-29 16:45:28.975394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.729 qpair failed and we were unable to recover it. 00:37:28.729 [2024-09-29 16:45:28.975572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.729 [2024-09-29 16:45:28.975620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.729 qpair failed and we were unable to recover it. 00:37:28.729 [2024-09-29 16:45:28.975777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.729 [2024-09-29 16:45:28.975817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.729 qpair failed and we were unable to recover it. 
00:37:28.729 [2024-09-29 16:45:28.975927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.729 [2024-09-29 16:45:28.975979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.729 qpair failed and we were unable to recover it. 00:37:28.729 [2024-09-29 16:45:28.976140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.729 [2024-09-29 16:45:28.976208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.729 qpair failed and we were unable to recover it. 00:37:28.729 [2024-09-29 16:45:28.976472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.729 [2024-09-29 16:45:28.976531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.729 qpair failed and we were unable to recover it. 00:37:28.729 [2024-09-29 16:45:28.976687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.729 [2024-09-29 16:45:28.976720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.729 qpair failed and we were unable to recover it. 00:37:28.729 [2024-09-29 16:45:28.976886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.729 [2024-09-29 16:45:28.976919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.729 qpair failed and we were unable to recover it. 
00:37:28.729 [2024-09-29 16:45:28.977054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.729 [2024-09-29 16:45:28.977104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.729 qpair failed and we were unable to recover it. 00:37:28.729 [2024-09-29 16:45:28.977240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.729 [2024-09-29 16:45:28.977276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.729 qpair failed and we were unable to recover it. 00:37:28.729 [2024-09-29 16:45:28.977521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.729 [2024-09-29 16:45:28.977580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.729 qpair failed and we were unable to recover it. 00:37:28.729 [2024-09-29 16:45:28.977762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.729 [2024-09-29 16:45:28.977810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.729 qpair failed and we were unable to recover it. 00:37:28.729 [2024-09-29 16:45:28.978010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.729 [2024-09-29 16:45:28.978063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.729 qpair failed and we were unable to recover it. 
00:37:28.729 [2024-09-29 16:45:28.978296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.729 [2024-09-29 16:45:28.978353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.729 qpair failed and we were unable to recover it. 00:37:28.729 [2024-09-29 16:45:28.978513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.729 [2024-09-29 16:45:28.978569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.729 qpair failed and we were unable to recover it. 00:37:28.729 [2024-09-29 16:45:28.978738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.729 [2024-09-29 16:45:28.978786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.729 qpair failed and we were unable to recover it. 00:37:28.729 [2024-09-29 16:45:28.978927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.729 [2024-09-29 16:45:28.978966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.729 qpair failed and we were unable to recover it. 00:37:28.729 [2024-09-29 16:45:28.979123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.729 [2024-09-29 16:45:28.979161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.729 qpair failed and we were unable to recover it. 
00:37:28.729 [2024-09-29 16:45:28.979313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.729 [2024-09-29 16:45:28.979350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.729 qpair failed and we were unable to recover it. 00:37:28.729 [2024-09-29 16:45:28.979506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.729 [2024-09-29 16:45:28.979543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.729 qpair failed and we were unable to recover it. 00:37:28.729 [2024-09-29 16:45:28.979706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.729 [2024-09-29 16:45:28.979740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.729 qpair failed and we were unable to recover it. 00:37:28.729 [2024-09-29 16:45:28.979903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.729 [2024-09-29 16:45:28.979941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.729 qpair failed and we were unable to recover it. 00:37:28.729 [2024-09-29 16:45:28.980123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.729 [2024-09-29 16:45:28.980160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.729 qpair failed and we were unable to recover it. 
00:37:28.729 [2024-09-29 16:45:28.980322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.729 [2024-09-29 16:45:28.980370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.729 qpair failed and we were unable to recover it. 00:37:28.729 [2024-09-29 16:45:28.980553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.729 [2024-09-29 16:45:28.980592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.729 qpair failed and we were unable to recover it. 00:37:28.729 [2024-09-29 16:45:28.980782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.729 [2024-09-29 16:45:28.980831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.729 qpair failed and we were unable to recover it. 00:37:28.729 [2024-09-29 16:45:28.980981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.729 [2024-09-29 16:45:28.981017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.729 qpair failed and we were unable to recover it. 00:37:28.729 [2024-09-29 16:45:28.981193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.729 [2024-09-29 16:45:28.981257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.729 qpair failed and we were unable to recover it. 
00:37:28.729 [2024-09-29 16:45:28.981446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.729 [2024-09-29 16:45:28.981507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.729 qpair failed and we were unable to recover it. 00:37:28.729 [2024-09-29 16:45:28.981684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.729 [2024-09-29 16:45:28.981730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.729 qpair failed and we were unable to recover it. 00:37:28.729 [2024-09-29 16:45:28.981845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.729 [2024-09-29 16:45:28.981879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.729 qpair failed and we were unable to recover it. 00:37:28.729 [2024-09-29 16:45:28.982060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.729 [2024-09-29 16:45:28.982097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.729 qpair failed and we were unable to recover it. 00:37:28.729 [2024-09-29 16:45:28.982289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.729 [2024-09-29 16:45:28.982327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.729 qpair failed and we were unable to recover it. 
00:37:28.729 [2024-09-29 16:45:28.982516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.729 [2024-09-29 16:45:28.982570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.729 qpair failed and we were unable to recover it. 00:37:28.729 [2024-09-29 16:45:28.982742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.729 [2024-09-29 16:45:28.982777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.729 qpair failed and we were unable to recover it. 00:37:28.729 [2024-09-29 16:45:28.982939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.729 [2024-09-29 16:45:28.982991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.729 qpair failed and we were unable to recover it. 00:37:28.729 [2024-09-29 16:45:28.983148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.729 [2024-09-29 16:45:28.983201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 00:37:28.730 [2024-09-29 16:45:28.983355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.983407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 
00:37:28.730 [2024-09-29 16:45:28.983573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.983621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 00:37:28.730 [2024-09-29 16:45:28.983799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.983833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 00:37:28.730 [2024-09-29 16:45:28.984020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.984073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 00:37:28.730 [2024-09-29 16:45:28.984258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.984296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 00:37:28.730 [2024-09-29 16:45:28.984513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.984584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 
00:37:28.730 [2024-09-29 16:45:28.984738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.984772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 00:37:28.730 [2024-09-29 16:45:28.984912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.984945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 00:37:28.730 [2024-09-29 16:45:28.985082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.985119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 00:37:28.730 [2024-09-29 16:45:28.985261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.985324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 00:37:28.730 [2024-09-29 16:45:28.985521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.985570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 
00:37:28.730 [2024-09-29 16:45:28.985751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.985805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 00:37:28.730 [2024-09-29 16:45:28.985973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.986013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 00:37:28.730 [2024-09-29 16:45:28.986261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.986317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 00:37:28.730 [2024-09-29 16:45:28.986533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.986588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 00:37:28.730 [2024-09-29 16:45:28.986746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.986780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 
00:37:28.730 [2024-09-29 16:45:28.986915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.986953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 00:37:28.730 [2024-09-29 16:45:28.987164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.987197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 00:37:28.730 [2024-09-29 16:45:28.987409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.987467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 00:37:28.730 [2024-09-29 16:45:28.987603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.987641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 00:37:28.730 [2024-09-29 16:45:28.987784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.987818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 
00:37:28.730 [2024-09-29 16:45:28.987985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.988033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 00:37:28.730 [2024-09-29 16:45:28.988184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.988220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 00:37:28.730 [2024-09-29 16:45:28.988371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.988438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 00:37:28.730 [2024-09-29 16:45:28.988556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.988591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 00:37:28.730 [2024-09-29 16:45:28.988752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.988807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 
00:37:28.730 [2024-09-29 16:45:28.988978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.989027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 00:37:28.730 [2024-09-29 16:45:28.989191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.989243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 00:37:28.730 [2024-09-29 16:45:28.989420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.989493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 00:37:28.730 [2024-09-29 16:45:28.989638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.989682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 00:37:28.730 [2024-09-29 16:45:28.989884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.989937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 
00:37:28.730 [2024-09-29 16:45:28.990166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.990206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 00:37:28.730 [2024-09-29 16:45:28.990470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.990529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 00:37:28.730 [2024-09-29 16:45:28.990659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.990704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 00:37:28.730 [2024-09-29 16:45:28.990862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.990916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 00:37:28.730 [2024-09-29 16:45:28.991051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.991088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 
00:37:28.730 [2024-09-29 16:45:28.991297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.991360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 00:37:28.730 [2024-09-29 16:45:28.991479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.991516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 00:37:28.730 [2024-09-29 16:45:28.991670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.991732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 00:37:28.730 [2024-09-29 16:45:28.991843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.991877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 00:37:28.730 [2024-09-29 16:45:28.992017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.992071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 
00:37:28.730 [2024-09-29 16:45:28.992232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.992284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 00:37:28.730 [2024-09-29 16:45:28.992544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.992618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 00:37:28.730 [2024-09-29 16:45:28.992782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.992818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 00:37:28.730 [2024-09-29 16:45:28.992967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.993020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 00:37:28.730 [2024-09-29 16:45:28.993152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.993198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 
00:37:28.730 [2024-09-29 16:45:28.993415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.993474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 00:37:28.730 [2024-09-29 16:45:28.993641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.730 [2024-09-29 16:45:28.993681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.730 qpair failed and we were unable to recover it. 00:37:28.730 [2024-09-29 16:45:28.993864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:28.993901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 00:37:28.731 [2024-09-29 16:45:28.994047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:28.994115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 00:37:28.731 [2024-09-29 16:45:28.994246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:28.994282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 
00:37:28.731 [2024-09-29 16:45:28.994500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:28.994559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 00:37:28.731 [2024-09-29 16:45:28.994760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:28.994797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 00:37:28.731 [2024-09-29 16:45:28.994934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:28.994967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 00:37:28.731 [2024-09-29 16:45:28.995097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:28.995134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 00:37:28.731 [2024-09-29 16:45:28.995289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:28.995326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 
00:37:28.731 [2024-09-29 16:45:28.995454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:28.995491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 00:37:28.731 [2024-09-29 16:45:28.995648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:28.995718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 00:37:28.731 [2024-09-29 16:45:28.995857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:28.995896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 00:37:28.731 [2024-09-29 16:45:28.996073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:28.996110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 00:37:28.731 [2024-09-29 16:45:28.996290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:28.996327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 
00:37:28.731 [2024-09-29 16:45:28.996477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:28.996514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 00:37:28.731 [2024-09-29 16:45:28.996683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:28.996725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 00:37:28.731 [2024-09-29 16:45:28.996864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:28.996897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 00:37:28.731 [2024-09-29 16:45:28.997091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:28.997128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 00:37:28.731 [2024-09-29 16:45:28.997254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:28.997291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 
00:37:28.731 [2024-09-29 16:45:28.997426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:28.997465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 00:37:28.731 [2024-09-29 16:45:28.997633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:28.997669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 00:37:28.731 [2024-09-29 16:45:28.997802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:28.997837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 00:37:28.731 [2024-09-29 16:45:28.997971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:28.998022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 00:37:28.731 [2024-09-29 16:45:28.998150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:28.998203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 
00:37:28.731 [2024-09-29 16:45:28.998393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:28.998445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 00:37:28.731 [2024-09-29 16:45:28.998617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:28.998651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 00:37:28.731 [2024-09-29 16:45:28.998786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:28.998820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 00:37:28.731 [2024-09-29 16:45:28.998953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:28.999001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 00:37:28.731 [2024-09-29 16:45:28.999214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:28.999276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 
00:37:28.731 [2024-09-29 16:45:28.999513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:28.999572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 00:37:28.731 [2024-09-29 16:45:28.999690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:28.999742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 00:37:28.731 [2024-09-29 16:45:28.999894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:28.999930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 00:37:28.731 [2024-09-29 16:45:29.000069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:29.000141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 00:37:28.731 [2024-09-29 16:45:29.000304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:29.000363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 
00:37:28.731 [2024-09-29 16:45:29.000507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:29.000541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 00:37:28.731 [2024-09-29 16:45:29.000701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:29.000749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 00:37:28.731 [2024-09-29 16:45:29.000869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:29.000904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 00:37:28.731 [2024-09-29 16:45:29.001030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:29.001064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 00:37:28.731 [2024-09-29 16:45:29.001178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:29.001216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 
00:37:28.731 [2024-09-29 16:45:29.001355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:29.001388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 00:37:28.731 [2024-09-29 16:45:29.001493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:29.001526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 00:37:28.731 [2024-09-29 16:45:29.001677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:29.001715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 00:37:28.731 [2024-09-29 16:45:29.001834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:29.001869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 00:37:28.731 [2024-09-29 16:45:29.002013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:29.002049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 
00:37:28.731 [2024-09-29 16:45:29.002166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:29.002200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 00:37:28.731 [2024-09-29 16:45:29.002343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:29.002377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 00:37:28.731 [2024-09-29 16:45:29.002546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:29.002580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 00:37:28.731 [2024-09-29 16:45:29.002730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:29.002765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 00:37:28.731 [2024-09-29 16:45:29.002913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:29.002947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 
00:37:28.731 [2024-09-29 16:45:29.003122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.731 [2024-09-29 16:45:29.003160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.731 qpair failed and we were unable to recover it. 00:37:28.732 [2024-09-29 16:45:29.003282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.732 [2024-09-29 16:45:29.003319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.732 qpair failed and we were unable to recover it. 00:37:28.732 [2024-09-29 16:45:29.003491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.732 [2024-09-29 16:45:29.003543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.732 qpair failed and we were unable to recover it. 00:37:28.732 [2024-09-29 16:45:29.003754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.732 [2024-09-29 16:45:29.003790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.732 qpair failed and we were unable to recover it. 00:37:28.732 [2024-09-29 16:45:29.003959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.732 [2024-09-29 16:45:29.004012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.732 qpair failed and we were unable to recover it. 
00:37:28.732 [2024-09-29 16:45:29.004199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.732 [2024-09-29 16:45:29.004251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.732 qpair failed and we were unable to recover it. 00:37:28.732 [2024-09-29 16:45:29.004393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.732 [2024-09-29 16:45:29.004448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.732 qpair failed and we were unable to recover it. 00:37:28.732 [2024-09-29 16:45:29.004594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.732 [2024-09-29 16:45:29.004629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.732 qpair failed and we were unable to recover it. 00:37:28.732 [2024-09-29 16:45:29.004844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.732 [2024-09-29 16:45:29.004896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.732 qpair failed and we were unable to recover it. 00:37:28.732 [2024-09-29 16:45:29.005085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.732 [2024-09-29 16:45:29.005120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.732 qpair failed and we were unable to recover it. 
00:37:28.732 [2024-09-29 16:45:29.005229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.732 [2024-09-29 16:45:29.005264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.732 qpair failed and we were unable to recover it. 00:37:28.732 [2024-09-29 16:45:29.005432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.732 [2024-09-29 16:45:29.005466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.732 qpair failed and we were unable to recover it. 00:37:28.732 [2024-09-29 16:45:29.005641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.732 [2024-09-29 16:45:29.005685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.732 qpair failed and we were unable to recover it. 00:37:28.732 [2024-09-29 16:45:29.005863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.732 [2024-09-29 16:45:29.005897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.732 qpair failed and we were unable to recover it. 00:37:28.732 [2024-09-29 16:45:29.006163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.732 [2024-09-29 16:45:29.006234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.732 qpair failed and we were unable to recover it. 
00:37:28.732 [2024-09-29 16:45:29.006446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.732 [2024-09-29 16:45:29.006508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.732 qpair failed and we were unable to recover it. 00:37:28.732 [2024-09-29 16:45:29.006687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.732 [2024-09-29 16:45:29.006740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.732 qpair failed and we were unable to recover it. 00:37:28.732 [2024-09-29 16:45:29.006889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.732 [2024-09-29 16:45:29.006926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.732 qpair failed and we were unable to recover it. 00:37:28.732 [2024-09-29 16:45:29.007095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.732 [2024-09-29 16:45:29.007132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.732 qpair failed and we were unable to recover it. 00:37:28.732 [2024-09-29 16:45:29.007256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.732 [2024-09-29 16:45:29.007295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.732 qpair failed and we were unable to recover it. 
00:37:28.732 [2024-09-29 16:45:29.007520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.732 [2024-09-29 16:45:29.007597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.732 qpair failed and we were unable to recover it. 00:37:28.732 [2024-09-29 16:45:29.007746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.732 [2024-09-29 16:45:29.007780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.732 qpair failed and we were unable to recover it. 00:37:28.732 [2024-09-29 16:45:29.007970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.732 [2024-09-29 16:45:29.008023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.732 qpair failed and we were unable to recover it. 00:37:28.732 [2024-09-29 16:45:29.008263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.732 [2024-09-29 16:45:29.008320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.732 qpair failed and we were unable to recover it. 00:37:28.732 [2024-09-29 16:45:29.008454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.732 [2024-09-29 16:45:29.008488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.732 qpair failed and we were unable to recover it. 
00:37:28.732 [2024-09-29 16:45:29.008632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.732 [2024-09-29 16:45:29.008666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.732 qpair failed and we were unable to recover it. 00:37:28.732 [2024-09-29 16:45:29.008831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.732 [2024-09-29 16:45:29.008870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.732 qpair failed and we were unable to recover it. 00:37:28.732 [2024-09-29 16:45:29.009020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.732 [2024-09-29 16:45:29.009073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.732 qpair failed and we were unable to recover it. 00:37:28.732 [2024-09-29 16:45:29.009280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.732 [2024-09-29 16:45:29.009314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.732 qpair failed and we were unable to recover it. 00:37:28.732 [2024-09-29 16:45:29.009539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.732 [2024-09-29 16:45:29.009603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.732 qpair failed and we were unable to recover it. 
00:37:28.732 [2024-09-29 16:45:29.009754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.732 [2024-09-29 16:45:29.009788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.732 qpair failed and we were unable to recover it.
00:37:28.732 [2024-09-29 16:45:29.009905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.732 [2024-09-29 16:45:29.009938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.732 qpair failed and we were unable to recover it.
00:37:28.732 [2024-09-29 16:45:29.010189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.732 [2024-09-29 16:45:29.010261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.732 qpair failed and we were unable to recover it.
00:37:28.732 [2024-09-29 16:45:29.010480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.732 [2024-09-29 16:45:29.010538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.732 qpair failed and we were unable to recover it.
00:37:28.732 [2024-09-29 16:45:29.010687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.732 [2024-09-29 16:45:29.010750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.732 qpair failed and we were unable to recover it.
00:37:28.732 [2024-09-29 16:45:29.010870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.732 [2024-09-29 16:45:29.010903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.732 qpair failed and we were unable to recover it.
00:37:28.732 [2024-09-29 16:45:29.011056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.732 [2024-09-29 16:45:29.011093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.732 qpair failed and we were unable to recover it.
00:37:28.732 [2024-09-29 16:45:29.011279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.732 [2024-09-29 16:45:29.011352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.732 qpair failed and we were unable to recover it.
00:37:28.732 [2024-09-29 16:45:29.011506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.732 [2024-09-29 16:45:29.011542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.732 qpair failed and we were unable to recover it.
00:37:28.732 [2024-09-29 16:45:29.011670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.732 [2024-09-29 16:45:29.011735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.732 qpair failed and we were unable to recover it.
00:37:28.732 [2024-09-29 16:45:29.011897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.732 [2024-09-29 16:45:29.011945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.732 qpair failed and we were unable to recover it.
00:37:28.732 [2024-09-29 16:45:29.012111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.732 [2024-09-29 16:45:29.012177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.732 qpair failed and we were unable to recover it.
00:37:28.732 [2024-09-29 16:45:29.012344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.732 [2024-09-29 16:45:29.012397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.732 qpair failed and we were unable to recover it.
00:37:28.732 [2024-09-29 16:45:29.012514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.732 [2024-09-29 16:45:29.012549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.732 qpair failed and we were unable to recover it.
00:37:28.732 [2024-09-29 16:45:29.012698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.732 [2024-09-29 16:45:29.012736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.732 qpair failed and we were unable to recover it.
00:37:28.732 [2024-09-29 16:45:29.012878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.732 [2024-09-29 16:45:29.012913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.732 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.013091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.013125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.013294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.013327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.013450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.013484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.013592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.013625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.013764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.013797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.013910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.013962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.014179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.014240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.014448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.014481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.014604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.014638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.014806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.014840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.015010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.015059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.015178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.015214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.015385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.015424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.015580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.015619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.015792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.015839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.015968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.016004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.016172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.016225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.016386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.016438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.016567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.016601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.016780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.016830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.016960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.016996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.017141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.017176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.017337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.017404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.017596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.017641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.017819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.017867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.018041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.018094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.018283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.018337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.018472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.018525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.018639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.018692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.018839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.018892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.019023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.019076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.019253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.019325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.019483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.019554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.019746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.019781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.019942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.019980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.020144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.020206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.020389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.020426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.020573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.020607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.020762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.020810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.021002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.021050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.021182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.021217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.021391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.021451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.021630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.021666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.021812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.021853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.022000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.022038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.022195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.022245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.022400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.022437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.022585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.022623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.022783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.022831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.022969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.023004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.023186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.023259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.733 qpair failed and we were unable to recover it.
00:37:28.733 [2024-09-29 16:45:29.023413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.733 [2024-09-29 16:45:29.023468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.023588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.023623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.023750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.023785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.023956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.023994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.024216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.024272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.024429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.024498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.024651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.024695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.024848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.024882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.025035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.025073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.025209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.025261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.025429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.025482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.025630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.025693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.025871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.025946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.026156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.026214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.026472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.026532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.026686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.026731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.026906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.026959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.027095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.027159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.027321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.027374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.027509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.027543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.027656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.027699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.027899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.027953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.028115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.028168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.028336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.028403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.028536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.028570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.028732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.028769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.028935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.028988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.029202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.029240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.029379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.029435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.029612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.029646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.029796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.029844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.030058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.030121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.030264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.030322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.030441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.030476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.030603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.030639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.030799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.030834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.030975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.031009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.031223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.031277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.031439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.031473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.031620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.031655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.031809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.031847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.031998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.032035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.032160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.032198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.032361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.032415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.032561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.032597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.032782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.032840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.032996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.033031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.033176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.734 [2024-09-29 16:45:29.033210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.734 qpair failed and we were unable to recover it.
00:37:28.734 [2024-09-29 16:45:29.033348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.734 [2024-09-29 16:45:29.033385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.734 qpair failed and we were unable to recover it. 00:37:28.734 [2024-09-29 16:45:29.033518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.734 [2024-09-29 16:45:29.033557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.734 qpair failed and we were unable to recover it. 00:37:28.734 [2024-09-29 16:45:29.033688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.734 [2024-09-29 16:45:29.033743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.734 qpair failed and we were unable to recover it. 00:37:28.734 [2024-09-29 16:45:29.033908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.033966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 00:37:28.735 [2024-09-29 16:45:29.034101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.034161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 
00:37:28.735 [2024-09-29 16:45:29.034327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.034379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 00:37:28.735 [2024-09-29 16:45:29.034523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.034557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 00:37:28.735 [2024-09-29 16:45:29.034693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.034741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 00:37:28.735 [2024-09-29 16:45:29.034878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.034925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 00:37:28.735 [2024-09-29 16:45:29.035047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.035082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 
00:37:28.735 [2024-09-29 16:45:29.035209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.035243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 00:37:28.735 [2024-09-29 16:45:29.035411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.035444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 00:37:28.735 [2024-09-29 16:45:29.035563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.035597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 00:37:28.735 [2024-09-29 16:45:29.035742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.035776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 00:37:28.735 [2024-09-29 16:45:29.035935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.035984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 
00:37:28.735 [2024-09-29 16:45:29.036143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.036191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 00:37:28.735 [2024-09-29 16:45:29.036354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.036409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 00:37:28.735 [2024-09-29 16:45:29.036531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.036565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 00:37:28.735 [2024-09-29 16:45:29.036698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.036733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 00:37:28.735 [2024-09-29 16:45:29.036860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.036900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 
00:37:28.735 [2024-09-29 16:45:29.037089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.037123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 00:37:28.735 [2024-09-29 16:45:29.037263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.037297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 00:37:28.735 [2024-09-29 16:45:29.037418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.037454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 00:37:28.735 [2024-09-29 16:45:29.037615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.037663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 00:37:28.735 [2024-09-29 16:45:29.037822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.037857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 
00:37:28.735 [2024-09-29 16:45:29.038002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.038050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 00:37:28.735 [2024-09-29 16:45:29.038196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.038260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 00:37:28.735 [2024-09-29 16:45:29.038412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.038469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 00:37:28.735 [2024-09-29 16:45:29.038610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.038644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 00:37:28.735 [2024-09-29 16:45:29.038817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.038855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 
00:37:28.735 [2024-09-29 16:45:29.039028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.039076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 00:37:28.735 [2024-09-29 16:45:29.039195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.039235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 00:37:28.735 [2024-09-29 16:45:29.039424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.039483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 00:37:28.735 [2024-09-29 16:45:29.039642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.039691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 00:37:28.735 [2024-09-29 16:45:29.039873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.039920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 
00:37:28.735 [2024-09-29 16:45:29.040212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.040268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 00:37:28.735 [2024-09-29 16:45:29.040487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.040524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 00:37:28.735 [2024-09-29 16:45:29.040683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.040737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 00:37:28.735 [2024-09-29 16:45:29.040876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.040910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 00:37:28.735 [2024-09-29 16:45:29.041029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.041082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 
00:37:28.735 [2024-09-29 16:45:29.041301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.041339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 00:37:28.735 [2024-09-29 16:45:29.041525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.041562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 00:37:28.735 [2024-09-29 16:45:29.041735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.041783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 00:37:28.735 [2024-09-29 16:45:29.041939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.042006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 00:37:28.735 [2024-09-29 16:45:29.042183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.042248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 
00:37:28.735 [2024-09-29 16:45:29.042413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.042452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 00:37:28.735 [2024-09-29 16:45:29.042637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.042685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 00:37:28.735 [2024-09-29 16:45:29.042838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.042873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 00:37:28.735 [2024-09-29 16:45:29.043035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.043072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 00:37:28.735 [2024-09-29 16:45:29.043294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.735 [2024-09-29 16:45:29.043354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.735 qpair failed and we were unable to recover it. 
00:37:28.735 [2024-09-29 16:45:29.043509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.736 [2024-09-29 16:45:29.043558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.736 qpair failed and we were unable to recover it. 00:37:28.736 [2024-09-29 16:45:29.043717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.736 [2024-09-29 16:45:29.043754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.736 qpair failed and we were unable to recover it. 00:37:28.736 [2024-09-29 16:45:29.043874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.736 [2024-09-29 16:45:29.043910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.736 qpair failed and we were unable to recover it. 00:37:28.736 [2024-09-29 16:45:29.044080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.736 [2024-09-29 16:45:29.044114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.736 qpair failed and we were unable to recover it. 00:37:28.736 [2024-09-29 16:45:29.044350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.736 [2024-09-29 16:45:29.044410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.736 qpair failed and we were unable to recover it. 
00:37:28.736 [2024-09-29 16:45:29.044581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.736 [2024-09-29 16:45:29.044615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.736 qpair failed and we were unable to recover it. 00:37:28.736 [2024-09-29 16:45:29.044750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.736 [2024-09-29 16:45:29.044785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.736 qpair failed and we were unable to recover it. 00:37:28.736 [2024-09-29 16:45:29.044959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.736 [2024-09-29 16:45:29.044997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.736 qpair failed and we were unable to recover it. 00:37:28.736 [2024-09-29 16:45:29.045132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.736 [2024-09-29 16:45:29.045171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.736 qpair failed and we were unable to recover it. 00:37:28.736 [2024-09-29 16:45:29.045356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.736 [2024-09-29 16:45:29.045429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.736 qpair failed and we were unable to recover it. 
00:37:28.736 [2024-09-29 16:45:29.045581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.736 [2024-09-29 16:45:29.045617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.736 qpair failed and we were unable to recover it. 00:37:28.736 [2024-09-29 16:45:29.045778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.736 [2024-09-29 16:45:29.045832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.736 qpair failed and we were unable to recover it. 00:37:28.736 [2024-09-29 16:45:29.045961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.736 [2024-09-29 16:45:29.046002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.736 qpair failed and we were unable to recover it. 00:37:28.736 [2024-09-29 16:45:29.046191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.736 [2024-09-29 16:45:29.046251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.736 qpair failed and we were unable to recover it. 00:37:28.736 [2024-09-29 16:45:29.046433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.736 [2024-09-29 16:45:29.046467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.736 qpair failed and we were unable to recover it. 
00:37:28.736 [2024-09-29 16:45:29.046611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.736 [2024-09-29 16:45:29.046646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.736 qpair failed and we were unable to recover it. 00:37:28.736 [2024-09-29 16:45:29.046787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.736 [2024-09-29 16:45:29.046835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.736 qpair failed and we were unable to recover it. 00:37:28.736 [2024-09-29 16:45:29.047008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.736 [2024-09-29 16:45:29.047077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.736 qpair failed and we were unable to recover it. 00:37:28.736 [2024-09-29 16:45:29.047238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.736 [2024-09-29 16:45:29.047298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.736 qpair failed and we were unable to recover it. 00:37:28.736 [2024-09-29 16:45:29.047463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.736 [2024-09-29 16:45:29.047531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.736 qpair failed and we were unable to recover it. 
00:37:28.736 [2024-09-29 16:45:29.047688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.736 [2024-09-29 16:45:29.047724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.736 qpair failed and we were unable to recover it. 00:37:28.736 [2024-09-29 16:45:29.047868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.736 [2024-09-29 16:45:29.047909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.736 qpair failed and we were unable to recover it. 00:37:28.736 [2024-09-29 16:45:29.048079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.736 [2024-09-29 16:45:29.048131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.736 qpair failed and we were unable to recover it. 00:37:28.736 [2024-09-29 16:45:29.048296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.736 [2024-09-29 16:45:29.048358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.736 qpair failed and we were unable to recover it. 00:37:28.736 [2024-09-29 16:45:29.048471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.736 [2024-09-29 16:45:29.048504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.736 qpair failed and we were unable to recover it. 
00:37:28.736 [2024-09-29 16:45:29.048661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.736 [2024-09-29 16:45:29.048715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.736 qpair failed and we were unable to recover it. 00:37:28.736 [2024-09-29 16:45:29.048854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.736 [2024-09-29 16:45:29.048901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.736 qpair failed and we were unable to recover it. 00:37:28.736 [2024-09-29 16:45:29.049107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.736 [2024-09-29 16:45:29.049167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.736 qpair failed and we were unable to recover it. 00:37:28.736 [2024-09-29 16:45:29.049299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.736 [2024-09-29 16:45:29.049351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.736 qpair failed and we were unable to recover it. 00:37:28.736 [2024-09-29 16:45:29.049499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.736 [2024-09-29 16:45:29.049554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.736 qpair failed and we were unable to recover it. 
00:37:28.736 [2024-09-29 16:45:29.049747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.736 [2024-09-29 16:45:29.049796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.736 qpair failed and we were unable to recover it. 00:37:28.736 [2024-09-29 16:45:29.049926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.736 [2024-09-29 16:45:29.049962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.736 qpair failed and we were unable to recover it. 00:37:28.736 [2024-09-29 16:45:29.050100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.736 [2024-09-29 16:45:29.050141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.736 qpair failed and we were unable to recover it. 00:37:28.736 [2024-09-29 16:45:29.050318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.736 [2024-09-29 16:45:29.050356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.736 qpair failed and we were unable to recover it. 00:37:28.736 [2024-09-29 16:45:29.050514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.736 [2024-09-29 16:45:29.050552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.736 qpair failed and we were unable to recover it. 
00:37:28.736 [2024-09-29 16:45:29.050685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.736 [2024-09-29 16:45:29.050739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.736 qpair failed and we were unable to recover it.
00:37:28.736 [2024-09-29 16:45:29.050888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.736 [2024-09-29 16:45:29.050921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.736 qpair failed and we were unable to recover it.
00:37:28.736 [2024-09-29 16:45:29.051074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.736 [2024-09-29 16:45:29.051126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.736 qpair failed and we were unable to recover it.
00:37:28.736 [2024-09-29 16:45:29.051308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.736 [2024-09-29 16:45:29.051345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.736 qpair failed and we were unable to recover it.
00:37:28.736 [2024-09-29 16:45:29.051473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.736 [2024-09-29 16:45:29.051509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.736 qpair failed and we were unable to recover it.
00:37:28.736 [2024-09-29 16:45:29.051650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.736 [2024-09-29 16:45:29.051691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.736 qpair failed and we were unable to recover it.
00:37:28.736 [2024-09-29 16:45:29.051808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.736 [2024-09-29 16:45:29.051841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.736 qpair failed and we were unable to recover it.
00:37:28.736 [2024-09-29 16:45:29.051969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.736 [2024-09-29 16:45:29.052006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.736 qpair failed and we were unable to recover it.
00:37:28.736 [2024-09-29 16:45:29.052182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.736 [2024-09-29 16:45:29.052218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.736 qpair failed and we were unable to recover it.
00:37:28.736 [2024-09-29 16:45:29.052379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.736 [2024-09-29 16:45:29.052415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.736 qpair failed and we were unable to recover it.
00:37:28.736 [2024-09-29 16:45:29.052553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.736 [2024-09-29 16:45:29.052593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.736 qpair failed and we were unable to recover it.
00:37:28.736 [2024-09-29 16:45:29.052759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.736 [2024-09-29 16:45:29.052794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.736 qpair failed and we were unable to recover it.
00:37:28.736 [2024-09-29 16:45:29.052964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.736 [2024-09-29 16:45:29.053010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.736 qpair failed and we were unable to recover it.
00:37:28.736 [2024-09-29 16:45:29.053139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.736 [2024-09-29 16:45:29.053173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.736 qpair failed and we were unable to recover it.
00:37:28.736 [2024-09-29 16:45:29.053339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.736 [2024-09-29 16:45:29.053377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.736 qpair failed and we were unable to recover it.
00:37:28.736 [2024-09-29 16:45:29.053506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.736 [2024-09-29 16:45:29.053545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.736 qpair failed and we were unable to recover it.
00:37:28.736 [2024-09-29 16:45:29.053709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.736 [2024-09-29 16:45:29.053768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.736 qpair failed and we were unable to recover it.
00:37:28.736 [2024-09-29 16:45:29.053902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.736 [2024-09-29 16:45:29.053950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.736 qpair failed and we were unable to recover it.
00:37:28.736 [2024-09-29 16:45:29.054108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.736 [2024-09-29 16:45:29.054161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.736 qpair failed and we were unable to recover it.
00:37:28.736 [2024-09-29 16:45:29.054353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.054392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.054522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.054561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.054728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.054765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.054923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.054957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.055068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.055104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.055316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.055374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.055550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.055592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.055787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.055840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.055993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.056035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.056194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.056263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.056465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.056503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.056665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.056727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.056886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.056933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.057108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.057180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.057395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.057454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.057615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.057649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.057776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.057810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.057956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.058009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.058157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.058194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.058347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.058386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.058543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.058580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.058750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.058799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.058963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.058999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.059110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.059144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.059286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.059320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.059499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.059570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.059734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.059782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.059903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.059937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.060083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.060117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.060295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.060333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.060459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.060496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.060667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.060709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.060844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.060892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.061035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.061076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.061226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.061264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.061410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.061443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.061632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.061665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.061787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.061820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.061963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.061998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.062161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.062220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.062360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.062414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.062539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.062574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.062715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.062751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.062867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.062903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.063047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.063082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.063197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.063231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.063339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.063373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.063505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.063559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.063699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.063735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.063880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.063915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.064031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.064066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.064174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.064207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.064341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.064374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.064520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.064555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.064732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.064780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.064947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.064995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.737 qpair failed and we were unable to recover it.
00:37:28.737 [2024-09-29 16:45:29.065120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.737 [2024-09-29 16:45:29.065154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.738 qpair failed and we were unable to recover it.
00:37:28.738 [2024-09-29 16:45:29.065298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.738 [2024-09-29 16:45:29.065331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.738 qpair failed and we were unable to recover it.
00:37:28.738 [2024-09-29 16:45:29.065473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.738 [2024-09-29 16:45:29.065506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.738 qpair failed and we were unable to recover it.
00:37:28.738 [2024-09-29 16:45:29.065628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.738 [2024-09-29 16:45:29.065661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.738 qpair failed and we were unable to recover it.
00:37:28.738 [2024-09-29 16:45:29.065818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.738 [2024-09-29 16:45:29.065853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.738 qpair failed and we were unable to recover it.
00:37:28.738 [2024-09-29 16:45:29.065982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.738 [2024-09-29 16:45:29.066016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.738 qpair failed and we were unable to recover it.
00:37:28.738 [2024-09-29 16:45:29.066137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.738 [2024-09-29 16:45:29.066170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.738 qpair failed and we were unable to recover it.
00:37:28.738 [2024-09-29 16:45:29.066309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.738 [2024-09-29 16:45:29.066343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.738 qpair failed and we were unable to recover it.
00:37:28.738 [2024-09-29 16:45:29.066514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.738 [2024-09-29 16:45:29.066548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.738 qpair failed and we were unable to recover it.
00:37:28.738 [2024-09-29 16:45:29.066668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.738 [2024-09-29 16:45:29.066710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.738 qpair failed and we were unable to recover it.
00:37:28.738 [2024-09-29 16:45:29.066856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.738 [2024-09-29 16:45:29.066890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.738 qpair failed and we were unable to recover it.
00:37:28.738 [2024-09-29 16:45:29.067031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.738 [2024-09-29 16:45:29.067065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.738 qpair failed and we were unable to recover it.
00:37:28.738 [2024-09-29 16:45:29.067178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.738 [2024-09-29 16:45:29.067211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.738 qpair failed and we were unable to recover it.
00:37:28.738 [2024-09-29 16:45:29.067366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.738 [2024-09-29 16:45:29.067399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.738 qpair failed and we were unable to recover it.
00:37:28.738 [2024-09-29 16:45:29.067513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.738 [2024-09-29 16:45:29.067547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.738 qpair failed and we were unable to recover it.
00:37:28.738 [2024-09-29 16:45:29.067657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.738 [2024-09-29 16:45:29.067698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.738 qpair failed and we were unable to recover it.
00:37:28.738 [2024-09-29 16:45:29.067865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.738 [2024-09-29 16:45:29.067900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.738 qpair failed and we were unable to recover it.
00:37:28.738 [2024-09-29 16:45:29.068024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.738 [2024-09-29 16:45:29.068059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.738 qpair failed and we were unable to recover it.
00:37:28.738 [2024-09-29 16:45:29.068210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.738 [2024-09-29 16:45:29.068244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.738 qpair failed and we were unable to recover it.
00:37:28.738 [2024-09-29 16:45:29.068362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.738 [2024-09-29 16:45:29.068396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.738 qpair failed and we were unable to recover it.
00:37:28.738 [2024-09-29 16:45:29.068524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.738 [2024-09-29 16:45:29.068571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.738 qpair failed and we were unable to recover it. 00:37:28.738 [2024-09-29 16:45:29.068717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.738 [2024-09-29 16:45:29.068765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.738 qpair failed and we were unable to recover it. 00:37:28.738 [2024-09-29 16:45:29.068921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.738 [2024-09-29 16:45:29.068956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.738 qpair failed and we were unable to recover it. 00:37:28.738 [2024-09-29 16:45:29.069098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.738 [2024-09-29 16:45:29.069133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.738 qpair failed and we were unable to recover it. 00:37:28.738 [2024-09-29 16:45:29.069246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.738 [2024-09-29 16:45:29.069280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.738 qpair failed and we were unable to recover it. 
00:37:28.738 [2024-09-29 16:45:29.069427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.738 [2024-09-29 16:45:29.069462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.738 qpair failed and we were unable to recover it. 00:37:28.738 [2024-09-29 16:45:29.069587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.738 [2024-09-29 16:45:29.069622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.738 qpair failed and we were unable to recover it. 00:37:28.738 [2024-09-29 16:45:29.069765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.738 [2024-09-29 16:45:29.069814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.738 qpair failed and we were unable to recover it. 00:37:28.738 [2024-09-29 16:45:29.069942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.738 [2024-09-29 16:45:29.069977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.738 qpair failed and we were unable to recover it. 00:37:28.738 [2024-09-29 16:45:29.070132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.738 [2024-09-29 16:45:29.070168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.738 qpair failed and we were unable to recover it. 
00:37:28.738 [2024-09-29 16:45:29.070313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.738 [2024-09-29 16:45:29.070347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.738 qpair failed and we were unable to recover it. 00:37:28.738 [2024-09-29 16:45:29.070456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.738 [2024-09-29 16:45:29.070495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.738 qpair failed and we were unable to recover it. 00:37:28.738 [2024-09-29 16:45:29.070615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.738 [2024-09-29 16:45:29.070650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.738 qpair failed and we were unable to recover it. 00:37:28.738 [2024-09-29 16:45:29.070810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.738 [2024-09-29 16:45:29.070846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.738 qpair failed and we were unable to recover it. 00:37:28.738 [2024-09-29 16:45:29.070964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.738 [2024-09-29 16:45:29.070998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.738 qpair failed and we were unable to recover it. 
00:37:28.738 [2024-09-29 16:45:29.071135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.738 [2024-09-29 16:45:29.071169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.738 qpair failed and we were unable to recover it. 00:37:28.738 [2024-09-29 16:45:29.071289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.738 [2024-09-29 16:45:29.071323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.738 qpair failed and we were unable to recover it. 00:37:28.738 [2024-09-29 16:45:29.071452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.738 [2024-09-29 16:45:29.071489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.738 qpair failed and we were unable to recover it. 00:37:28.738 [2024-09-29 16:45:29.071605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.738 [2024-09-29 16:45:29.071639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.738 qpair failed and we were unable to recover it. 00:37:28.738 [2024-09-29 16:45:29.071765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.738 [2024-09-29 16:45:29.071801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.738 qpair failed and we were unable to recover it. 
00:37:28.738 [2024-09-29 16:45:29.071962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.738 [2024-09-29 16:45:29.071996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.738 qpair failed and we were unable to recover it. 00:37:28.738 [2024-09-29 16:45:29.072151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.738 [2024-09-29 16:45:29.072184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.738 qpair failed and we were unable to recover it. 00:37:28.738 [2024-09-29 16:45:29.072293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.738 [2024-09-29 16:45:29.072326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.738 qpair failed and we were unable to recover it. 00:37:28.738 [2024-09-29 16:45:29.072468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.738 [2024-09-29 16:45:29.072502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.738 qpair failed and we were unable to recover it. 00:37:28.738 [2024-09-29 16:45:29.072621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.738 [2024-09-29 16:45:29.072654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.738 qpair failed and we were unable to recover it. 
00:37:28.738 [2024-09-29 16:45:29.072799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.738 [2024-09-29 16:45:29.072847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.738 qpair failed and we were unable to recover it. 00:37:28.738 [2024-09-29 16:45:29.072974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.738 [2024-09-29 16:45:29.073009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.738 qpair failed and we were unable to recover it. 00:37:28.738 [2024-09-29 16:45:29.073131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.738 [2024-09-29 16:45:29.073165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.738 qpair failed and we were unable to recover it. 00:37:28.738 [2024-09-29 16:45:29.073281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.738 [2024-09-29 16:45:29.073314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.738 qpair failed and we were unable to recover it. 00:37:28.738 [2024-09-29 16:45:29.073475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.738 [2024-09-29 16:45:29.073511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.738 qpair failed and we were unable to recover it. 
00:37:28.738 [2024-09-29 16:45:29.073650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.738 [2024-09-29 16:45:29.073704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.738 qpair failed and we were unable to recover it. 00:37:28.738 [2024-09-29 16:45:29.073869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.073905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.074024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.074057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.074198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.074232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.074404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.074438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 
00:37:28.739 [2024-09-29 16:45:29.074572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.074620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.074764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.074825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.075004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.075051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.075207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.075243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.075387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.075421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 
00:37:28.739 [2024-09-29 16:45:29.075543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.075577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.075703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.075739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.075849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.075882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.076023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.076057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.076208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.076241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 
00:37:28.739 [2024-09-29 16:45:29.076365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.076399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.076546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.076591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.076794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.076842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.076976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.077014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.077130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.077164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 
00:37:28.739 [2024-09-29 16:45:29.077274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.077307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.077423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.077463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.077623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.077680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.077827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.077862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.077984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.078022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 
00:37:28.739 [2024-09-29 16:45:29.078159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.078192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.078326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.078361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.078510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.078543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.078717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.078765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.078930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.078966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 
00:37:28.739 [2024-09-29 16:45:29.079111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.079145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.079295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.079328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.079466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.079499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.079633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.079690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.079860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.079895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 
00:37:28.739 [2024-09-29 16:45:29.080047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.080080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.080191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.080225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.080350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.080384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.080497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.080530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.080648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.080695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 
00:37:28.739 [2024-09-29 16:45:29.080867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.080915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.081065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.081101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.081253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.081288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.081436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.081471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.081601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.081634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 
00:37:28.739 [2024-09-29 16:45:29.081765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.081800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.081938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.081985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.082139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.082175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.082300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.082336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.082479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.082511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 
00:37:28.739 [2024-09-29 16:45:29.082653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.082692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.082804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.082838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.083006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.083054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.083212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.083249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.083424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.083459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 
00:37:28.739 [2024-09-29 16:45:29.083612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.083646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.083816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.083851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.084004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.084038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.084158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.084190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.084310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.084343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 
00:37:28.739 [2024-09-29 16:45:29.084450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.084484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.084635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.084681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.084827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.084861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.085008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.739 [2024-09-29 16:45:29.085041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.739 qpair failed and we were unable to recover it. 00:37:28.739 [2024-09-29 16:45:29.085154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.085187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 
00:37:28.740 [2024-09-29 16:45:29.085317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.085366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.085524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.085561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.085720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.085755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.085870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.085905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.086055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.086089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 
00:37:28.740 [2024-09-29 16:45:29.086217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.086252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.086375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.086411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.086524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.086560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.086700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.086735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.086850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.086884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 
00:37:28.740 [2024-09-29 16:45:29.087057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.087105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.087257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.087293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.087443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.087477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.087638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.087680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.087813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.087849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 
00:37:28.740 [2024-09-29 16:45:29.087975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.088011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.088149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.088183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.088295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.088328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.088445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.088479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.088629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.088662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 
00:37:28.740 [2024-09-29 16:45:29.088798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.088831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.088954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.088990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.089111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.089148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.089303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.089338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.089464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.089499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 
00:37:28.740 [2024-09-29 16:45:29.089614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.089649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.089777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.089812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.089939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.089973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.090082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.090117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.090267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.090301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 
00:37:28.740 [2024-09-29 16:45:29.090450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.090484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.090597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.090630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.090804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.090853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.091013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.091048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.091168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.091202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 
00:37:28.740 [2024-09-29 16:45:29.091325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.091359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.091484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.091524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.091669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.091709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.091831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.091866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.091986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.092024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 
00:37:28.740 [2024-09-29 16:45:29.092178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.092213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.092355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.092391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.092509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.092543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.092680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.092722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.092834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.092867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 
00:37:28.740 [2024-09-29 16:45:29.092987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.093020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.093142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.093176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.093342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.093376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.093489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.093525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.093659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.093720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 
00:37:28.740 [2024-09-29 16:45:29.093852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.093888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.094004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.094044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.094190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.094223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.094350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.094386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.094504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.094538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 
00:37:28.740 [2024-09-29 16:45:29.094689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.094736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.094844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.094878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.095028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.095062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.095173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.095208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.095377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.095412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 
00:37:28.740 [2024-09-29 16:45:29.095526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.095560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.740 qpair failed and we were unable to recover it. 00:37:28.740 [2024-09-29 16:45:29.095681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.740 [2024-09-29 16:45:29.095719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.095887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.095934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.096067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.096103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.096244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.096278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 
00:37:28.741 [2024-09-29 16:45:29.096402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.096436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.096579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.096612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.096747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.096783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.096941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.096976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.097088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.097121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 
00:37:28.741 [2024-09-29 16:45:29.097239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.097272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.097413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.097447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.097591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.097624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.097742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.097776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.097939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.097974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 
00:37:28.741 [2024-09-29 16:45:29.098091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.098126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.098265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.098304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.098420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.098453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.098621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.098654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.098806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.098853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 
00:37:28.741 [2024-09-29 16:45:29.098983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.099017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.099127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.099161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.099283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.099316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.099441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.099474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.099618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.099652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 
00:37:28.741 [2024-09-29 16:45:29.099787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.099822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.099952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.099988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.100109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.100143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.100287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.100319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.100435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.100469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 
00:37:28.741 [2024-09-29 16:45:29.100594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.100628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.100750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.100785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.100904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.100939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.101086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.101121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.101263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.101297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 
00:37:28.741 [2024-09-29 16:45:29.101443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.101478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.101636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.101693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.101833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.101868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.102011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.102044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.102162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.102195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 
00:37:28.741 [2024-09-29 16:45:29.102310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.102342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.102466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.102501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.102615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.102651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.102805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.102854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.102986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.103022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 
00:37:28.741 [2024-09-29 16:45:29.103184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.103218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.103329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.103361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.103507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.103541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.103696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.103738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.103877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.103909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 
00:37:28.741 [2024-09-29 16:45:29.104034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.104067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.104180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.104213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.104370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.104403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.104517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.104550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.104694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.104727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 
00:37:28.741 [2024-09-29 16:45:29.104857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.104904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.105053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.105094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.105230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.105278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.105404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.105437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.105545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.105578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 
00:37:28.741 [2024-09-29 16:45:29.105706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.105749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.105869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.105902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.106025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.106057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.106195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.741 [2024-09-29 16:45:29.106228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.741 qpair failed and we were unable to recover it. 00:37:28.741 [2024-09-29 16:45:29.106362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.106395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 
00:37:28.742 [2024-09-29 16:45:29.106519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.106551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 00:37:28.742 [2024-09-29 16:45:29.106695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.106738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 00:37:28.742 [2024-09-29 16:45:29.106841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.106874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 00:37:28.742 [2024-09-29 16:45:29.106999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.107034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 00:37:28.742 [2024-09-29 16:45:29.107179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.107211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 
00:37:28.742 [2024-09-29 16:45:29.107360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.107393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 00:37:28.742 [2024-09-29 16:45:29.107524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.107571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 00:37:28.742 [2024-09-29 16:45:29.107729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.107778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 00:37:28.742 [2024-09-29 16:45:29.107903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.107940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 00:37:28.742 [2024-09-29 16:45:29.108091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.108126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 
00:37:28.742 [2024-09-29 16:45:29.108243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.108286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 00:37:28.742 [2024-09-29 16:45:29.108426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.108474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 00:37:28.742 [2024-09-29 16:45:29.108626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.108660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 00:37:28.742 [2024-09-29 16:45:29.108800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.108833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 00:37:28.742 [2024-09-29 16:45:29.108952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.108985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 
00:37:28.742 [2024-09-29 16:45:29.109100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.109133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 00:37:28.742 [2024-09-29 16:45:29.109240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.109273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 00:37:28.742 [2024-09-29 16:45:29.109421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.109455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 00:37:28.742 [2024-09-29 16:45:29.109618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.109666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 00:37:28.742 [2024-09-29 16:45:29.109828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.109876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 
00:37:28.742 [2024-09-29 16:45:29.110049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.110085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 00:37:28.742 [2024-09-29 16:45:29.110206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.110241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 00:37:28.742 [2024-09-29 16:45:29.110387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.110421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 00:37:28.742 [2024-09-29 16:45:29.110592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.110626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 00:37:28.742 [2024-09-29 16:45:29.110770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.110817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 
00:37:28.742 [2024-09-29 16:45:29.111018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.111056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 00:37:28.742 [2024-09-29 16:45:29.111204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.111238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 00:37:28.742 [2024-09-29 16:45:29.111386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.111419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 00:37:28.742 [2024-09-29 16:45:29.111536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.111570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 00:37:28.742 [2024-09-29 16:45:29.111696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.111733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 
00:37:28.742 [2024-09-29 16:45:29.111872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.111905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 00:37:28.742 [2024-09-29 16:45:29.112051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.112089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 00:37:28.742 [2024-09-29 16:45:29.112209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.112242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 00:37:28.742 [2024-09-29 16:45:29.112359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.112393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 00:37:28.742 [2024-09-29 16:45:29.112550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.112588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 
00:37:28.742 [2024-09-29 16:45:29.112734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.112783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 00:37:28.742 [2024-09-29 16:45:29.112904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.112939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 00:37:28.742 [2024-09-29 16:45:29.113086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.113120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 00:37:28.742 [2024-09-29 16:45:29.113234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.113268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 00:37:28.742 [2024-09-29 16:45:29.113410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.113444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 
00:37:28.742 [2024-09-29 16:45:29.113561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.113597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 00:37:28.742 [2024-09-29 16:45:29.113747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.113794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 00:37:28.742 [2024-09-29 16:45:29.113927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.113969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 00:37:28.742 [2024-09-29 16:45:29.114122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.114168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 00:37:28.742 [2024-09-29 16:45:29.114288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.114321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 
00:37:28.742 [2024-09-29 16:45:29.114442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.114478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 00:37:28.742 [2024-09-29 16:45:29.114594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.114627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 00:37:28.742 [2024-09-29 16:45:29.114785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.114820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 00:37:28.742 [2024-09-29 16:45:29.114940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.114973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 00:37:28.742 [2024-09-29 16:45:29.115106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.115140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 
00:37:28.742 [2024-09-29 16:45:29.115284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.115318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 00:37:28.742 [2024-09-29 16:45:29.115449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.115497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 00:37:28.742 [2024-09-29 16:45:29.115667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.115713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 00:37:28.742 [2024-09-29 16:45:29.115874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.115908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 00:37:28.742 [2024-09-29 16:45:29.116076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.742 [2024-09-29 16:45:29.116109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.742 qpair failed and we were unable to recover it. 
00:37:28.742 [2024-09-29 16:45:29.116251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.742 [2024-09-29 16:45:29.116284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.742 qpair failed and we were unable to recover it.
00:37:28.742 [2024-09-29 16:45:29.116396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.742 [2024-09-29 16:45:29.116431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.742 qpair failed and we were unable to recover it.
00:37:28.742 [2024-09-29 16:45:29.116565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.742 [2024-09-29 16:45:29.116613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.742 qpair failed and we were unable to recover it.
00:37:28.742 [2024-09-29 16:45:29.116757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.742 [2024-09-29 16:45:29.116805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.742 qpair failed and we were unable to recover it.
00:37:28.742 [2024-09-29 16:45:29.116938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.742 [2024-09-29 16:45:29.116973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.742 qpair failed and we were unable to recover it.
00:37:28.742 [2024-09-29 16:45:29.117121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.742 [2024-09-29 16:45:29.117154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.742 qpair failed and we were unable to recover it.
00:37:28.742 [2024-09-29 16:45:29.117331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.742 [2024-09-29 16:45:29.117363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.742 qpair failed and we were unable to recover it.
00:37:28.742 [2024-09-29 16:45:29.117475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.742 [2024-09-29 16:45:29.117508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.742 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.117666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.117729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.117899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.117946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.118090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.118125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.118265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.118299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.118417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.118451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.118577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.118625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.118764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.118798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.118981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.119014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.119164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.119202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.119322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.119354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.119496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.119529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.119645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.119684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.119822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.119870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.120067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.120116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.120275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.120311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.120427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.120460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.120577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.120611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.120766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.120815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.120963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.121003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.121124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.121157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.121302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.121336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.121477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.121510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.121668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.121737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.121888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.121936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.122105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.122166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.122326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.122365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.122518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.122555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.122729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.122763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.122923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.122980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.123179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.123244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.123409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.123463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.123630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.123665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.123822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.123856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.124021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.124069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.124197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.124244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.124395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.124442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.124592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.124627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.124812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.124861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.124999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.125043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.125192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.125266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.125448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.125486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.125667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.125710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.125835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.125869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.126012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.126060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.126238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.126274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.126510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.126569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.126787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.126821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.126971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.127004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.127142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.127180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.127327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.127361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.127508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.127541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.127658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.127700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.127828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.127862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.128047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.128095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.128255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.128291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.128411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.128446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.128560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.128595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.128730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.128765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.128887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.128921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.129066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.129101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.129265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.129313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.129439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.129475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.129617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.129652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.129803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.129848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.129964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.129998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.130127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.130162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.130285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.130319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.130476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.130510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.130629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.130662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.130798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.130831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.130977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.131011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.131189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.131224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.131340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.131374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.131574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.131623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.131761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.131795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.743 [2024-09-29 16:45:29.131910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.743 [2024-09-29 16:45:29.131948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.743 qpair failed and we were unable to recover it.
00:37:28.744 [2024-09-29 16:45:29.132061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.744 [2024-09-29 16:45:29.132094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.744 qpair failed and we were unable to recover it.
00:37:28.744 [2024-09-29 16:45:29.132206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.744 [2024-09-29 16:45:29.132239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.744 qpair failed and we were unable to recover it.
00:37:28.744 [2024-09-29 16:45:29.132409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.744 [2024-09-29 16:45:29.132442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.744 qpair failed and we were unable to recover it.
00:37:28.744 [2024-09-29 16:45:29.132556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.744 [2024-09-29 16:45:29.132590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.744 qpair failed and we were unable to recover it.
00:37:28.744 [2024-09-29 16:45:29.132722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.744 [2024-09-29 16:45:29.132760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.744 qpair failed and we were unable to recover it.
00:37:28.744 [2024-09-29 16:45:29.132953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.744 [2024-09-29 16:45:29.133000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.744 qpair failed and we were unable to recover it.
00:37:28.744 [2024-09-29 16:45:29.133126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.744 [2024-09-29 16:45:29.133161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.744 qpair failed and we were unable to recover it.
00:37:28.744 [2024-09-29 16:45:29.133273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.744 [2024-09-29 16:45:29.133306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.744 qpair failed and we were unable to recover it.
00:37:28.744 [2024-09-29 16:45:29.133413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.744 [2024-09-29 16:45:29.133446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.744 qpair failed and we were unable to recover it.
00:37:28.744 [2024-09-29 16:45:29.133580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.744 [2024-09-29 16:45:29.133628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.744 qpair failed and we were unable to recover it.
00:37:28.744 [2024-09-29 16:45:29.133792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.744 [2024-09-29 16:45:29.133828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.744 qpair failed and we were unable to recover it.
00:37:28.744 [2024-09-29 16:45:29.133951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.744 [2024-09-29 16:45:29.133988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.744 qpair failed and we were unable to recover it.
00:37:28.744 [2024-09-29 16:45:29.134155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.744 [2024-09-29 16:45:29.134190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.744 qpair failed and we were unable to recover it.
00:37:28.744 [2024-09-29 16:45:29.134309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.744 [2024-09-29 16:45:29.134342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.744 qpair failed and we were unable to recover it.
00:37:28.744 [2024-09-29 16:45:29.134485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.744 [2024-09-29 16:45:29.134518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.744 qpair failed and we were unable to recover it.
00:37:28.744 [2024-09-29 16:45:29.134638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.744 [2024-09-29 16:45:29.134683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.744 qpair failed and we were unable to recover it.
00:37:28.744 [2024-09-29 16:45:29.134828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.744 [2024-09-29 16:45:29.134866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.744 qpair failed and we were unable to recover it.
00:37:28.744 [2024-09-29 16:45:29.135029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.744 [2024-09-29 16:45:29.135077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.744 qpair failed and we were unable to recover it.
00:37:28.744 [2024-09-29 16:45:29.135224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.744 [2024-09-29 16:45:29.135260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.744 qpair failed and we were unable to recover it.
00:37:28.744 [2024-09-29 16:45:29.135415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.744 [2024-09-29 16:45:29.135449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.744 qpair failed and we were unable to recover it.
00:37:28.744 [2024-09-29 16:45:29.135569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.744 [2024-09-29 16:45:29.135602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.744 qpair failed and we were unable to recover it.
00:37:28.744 [2024-09-29 16:45:29.135773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.744 [2024-09-29 16:45:29.135821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.744 qpair failed and we were unable to recover it.
00:37:28.744 [2024-09-29 16:45:29.135973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.744 [2024-09-29 16:45:29.136008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.744 qpair failed and we were unable to recover it.
00:37:28.744 [2024-09-29 16:45:29.136124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.744 [2024-09-29 16:45:29.136158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.744 qpair failed and we were unable to recover it.
00:37:28.744 [2024-09-29 16:45:29.136296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.744 [2024-09-29 16:45:29.136330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.744 qpair failed and we were unable to recover it.
00:37:28.744 [2024-09-29 16:45:29.136449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.744 [2024-09-29 16:45:29.136483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.744 qpair failed and we were unable to recover it.
00:37:28.744 [2024-09-29 16:45:29.136633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.744 [2024-09-29 16:45:29.136677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.744 qpair failed and we were unable to recover it.
00:37:28.744 [2024-09-29 16:45:29.136822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.744 [2024-09-29 16:45:29.136857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.744 qpair failed and we were unable to recover it.
00:37:28.744 [2024-09-29 16:45:29.136975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.744 [2024-09-29 16:45:29.137009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.744 qpair failed and we were unable to recover it.
00:37:28.744 [2024-09-29 16:45:29.137151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.744 [2024-09-29 16:45:29.137185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.744 qpair failed and we were unable to recover it.
00:37:28.744 [2024-09-29 16:45:29.137321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.744 [2024-09-29 16:45:29.137354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.744 qpair failed and we were unable to recover it.
00:37:28.744 [2024-09-29 16:45:29.137464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.744 [2024-09-29 16:45:29.137497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.744 qpair failed and we were unable to recover it. 00:37:28.744 [2024-09-29 16:45:29.137681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.744 [2024-09-29 16:45:29.137726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.744 qpair failed and we were unable to recover it. 00:37:28.744 [2024-09-29 16:45:29.137852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.744 [2024-09-29 16:45:29.137889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.744 qpair failed and we were unable to recover it. 00:37:28.744 [2024-09-29 16:45:29.138045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.744 [2024-09-29 16:45:29.138080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.744 qpair failed and we were unable to recover it. 00:37:28.744 [2024-09-29 16:45:29.138222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.744 [2024-09-29 16:45:29.138256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.744 qpair failed and we were unable to recover it. 
00:37:28.744 [2024-09-29 16:45:29.138424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.744 [2024-09-29 16:45:29.138458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.744 qpair failed and we were unable to recover it. 00:37:28.744 [2024-09-29 16:45:29.138600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.744 [2024-09-29 16:45:29.138633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.744 qpair failed and we were unable to recover it. 00:37:28.744 [2024-09-29 16:45:29.138763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.744 [2024-09-29 16:45:29.138798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.744 qpair failed and we were unable to recover it. 00:37:28.744 [2024-09-29 16:45:29.138947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.744 [2024-09-29 16:45:29.138987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.744 qpair failed and we were unable to recover it. 00:37:28.744 [2024-09-29 16:45:29.139136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.744 [2024-09-29 16:45:29.139169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.744 qpair failed and we were unable to recover it. 
00:37:28.744 [2024-09-29 16:45:29.139310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.744 [2024-09-29 16:45:29.139343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.744 qpair failed and we were unable to recover it. 00:37:28.744 [2024-09-29 16:45:29.139461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.744 [2024-09-29 16:45:29.139494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.744 qpair failed and we were unable to recover it. 00:37:28.744 [2024-09-29 16:45:29.139632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.744 [2024-09-29 16:45:29.139684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.744 qpair failed and we were unable to recover it. 00:37:28.744 [2024-09-29 16:45:29.139877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.744 [2024-09-29 16:45:29.139911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.744 qpair failed and we were unable to recover it. 00:37:28.744 [2024-09-29 16:45:29.140026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.744 [2024-09-29 16:45:29.140061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.744 qpair failed and we were unable to recover it. 
00:37:28.744 [2024-09-29 16:45:29.140203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.744 [2024-09-29 16:45:29.140238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.744 qpair failed and we were unable to recover it. 00:37:28.744 [2024-09-29 16:45:29.140412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.744 [2024-09-29 16:45:29.140446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.744 qpair failed and we were unable to recover it. 00:37:28.744 [2024-09-29 16:45:29.140557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.744 [2024-09-29 16:45:29.140591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.744 qpair failed and we were unable to recover it. 00:37:28.744 [2024-09-29 16:45:29.140715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.744 [2024-09-29 16:45:29.140749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.744 qpair failed and we were unable to recover it. 00:37:28.744 [2024-09-29 16:45:29.140889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.744 [2024-09-29 16:45:29.140936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.744 qpair failed and we were unable to recover it. 
00:37:28.744 [2024-09-29 16:45:29.141084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.744 [2024-09-29 16:45:29.141119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.744 qpair failed and we were unable to recover it. 00:37:28.744 [2024-09-29 16:45:29.141232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.744 [2024-09-29 16:45:29.141266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.744 qpair failed and we were unable to recover it. 00:37:28.744 [2024-09-29 16:45:29.141388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.744 [2024-09-29 16:45:29.141422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.744 qpair failed and we were unable to recover it. 00:37:28.744 [2024-09-29 16:45:29.141560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.744 [2024-09-29 16:45:29.141594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.744 qpair failed and we were unable to recover it. 00:37:28.744 [2024-09-29 16:45:29.141706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.744 [2024-09-29 16:45:29.141740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.744 qpair failed and we were unable to recover it. 
00:37:28.744 [2024-09-29 16:45:29.141850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.744 [2024-09-29 16:45:29.141883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.744 qpair failed and we were unable to recover it. 00:37:28.744 [2024-09-29 16:45:29.142054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.744 [2024-09-29 16:45:29.142101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.744 qpair failed and we were unable to recover it. 00:37:28.744 [2024-09-29 16:45:29.142229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.744 [2024-09-29 16:45:29.142265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.744 qpair failed and we were unable to recover it. 00:37:28.744 [2024-09-29 16:45:29.142387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.744 [2024-09-29 16:45:29.142423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.744 qpair failed and we were unable to recover it. 00:37:28.744 [2024-09-29 16:45:29.142533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.744 [2024-09-29 16:45:29.142567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.744 qpair failed and we were unable to recover it. 
00:37:28.744 [2024-09-29 16:45:29.142717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.744 [2024-09-29 16:45:29.142752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.744 qpair failed and we were unable to recover it. 00:37:28.744 [2024-09-29 16:45:29.142868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.744 [2024-09-29 16:45:29.142903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.744 qpair failed and we were unable to recover it. 00:37:28.744 [2024-09-29 16:45:29.143016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.744 [2024-09-29 16:45:29.143050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.744 qpair failed and we were unable to recover it. 00:37:28.744 [2024-09-29 16:45:29.143164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.744 [2024-09-29 16:45:29.143198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.744 qpair failed and we were unable to recover it. 00:37:28.744 [2024-09-29 16:45:29.143343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.744 [2024-09-29 16:45:29.143377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.744 qpair failed and we were unable to recover it. 
00:37:28.744 [2024-09-29 16:45:29.143497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.744 [2024-09-29 16:45:29.143531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.744 qpair failed and we were unable to recover it. 00:37:28.744 [2024-09-29 16:45:29.143663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.744 [2024-09-29 16:45:29.143724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.744 qpair failed and we were unable to recover it. 00:37:28.744 [2024-09-29 16:45:29.143857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.744 [2024-09-29 16:45:29.143893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.744 qpair failed and we were unable to recover it. 00:37:28.744 [2024-09-29 16:45:29.144022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.744 [2024-09-29 16:45:29.144058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.744 qpair failed and we were unable to recover it. 00:37:28.744 [2024-09-29 16:45:29.144196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.744 [2024-09-29 16:45:29.144230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.744 qpair failed and we were unable to recover it. 
00:37:28.744 [2024-09-29 16:45:29.144373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.744 [2024-09-29 16:45:29.144409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.744 qpair failed and we were unable to recover it. 00:37:28.744 [2024-09-29 16:45:29.144578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.745 [2024-09-29 16:45:29.144612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.745 qpair failed and we were unable to recover it. 00:37:28.745 [2024-09-29 16:45:29.144734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.745 [2024-09-29 16:45:29.144768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.745 qpair failed and we were unable to recover it. 00:37:28.745 [2024-09-29 16:45:29.144943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.745 [2024-09-29 16:45:29.144981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.745 qpair failed and we were unable to recover it. 00:37:28.745 [2024-09-29 16:45:29.145098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.745 [2024-09-29 16:45:29.145131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.745 qpair failed and we were unable to recover it. 
00:37:28.745 [2024-09-29 16:45:29.145279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.745 [2024-09-29 16:45:29.145314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.745 qpair failed and we were unable to recover it. 00:37:28.745 [2024-09-29 16:45:29.145465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.745 [2024-09-29 16:45:29.145500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.745 qpair failed and we were unable to recover it. 00:37:28.745 [2024-09-29 16:45:29.145636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.745 [2024-09-29 16:45:29.145678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.745 qpair failed and we were unable to recover it. 00:37:28.745 [2024-09-29 16:45:29.145831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.745 [2024-09-29 16:45:29.145870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.745 qpair failed and we were unable to recover it. 00:37:28.745 [2024-09-29 16:45:29.146061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.745 [2024-09-29 16:45:29.146095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.745 qpair failed and we were unable to recover it. 
00:37:28.745 [2024-09-29 16:45:29.146318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.745 [2024-09-29 16:45:29.146352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.745 qpair failed and we were unable to recover it. 00:37:28.745 [2024-09-29 16:45:29.146491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.745 [2024-09-29 16:45:29.146525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.745 qpair failed and we were unable to recover it. 00:37:28.745 [2024-09-29 16:45:29.146633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.745 [2024-09-29 16:45:29.146667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.745 qpair failed and we were unable to recover it. 00:37:28.745 [2024-09-29 16:45:29.146803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.745 [2024-09-29 16:45:29.146838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.745 qpair failed and we were unable to recover it. 00:37:28.745 [2024-09-29 16:45:29.146980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.745 [2024-09-29 16:45:29.147014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.745 qpair failed and we were unable to recover it. 
00:37:28.745 [2024-09-29 16:45:29.147152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.745 [2024-09-29 16:45:29.147202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.745 qpair failed and we were unable to recover it. 00:37:28.745 [2024-09-29 16:45:29.147368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.745 [2024-09-29 16:45:29.147403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.745 qpair failed and we were unable to recover it. 00:37:28.745 [2024-09-29 16:45:29.147581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.745 [2024-09-29 16:45:29.147617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.745 qpair failed and we were unable to recover it. 00:37:28.745 [2024-09-29 16:45:29.147759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.745 [2024-09-29 16:45:29.147807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.745 qpair failed and we were unable to recover it. 00:37:28.745 [2024-09-29 16:45:29.147937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.745 [2024-09-29 16:45:29.147973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.745 qpair failed and we were unable to recover it. 
00:37:28.745 [2024-09-29 16:45:29.148113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.745 [2024-09-29 16:45:29.148147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.745 qpair failed and we were unable to recover it. 00:37:28.745 [2024-09-29 16:45:29.148264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.745 [2024-09-29 16:45:29.148298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.745 qpair failed and we were unable to recover it. 00:37:28.745 [2024-09-29 16:45:29.148438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.745 [2024-09-29 16:45:29.148493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.745 qpair failed and we were unable to recover it. 00:37:28.745 [2024-09-29 16:45:29.148633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.745 [2024-09-29 16:45:29.148666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.745 qpair failed and we were unable to recover it. 00:37:28.745 [2024-09-29 16:45:29.148820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.745 [2024-09-29 16:45:29.148853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.745 qpair failed and we were unable to recover it. 
00:37:28.745 [2024-09-29 16:45:29.148992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.745 [2024-09-29 16:45:29.149043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.745 qpair failed and we were unable to recover it. 00:37:28.745 [2024-09-29 16:45:29.149284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.745 [2024-09-29 16:45:29.149320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.745 qpair failed and we were unable to recover it. 00:37:28.745 [2024-09-29 16:45:29.149444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.745 [2024-09-29 16:45:29.149481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.745 qpair failed and we were unable to recover it. 00:37:28.745 [2024-09-29 16:45:29.149668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.745 [2024-09-29 16:45:29.149732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.745 qpair failed and we were unable to recover it. 00:37:28.745 [2024-09-29 16:45:29.149896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.745 [2024-09-29 16:45:29.149934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.745 qpair failed and we were unable to recover it. 
00:37:28.745 [2024-09-29 16:45:29.150079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.745 [2024-09-29 16:45:29.150113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.745 qpair failed and we were unable to recover it. 00:37:28.745 [2024-09-29 16:45:29.150261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.745 [2024-09-29 16:45:29.150295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.745 qpair failed and we were unable to recover it. 00:37:28.745 [2024-09-29 16:45:29.150434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.745 [2024-09-29 16:45:29.150481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.745 qpair failed and we were unable to recover it. 00:37:28.745 [2024-09-29 16:45:29.150598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.745 [2024-09-29 16:45:29.150634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.745 qpair failed and we were unable to recover it. 00:37:28.745 [2024-09-29 16:45:29.150773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.745 [2024-09-29 16:45:29.150821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.745 qpair failed and we were unable to recover it. 
00:37:28.745 [2024-09-29 16:45:29.150977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.745 [2024-09-29 16:45:29.151012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.745 qpair failed and we were unable to recover it.
00:37:28.745 [2024-09-29 16:45:29.151151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.745 [2024-09-29 16:45:29.151185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.745 qpair failed and we were unable to recover it.
00:37:28.745 [2024-09-29 16:45:29.151351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.745 [2024-09-29 16:45:29.151384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.745 qpair failed and we were unable to recover it.
00:37:28.745 [2024-09-29 16:45:29.151603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.745 [2024-09-29 16:45:29.151642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.745 qpair failed and we were unable to recover it.
00:37:28.745 [2024-09-29 16:45:29.151796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.745 [2024-09-29 16:45:29.151830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.745 qpair failed and we were unable to recover it.
00:37:28.745 [2024-09-29 16:45:29.151940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.745 [2024-09-29 16:45:29.151974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.745 qpair failed and we were unable to recover it.
00:37:28.745 [2024-09-29 16:45:29.152128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.745 [2024-09-29 16:45:29.152161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.745 qpair failed and we were unable to recover it.
00:37:28.745 [2024-09-29 16:45:29.152277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.745 [2024-09-29 16:45:29.152310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.745 qpair failed and we were unable to recover it.
00:37:28.745 [2024-09-29 16:45:29.152457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.745 [2024-09-29 16:45:29.152494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.745 qpair failed and we were unable to recover it.
00:37:28.745 [2024-09-29 16:45:29.152622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.745 [2024-09-29 16:45:29.152658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.745 qpair failed and we were unable to recover it.
00:37:28.745 [2024-09-29 16:45:29.152822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.745 [2024-09-29 16:45:29.152857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.745 qpair failed and we were unable to recover it.
00:37:28.745 [2024-09-29 16:45:29.153014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.745 [2024-09-29 16:45:29.153048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.745 qpair failed and we were unable to recover it.
00:37:28.745 [2024-09-29 16:45:29.153206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.745 [2024-09-29 16:45:29.153244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.745 qpair failed and we were unable to recover it.
00:37:28.745 [2024-09-29 16:45:29.153457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.745 [2024-09-29 16:45:29.153499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.745 qpair failed and we were unable to recover it.
00:37:28.745 [2024-09-29 16:45:29.153686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.745 [2024-09-29 16:45:29.153748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.745 qpair failed and we were unable to recover it.
00:37:28.745 [2024-09-29 16:45:29.153875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.745 [2024-09-29 16:45:29.153910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.745 qpair failed and we were unable to recover it.
00:37:28.745 [2024-09-29 16:45:29.154035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.745 [2024-09-29 16:45:29.154070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.745 qpair failed and we were unable to recover it.
00:37:28.745 [2024-09-29 16:45:29.154225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.745 [2024-09-29 16:45:29.154277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.745 qpair failed and we were unable to recover it.
00:37:28.745 [2024-09-29 16:45:29.154495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.745 [2024-09-29 16:45:29.154528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.745 qpair failed and we were unable to recover it.
00:37:28.745 [2024-09-29 16:45:29.154669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.745 [2024-09-29 16:45:29.154722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.745 qpair failed and we were unable to recover it.
00:37:28.745 [2024-09-29 16:45:29.154856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.745 [2024-09-29 16:45:29.154892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.745 qpair failed and we were unable to recover it.
00:37:28.745 [2024-09-29 16:45:29.155052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.745 [2024-09-29 16:45:29.155100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.745 qpair failed and we were unable to recover it.
00:37:28.745 [2024-09-29 16:45:29.155334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.745 [2024-09-29 16:45:29.155392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.745 qpair failed and we were unable to recover it.
00:37:28.745 [2024-09-29 16:45:29.155554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.745 [2024-09-29 16:45:29.155592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.745 qpair failed and we were unable to recover it.
00:37:28.745 [2024-09-29 16:45:29.155742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.745 [2024-09-29 16:45:29.155776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.745 qpair failed and we were unable to recover it.
00:37:28.745 [2024-09-29 16:45:29.155945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.745 [2024-09-29 16:45:29.155997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.745 qpair failed and we were unable to recover it.
00:37:28.745 [2024-09-29 16:45:29.156135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.745 [2024-09-29 16:45:29.156182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.745 qpair failed and we were unable to recover it.
00:37:28.745 [2024-09-29 16:45:29.156470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.745 [2024-09-29 16:45:29.156511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.745 qpair failed and we were unable to recover it.
00:37:28.745 [2024-09-29 16:45:29.156682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.745 [2024-09-29 16:45:29.156737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.745 qpair failed and we were unable to recover it.
00:37:28.745 [2024-09-29 16:45:29.156881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.745 [2024-09-29 16:45:29.156916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.745 qpair failed and we were unable to recover it.
00:37:28.745 [2024-09-29 16:45:29.157068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.745 [2024-09-29 16:45:29.157106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.745 qpair failed and we were unable to recover it.
00:37:28.745 [2024-09-29 16:45:29.157401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.745 [2024-09-29 16:45:29.157459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.745 qpair failed and we were unable to recover it.
00:37:28.745 [2024-09-29 16:45:29.157600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.745 [2024-09-29 16:45:29.157634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.745 qpair failed and we were unable to recover it.
00:37:28.745 [2024-09-29 16:45:29.157775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.745 [2024-09-29 16:45:29.157809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.745 qpair failed and we were unable to recover it.
00:37:28.745 [2024-09-29 16:45:29.157970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.745 [2024-09-29 16:45:29.158007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.745 qpair failed and we were unable to recover it.
00:37:28.745 [2024-09-29 16:45:29.158203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.745 [2024-09-29 16:45:29.158240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.745 qpair failed and we were unable to recover it.
00:37:28.745 [2024-09-29 16:45:29.158464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.745 [2024-09-29 16:45:29.158518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.745 qpair failed and we were unable to recover it.
00:37:28.745 [2024-09-29 16:45:29.158688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.745 [2024-09-29 16:45:29.158731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.745 qpair failed and we were unable to recover it.
00:37:28.745 [2024-09-29 16:45:29.158842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.745 [2024-09-29 16:45:29.158876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.745 qpair failed and we were unable to recover it.
00:37:28.745 [2024-09-29 16:45:29.159034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.745 [2024-09-29 16:45:29.159095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.745 qpair failed and we were unable to recover it.
00:37:28.745 [2024-09-29 16:45:29.159279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.745 [2024-09-29 16:45:29.159316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.745 qpair failed and we were unable to recover it.
00:37:28.745 [2024-09-29 16:45:29.159486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.745 [2024-09-29 16:45:29.159524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.745 qpair failed and we were unable to recover it.
00:37:28.745 [2024-09-29 16:45:29.159707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.745 [2024-09-29 16:45:29.159743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.745 qpair failed and we were unable to recover it.
00:37:28.745 [2024-09-29 16:45:29.159864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.745 [2024-09-29 16:45:29.159898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.745 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.160069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.160134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.160307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.160362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.160473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.160509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.160651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.160693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.160890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.160942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.161182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.161242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.161447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.161503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.161666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.161706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.161882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.161917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.162061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.162101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.162256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.162312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.162495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.162533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.162728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.162775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.162934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.162971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.163111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.163183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.163395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.163451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.163617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.163650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.163792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.163840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.164035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.164075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.164349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.164411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.164526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.164560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.164728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.164763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.164889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.164943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.165142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.165176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.165329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.165364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.165484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.165519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.165692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.165740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.165862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.165898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.166056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.166089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.166236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.166269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.166405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.166442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.166569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.166603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.166781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.166817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.166960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.166993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.167177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.167211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.167386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.167424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.167582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.167620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.167788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.167823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.167963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.167997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.168170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.168203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.168357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.168409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.168588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.168625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.168775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.168809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.168919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.168952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.169145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.169200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.169383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.169449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.169606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.169638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.169800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.169835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.169941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.169973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.170157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.170230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.170401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.170456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.170570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.170605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.170727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.170763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.170953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.171015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.171302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.171344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.171507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.171545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.171722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.171758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.171904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.171939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.172073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.172111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.172268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.172305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.172459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.172497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.746 [2024-09-29 16:45:29.172729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.746 [2024-09-29 16:45:29.172777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.746 qpair failed and we were unable to recover it.
00:37:28.747 [2024-09-29 16:45:29.172902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.747 [2024-09-29 16:45:29.172938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.747 qpair failed and we were unable to recover it.
00:37:28.747 [2024-09-29 16:45:29.173135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.747 [2024-09-29 16:45:29.173186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.747 qpair failed and we were unable to recover it.
00:37:28.747 [2024-09-29 16:45:29.173309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.747 [2024-09-29 16:45:29.173343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.747 qpair failed and we were unable to recover it.
00:37:28.747 [2024-09-29 16:45:29.173498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.747 [2024-09-29 16:45:29.173532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.747 qpair failed and we were unable to recover it.
00:37:28.747 [2024-09-29 16:45:29.173699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.747 [2024-09-29 16:45:29.173752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.747 qpair failed and we were unable to recover it.
00:37:28.747 [2024-09-29 16:45:29.173868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.747 [2024-09-29 16:45:29.173903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.747 qpair failed and we were unable to recover it.
00:37:28.747 [2024-09-29 16:45:29.174031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.747 [2024-09-29 16:45:29.174065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.747 qpair failed and we were unable to recover it.
00:37:28.747 [2024-09-29 16:45:29.174204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.747 [2024-09-29 16:45:29.174238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.747 qpair failed and we were unable to recover it.
00:37:28.747 [2024-09-29 16:45:29.174385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.747 [2024-09-29 16:45:29.174419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.747 qpair failed and we were unable to recover it.
00:37:28.747 [2024-09-29 16:45:29.174532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.174566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.174714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.174762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.174907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.174987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.175131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.175182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.175361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.175407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 
00:37:28.747 [2024-09-29 16:45:29.175584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.175618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.175779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.175818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.175980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.176017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.176141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.176178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.176361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.176398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 
00:37:28.747 [2024-09-29 16:45:29.176565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.176602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.176759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.176806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.176957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.177010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.177147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.177186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.177467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.177534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 
00:37:28.747 [2024-09-29 16:45:29.177697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.177753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.177931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.177982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.178179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.178238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.178395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.178457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.178592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.178629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 
00:37:28.747 [2024-09-29 16:45:29.178836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.178885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.179106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.179164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.179379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.179433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.179600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.179635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.179803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.179838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 
00:37:28.747 [2024-09-29 16:45:29.179975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.180027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.180159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.180211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.180450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.180506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.180667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.180730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.180863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.180915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 
00:37:28.747 [2024-09-29 16:45:29.181039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.181072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.181242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.181277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.181424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.181459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.181641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.181689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.181847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.181881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 
00:37:28.747 [2024-09-29 16:45:29.182046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.182080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.182238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.182286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.182438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.182473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.182629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.182663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.182820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.182876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 
00:37:28.747 [2024-09-29 16:45:29.183094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.183157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.183401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.183463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.183628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.183663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.183861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.183898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.184079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.184117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 
00:37:28.747 [2024-09-29 16:45:29.184284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.184336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.184511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.184548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.184739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.184792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.184954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.185020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.185216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.185279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 
00:37:28.747 [2024-09-29 16:45:29.185417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.185451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.185568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.185601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.185760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.185812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.185949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.185987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.186248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.186325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 
00:37:28.747 [2024-09-29 16:45:29.186493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.186531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.186687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.186726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.186898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.186951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.187109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.187188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.187354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.187413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 
00:37:28.747 [2024-09-29 16:45:29.187544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.187581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.187783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.187817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.187953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.187990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.747 [2024-09-29 16:45:29.188146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.747 [2024-09-29 16:45:29.188182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.747 qpair failed and we were unable to recover it. 00:37:28.748 [2024-09-29 16:45:29.188329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.748 [2024-09-29 16:45:29.188366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.748 qpair failed and we were unable to recover it. 
00:37:28.748 [2024-09-29 16:45:29.188534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.748 [2024-09-29 16:45:29.188570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.748 qpair failed and we were unable to recover it. 00:37:28.748 [2024-09-29 16:45:29.188695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.748 [2024-09-29 16:45:29.188731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.748 qpair failed and we were unable to recover it. 00:37:28.748 [2024-09-29 16:45:29.188886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.748 [2024-09-29 16:45:29.188920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.748 qpair failed and we were unable to recover it. 00:37:28.748 [2024-09-29 16:45:29.189061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.748 [2024-09-29 16:45:29.189096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.748 qpair failed and we were unable to recover it. 00:37:28.748 [2024-09-29 16:45:29.189239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.748 [2024-09-29 16:45:29.189276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.748 qpair failed and we were unable to recover it. 
00:37:28.748 [2024-09-29 16:45:29.189413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.748 [2024-09-29 16:45:29.189468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.748 qpair failed and we were unable to recover it. 00:37:28.748 [2024-09-29 16:45:29.189654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.748 [2024-09-29 16:45:29.189703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.748 qpair failed and we were unable to recover it. 00:37:28.748 [2024-09-29 16:45:29.189843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.748 [2024-09-29 16:45:29.189878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.748 qpair failed and we were unable to recover it. 00:37:28.748 [2024-09-29 16:45:29.190044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.748 [2024-09-29 16:45:29.190078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.748 qpair failed and we were unable to recover it. 00:37:28.748 [2024-09-29 16:45:29.190212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.748 [2024-09-29 16:45:29.190245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.748 qpair failed and we were unable to recover it. 
00:37:28.748 [2024-09-29 16:45:29.190397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.190457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.190599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.190633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.190753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.190788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.190925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.190978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.191104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.191157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.191304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.191338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.191488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.191524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.191669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.191709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.191888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.191941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.192228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.192288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.192523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.192579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.192715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.192768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.192926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.192963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.193112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.193149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.193316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.193349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.193493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.193529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.193707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.193771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.193921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.193979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.194157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.194191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.194402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.194459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.194646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.194690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.194816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.194850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.194970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.195004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.195200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.195244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.195374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.195425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.195585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.195624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.195766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.195800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.195938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.195971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.196088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.196146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.196363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.196418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.196577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.196610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.196750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.196784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.196907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.196958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.197172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.197209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.197370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.197414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.197574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.197610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.197811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.197860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.198023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.198077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.198238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.198293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.198419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.198457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.198614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.198648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.198840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.198888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.199037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.199110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.199332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.199365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.199599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.199637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.199809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.199843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.200005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.200041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.200294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.200330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.200497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.200534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.200696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.200749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.200907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.200956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.201100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.201140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.201390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.201449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.201622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.201656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.201796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.201831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.201992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.202039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.202160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.202195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.202305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.202338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.202483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.202516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.202664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.202707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.202858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.202891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.203035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.748 [2024-09-29 16:45:29.203073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.748 qpair failed and we were unable to recover it.
00:37:28.748 [2024-09-29 16:45:29.203261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.203298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.203587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.203654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.203838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.203871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.204056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.204098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.204376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.204434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.204595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.204627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.204763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.204796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.204957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.204993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.205176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.205212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.205457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.205525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.205641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.205686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.205878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.205911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.206115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.206174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.206401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.206458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.206629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.206661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.206818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.206865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.207023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.207077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.207218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.207315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.207509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.207564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.207707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.207741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.207864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.207898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.208083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.208120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.208254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.208291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.208509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.208546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.208745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.208781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.208899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.208933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.209139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.209172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.209370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.209433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.209604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.209640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.209847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.209882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.210012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.210048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.210205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.210242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.210444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.210504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.210652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.210696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.210829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.210862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.211036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.211070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.211202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.211239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.211416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.211452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.211636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.211696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.211813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.211846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.211956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.211989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.212183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.212259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.212422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.212459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.212586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.212623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.212815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.212863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.212980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.213015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.213156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.213197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.213342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.213376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.213520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.213553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.213714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.213762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.213908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.213942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.214077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.214110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.214251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.749 [2024-09-29 16:45:29.214285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.749 qpair failed and we were unable to recover it.
00:37:28.749 [2024-09-29 16:45:29.214432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.749 [2024-09-29 16:45:29.214465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.749 qpair failed and we were unable to recover it. 00:37:28.749 [2024-09-29 16:45:29.214624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.749 [2024-09-29 16:45:29.214661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.749 qpair failed and we were unable to recover it. 00:37:28.749 [2024-09-29 16:45:29.214824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.749 [2024-09-29 16:45:29.214863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.749 qpair failed and we were unable to recover it. 00:37:28.749 [2024-09-29 16:45:29.215035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.749 [2024-09-29 16:45:29.215087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.749 qpair failed and we were unable to recover it. 00:37:28.749 [2024-09-29 16:45:29.215252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.749 [2024-09-29 16:45:29.215304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.749 qpair failed and we were unable to recover it. 
00:37:28.749 [2024-09-29 16:45:29.215418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.749 [2024-09-29 16:45:29.215452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.749 qpair failed and we were unable to recover it. 00:37:28.749 [2024-09-29 16:45:29.215591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.749 [2024-09-29 16:45:29.215626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.749 qpair failed and we were unable to recover it. 00:37:28.749 [2024-09-29 16:45:29.215779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.749 [2024-09-29 16:45:29.215813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.749 qpair failed and we were unable to recover it. 00:37:28.749 [2024-09-29 16:45:29.215933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.749 [2024-09-29 16:45:29.215967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.749 qpair failed and we were unable to recover it. 00:37:28.749 [2024-09-29 16:45:29.216104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.749 [2024-09-29 16:45:29.216137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.749 qpair failed and we were unable to recover it. 
00:37:28.749 [2024-09-29 16:45:29.216249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.749 [2024-09-29 16:45:29.216281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.749 qpair failed and we were unable to recover it. 00:37:28.749 [2024-09-29 16:45:29.216388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.749 [2024-09-29 16:45:29.216422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.749 qpair failed and we were unable to recover it. 00:37:28.749 [2024-09-29 16:45:29.216537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.749 [2024-09-29 16:45:29.216569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.749 qpair failed and we were unable to recover it. 00:37:28.749 [2024-09-29 16:45:29.216732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.749 [2024-09-29 16:45:29.216765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.749 qpair failed and we were unable to recover it. 00:37:28.749 [2024-09-29 16:45:29.216902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.749 [2024-09-29 16:45:29.216960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.749 qpair failed and we were unable to recover it. 
00:37:28.749 [2024-09-29 16:45:29.217106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.749 [2024-09-29 16:45:29.217159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.749 qpair failed and we were unable to recover it. 00:37:28.749 [2024-09-29 16:45:29.217350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.749 [2024-09-29 16:45:29.217402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.749 qpair failed and we were unable to recover it. 00:37:28.749 [2024-09-29 16:45:29.217519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.749 [2024-09-29 16:45:29.217553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.749 qpair failed and we were unable to recover it. 00:37:28.749 [2024-09-29 16:45:29.217696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.749 [2024-09-29 16:45:29.217731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.749 qpair failed and we were unable to recover it. 00:37:28.749 [2024-09-29 16:45:29.217921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.749 [2024-09-29 16:45:29.217971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.749 qpair failed and we were unable to recover it. 
00:37:28.750 [2024-09-29 16:45:29.218150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.218183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 00:37:28.750 [2024-09-29 16:45:29.218323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.218356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 00:37:28.750 [2024-09-29 16:45:29.218467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.218499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 00:37:28.750 [2024-09-29 16:45:29.218669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.218723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 00:37:28.750 [2024-09-29 16:45:29.218878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.218928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 
00:37:28.750 [2024-09-29 16:45:29.219123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.219158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 00:37:28.750 [2024-09-29 16:45:29.219280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.219316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 00:37:28.750 [2024-09-29 16:45:29.219446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.219484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 00:37:28.750 [2024-09-29 16:45:29.219631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.219683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 00:37:28.750 [2024-09-29 16:45:29.219824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.219860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 
00:37:28.750 [2024-09-29 16:45:29.220008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.220044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 00:37:28.750 [2024-09-29 16:45:29.220191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.220228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 00:37:28.750 [2024-09-29 16:45:29.220453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.220507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 00:37:28.750 [2024-09-29 16:45:29.220638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.220682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 00:37:28.750 [2024-09-29 16:45:29.220844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.220906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 
00:37:28.750 [2024-09-29 16:45:29.221020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.221053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 00:37:28.750 [2024-09-29 16:45:29.221209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.221262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 00:37:28.750 [2024-09-29 16:45:29.221400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.221452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 00:37:28.750 [2024-09-29 16:45:29.221573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.221607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 00:37:28.750 [2024-09-29 16:45:29.221751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.221785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 
00:37:28.750 [2024-09-29 16:45:29.221951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.221984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 00:37:28.750 [2024-09-29 16:45:29.222130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.222163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 00:37:28.750 [2024-09-29 16:45:29.222309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.222343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 00:37:28.750 [2024-09-29 16:45:29.222520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.222554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 00:37:28.750 [2024-09-29 16:45:29.222695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.222742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 
00:37:28.750 [2024-09-29 16:45:29.222938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.222986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 00:37:28.750 [2024-09-29 16:45:29.223106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.223142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 00:37:28.750 [2024-09-29 16:45:29.223258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.223292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 00:37:28.750 [2024-09-29 16:45:29.223434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.223468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 00:37:28.750 [2024-09-29 16:45:29.223652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.223694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 
00:37:28.750 [2024-09-29 16:45:29.223819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.223864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 00:37:28.750 [2024-09-29 16:45:29.224026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.224065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 00:37:28.750 [2024-09-29 16:45:29.224266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.224303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 00:37:28.750 [2024-09-29 16:45:29.224432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.224470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 00:37:28.750 [2024-09-29 16:45:29.224607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.224646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 
00:37:28.750 [2024-09-29 16:45:29.224846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.224880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 00:37:28.750 [2024-09-29 16:45:29.225131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.225169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 00:37:28.750 [2024-09-29 16:45:29.225332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.225400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 00:37:28.750 [2024-09-29 16:45:29.225538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.225576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 00:37:28.750 [2024-09-29 16:45:29.225732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.225769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 
00:37:28.750 [2024-09-29 16:45:29.225939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.225973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 00:37:28.750 [2024-09-29 16:45:29.226103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.226157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 00:37:28.750 [2024-09-29 16:45:29.226282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.226335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 00:37:28.750 [2024-09-29 16:45:29.226446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.226479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 00:37:28.750 [2024-09-29 16:45:29.226599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.226633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 
00:37:28.750 [2024-09-29 16:45:29.226760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.226796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 00:37:28.750 [2024-09-29 16:45:29.226908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.226941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 00:37:28.750 [2024-09-29 16:45:29.227096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.227130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 00:37:28.750 [2024-09-29 16:45:29.227268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.227306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 00:37:28.750 [2024-09-29 16:45:29.227425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.227458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 
00:37:28.750 [2024-09-29 16:45:29.227597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.227632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 00:37:28.750 [2024-09-29 16:45:29.227815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.227849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 00:37:28.750 [2024-09-29 16:45:29.227993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.228026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 00:37:28.750 [2024-09-29 16:45:29.228200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.228234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 00:37:28.750 [2024-09-29 16:45:29.228395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.750 [2024-09-29 16:45:29.228447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.750 qpair failed and we were unable to recover it. 
00:37:28.750 [2024-09-29 16:45:29.228586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.750 [2024-09-29 16:45:29.228620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.750 qpair failed and we were unable to recover it.
00:37:28.750 [2024-09-29 16:45:29.228767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.750 [2024-09-29 16:45:29.228801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.750 qpair failed and we were unable to recover it.
00:37:28.750 [2024-09-29 16:45:29.228931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.750 [2024-09-29 16:45:29.228985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.750 qpair failed and we were unable to recover it.
00:37:28.750 [2024-09-29 16:45:29.229176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.750 [2024-09-29 16:45:29.229228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.750 qpair failed and we were unable to recover it.
00:37:28.750 [2024-09-29 16:45:29.229355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.750 [2024-09-29 16:45:29.229408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.750 qpair failed and we were unable to recover it.
00:37:28.750 [2024-09-29 16:45:29.229550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.750 [2024-09-29 16:45:29.229584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.750 qpair failed and we were unable to recover it.
00:37:28.750 [2024-09-29 16:45:29.229699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.750 [2024-09-29 16:45:29.229734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.750 qpair failed and we were unable to recover it.
00:37:28.750 [2024-09-29 16:45:29.229906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.750 [2024-09-29 16:45:29.229954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.750 qpair failed and we were unable to recover it.
00:37:28.750 [2024-09-29 16:45:29.230073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.750 [2024-09-29 16:45:29.230109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.750 qpair failed and we were unable to recover it.
00:37:28.750 [2024-09-29 16:45:29.230247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.750 [2024-09-29 16:45:29.230281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.750 qpair failed and we were unable to recover it.
00:37:28.750 [2024-09-29 16:45:29.230425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.750 [2024-09-29 16:45:29.230460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.750 qpair failed and we were unable to recover it.
00:37:28.750 [2024-09-29 16:45:29.230610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.750 [2024-09-29 16:45:29.230644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.750 qpair failed and we were unable to recover it.
00:37:28.750 [2024-09-29 16:45:29.230812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.750 [2024-09-29 16:45:29.230859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.750 qpair failed and we were unable to recover it.
00:37:28.750 [2024-09-29 16:45:29.231030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.750 [2024-09-29 16:45:29.231070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.750 qpair failed and we were unable to recover it.
00:37:28.750 [2024-09-29 16:45:29.231202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.750 [2024-09-29 16:45:29.231239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.750 qpair failed and we were unable to recover it.
00:37:28.750 [2024-09-29 16:45:29.231390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.750 [2024-09-29 16:45:29.231427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.750 qpair failed and we were unable to recover it.
00:37:28.750 [2024-09-29 16:45:29.231595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.750 [2024-09-29 16:45:29.231630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.750 qpair failed and we were unable to recover it.
00:37:28.750 [2024-09-29 16:45:29.231790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.750 [2024-09-29 16:45:29.231824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.750 qpair failed and we were unable to recover it.
00:37:28.750 [2024-09-29 16:45:29.231956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.750 [2024-09-29 16:45:29.232008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.750 qpair failed and we were unable to recover it.
00:37:28.750 [2024-09-29 16:45:29.232168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.750 [2024-09-29 16:45:29.232219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.750 qpair failed and we were unable to recover it.
00:37:28.750 [2024-09-29 16:45:29.232422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.750 [2024-09-29 16:45:29.232470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.750 qpair failed and we were unable to recover it.
00:37:28.750 [2024-09-29 16:45:29.232595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.750 [2024-09-29 16:45:29.232631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.750 qpair failed and we were unable to recover it.
00:37:28.750 [2024-09-29 16:45:29.232776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.750 [2024-09-29 16:45:29.232812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.750 qpair failed and we were unable to recover it.
00:37:28.750 [2024-09-29 16:45:29.232952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.750 [2024-09-29 16:45:29.232985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.750 qpair failed and we were unable to recover it.
00:37:28.750 [2024-09-29 16:45:29.233126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.750 [2024-09-29 16:45:29.233161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.750 qpair failed and we were unable to recover it.
00:37:28.750 [2024-09-29 16:45:29.233274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.751 [2024-09-29 16:45:29.233309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:28.751 qpair failed and we were unable to recover it.
00:37:28.751 [2024-09-29 16:45:29.233502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.751 [2024-09-29 16:45:29.233555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.751 qpair failed and we were unable to recover it.
00:37:28.751 [2024-09-29 16:45:29.233680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.751 [2024-09-29 16:45:29.233716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.751 qpair failed and we were unable to recover it.
00:37:28.751 [2024-09-29 16:45:29.233903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.751 [2024-09-29 16:45:29.233955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.751 qpair failed and we were unable to recover it.
00:37:28.751 [2024-09-29 16:45:29.234109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.751 [2024-09-29 16:45:29.234160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.751 qpair failed and we were unable to recover it.
00:37:28.751 [2024-09-29 16:45:29.234290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.751 [2024-09-29 16:45:29.234329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.751 qpair failed and we were unable to recover it.
00:37:28.751 [2024-09-29 16:45:29.234485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.751 [2024-09-29 16:45:29.234518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.751 qpair failed and we were unable to recover it.
00:37:28.751 [2024-09-29 16:45:29.234665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.751 [2024-09-29 16:45:29.234709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.751 qpair failed and we were unable to recover it.
00:37:28.751 [2024-09-29 16:45:29.234845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.751 [2024-09-29 16:45:29.234897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.751 qpair failed and we were unable to recover it.
00:37:28.751 [2024-09-29 16:45:29.235013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.751 [2024-09-29 16:45:29.235046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.751 qpair failed and we were unable to recover it.
00:37:28.751 [2024-09-29 16:45:29.235166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.751 [2024-09-29 16:45:29.235201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.751 qpair failed and we were unable to recover it.
00:37:28.751 [2024-09-29 16:45:29.235360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.751 [2024-09-29 16:45:29.235409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.751 qpair failed and we were unable to recover it.
00:37:28.751 [2024-09-29 16:45:29.235558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.751 [2024-09-29 16:45:29.235594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.751 qpair failed and we were unable to recover it.
00:37:28.751 [2024-09-29 16:45:29.235742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.751 [2024-09-29 16:45:29.235777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.751 qpair failed and we were unable to recover it.
00:37:28.751 [2024-09-29 16:45:29.235965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.751 [2024-09-29 16:45:29.236003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.751 qpair failed and we were unable to recover it.
00:37:28.751 [2024-09-29 16:45:29.236154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.751 [2024-09-29 16:45:29.236203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.751 qpair failed and we were unable to recover it.
00:37:28.751 [2024-09-29 16:45:29.236366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.751 [2024-09-29 16:45:29.236403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.751 qpair failed and we were unable to recover it.
00:37:28.751 [2024-09-29 16:45:29.236567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.751 [2024-09-29 16:45:29.236600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.751 qpair failed and we were unable to recover it.
00:37:28.751 [2024-09-29 16:45:29.236720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.751 [2024-09-29 16:45:29.236755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.751 qpair failed and we were unable to recover it.
00:37:28.751 [2024-09-29 16:45:29.236863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.751 [2024-09-29 16:45:29.236897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.751 qpair failed and we were unable to recover it.
00:37:28.751 [2024-09-29 16:45:29.237083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.751 [2024-09-29 16:45:29.237119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.751 qpair failed and we were unable to recover it.
00:37:28.751 [2024-09-29 16:45:29.237375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.751 [2024-09-29 16:45:29.237427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.751 qpair failed and we were unable to recover it.
00:37:28.751 [2024-09-29 16:45:29.237597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.751 [2024-09-29 16:45:29.237637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.751 qpair failed and we were unable to recover it.
00:37:28.751 [2024-09-29 16:45:29.237818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.751 [2024-09-29 16:45:29.237854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.751 qpair failed and we were unable to recover it.
00:37:28.751 [2024-09-29 16:45:29.238049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.751 [2024-09-29 16:45:29.238109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.751 qpair failed and we were unable to recover it.
00:37:28.751 [2024-09-29 16:45:29.238318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.751 [2024-09-29 16:45:29.238371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.751 qpair failed and we were unable to recover it.
00:37:28.751 [2024-09-29 16:45:29.238489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.751 [2024-09-29 16:45:29.238524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.751 qpair failed and we were unable to recover it.
00:37:28.751 [2024-09-29 16:45:29.238668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.751 [2024-09-29 16:45:29.238709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.751 qpair failed and we were unable to recover it.
00:37:28.751 [2024-09-29 16:45:29.238840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.751 [2024-09-29 16:45:29.238875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.751 qpair failed and we were unable to recover it.
00:37:28.751 [2024-09-29 16:45:29.239031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.751 [2024-09-29 16:45:29.239068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.751 qpair failed and we were unable to recover it.
00:37:28.751 [2024-09-29 16:45:29.239211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.751 [2024-09-29 16:45:29.239244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.751 qpair failed and we were unable to recover it.
00:37:28.751 [2024-09-29 16:45:29.239533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.751 [2024-09-29 16:45:29.239590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.751 qpair failed and we were unable to recover it.
00:37:28.751 [2024-09-29 16:45:29.239761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.751 [2024-09-29 16:45:29.239795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.751 qpair failed and we were unable to recover it.
00:37:28.751 [2024-09-29 16:45:29.239987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.751 [2024-09-29 16:45:29.240025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.751 qpair failed and we were unable to recover it.
00:37:28.751 [2024-09-29 16:45:29.240185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.751 [2024-09-29 16:45:29.240222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.751 qpair failed and we were unable to recover it.
00:37:28.751 [2024-09-29 16:45:29.240477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.751 [2024-09-29 16:45:29.240538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.751 qpair failed and we were unable to recover it.
00:37:28.752 [2024-09-29 16:45:29.240761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.752 [2024-09-29 16:45:29.240796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.752 qpair failed and we were unable to recover it.
00:37:28.752 [2024-09-29 16:45:29.240938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.752 [2024-09-29 16:45:29.240991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.752 qpair failed and we were unable to recover it.
00:37:28.752 [2024-09-29 16:45:29.241193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.752 [2024-09-29 16:45:29.241256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.752 qpair failed and we were unable to recover it.
00:37:28.752 [2024-09-29 16:45:29.241469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.752 [2024-09-29 16:45:29.241528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.752 qpair failed and we were unable to recover it.
00:37:28.752 [2024-09-29 16:45:29.241693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.752 [2024-09-29 16:45:29.241727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.752 qpair failed and we were unable to recover it.
00:37:28.752 [2024-09-29 16:45:29.241848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.752 [2024-09-29 16:45:29.241881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.752 qpair failed and we were unable to recover it.
00:37:28.752 [2024-09-29 16:45:29.242064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.752 [2024-09-29 16:45:29.242101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.752 qpair failed and we were unable to recover it.
00:37:28.752 [2024-09-29 16:45:29.242262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.752 [2024-09-29 16:45:29.242298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.752 qpair failed and we were unable to recover it.
00:37:28.752 [2024-09-29 16:45:29.242527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.752 [2024-09-29 16:45:29.242584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.752 qpair failed and we were unable to recover it.
00:37:28.752 [2024-09-29 16:45:29.242768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.752 [2024-09-29 16:45:29.242816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.752 qpair failed and we were unable to recover it.
00:37:28.752 [2024-09-29 16:45:29.242986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.752 [2024-09-29 16:45:29.243044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.752 qpair failed and we were unable to recover it.
00:37:28.752 [2024-09-29 16:45:29.243212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.752 [2024-09-29 16:45:29.243264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.752 qpair failed and we were unable to recover it.
00:37:28.752 [2024-09-29 16:45:29.243542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.752 [2024-09-29 16:45:29.243608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.752 qpair failed and we were unable to recover it.
00:37:28.752 [2024-09-29 16:45:29.243767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.752 [2024-09-29 16:45:29.243819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.752 qpair failed and we were unable to recover it.
00:37:28.752 [2024-09-29 16:45:29.243985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.752 [2024-09-29 16:45:29.244038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.752 qpair failed and we were unable to recover it.
00:37:28.752 [2024-09-29 16:45:29.244198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.752 [2024-09-29 16:45:29.244250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.752 qpair failed and we were unable to recover it.
00:37:28.752 [2024-09-29 16:45:29.244475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.752 [2024-09-29 16:45:29.244530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.752 qpair failed and we were unable to recover it.
00:37:28.752 [2024-09-29 16:45:29.244687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.752 [2024-09-29 16:45:29.244721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.752 qpair failed and we were unable to recover it.
00:37:28.752 [2024-09-29 16:45:29.244850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.752 [2024-09-29 16:45:29.244898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.752 qpair failed and we were unable to recover it.
00:37:28.752 [2024-09-29 16:45:29.245062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.752 [2024-09-29 16:45:29.245117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.752 qpair failed and we were unable to recover it.
00:37:28.752 [2024-09-29 16:45:29.245347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.752 [2024-09-29 16:45:29.245424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.752 qpair failed and we were unable to recover it.
00:37:28.752 [2024-09-29 16:45:29.245597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.752 [2024-09-29 16:45:29.245630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.752 qpair failed and we were unable to recover it.
00:37:28.752 [2024-09-29 16:45:29.245787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.752 [2024-09-29 16:45:29.245821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.752 qpair failed and we were unable to recover it.
00:37:28.752 [2024-09-29 16:45:29.245940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.752 [2024-09-29 16:45:29.245974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.752 qpair failed and we were unable to recover it.
00:37:28.752 [2024-09-29 16:45:29.246158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.752 [2024-09-29 16:45:29.246230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.752 qpair failed and we were unable to recover it.
00:37:28.752 [2024-09-29 16:45:29.246464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.752 [2024-09-29 16:45:29.246523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.752 qpair failed and we were unable to recover it.
00:37:28.752 [2024-09-29 16:45:29.246743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.752 [2024-09-29 16:45:29.246776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.752 qpair failed and we were unable to recover it.
00:37:28.752 [2024-09-29 16:45:29.246933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.752 [2024-09-29 16:45:29.246987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.752 qpair failed and we were unable to recover it.
00:37:28.752 [2024-09-29 16:45:29.247108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.752 [2024-09-29 16:45:29.247145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.752 qpair failed and we were unable to recover it.
00:37:28.752 [2024-09-29 16:45:29.247326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.752 [2024-09-29 16:45:29.247364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.752 qpair failed and we were unable to recover it.
00:37:28.752 [2024-09-29 16:45:29.247528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.752 [2024-09-29 16:45:29.247567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.752 qpair failed and we were unable to recover it.
00:37:28.752 [2024-09-29 16:45:29.247723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.752 [2024-09-29 16:45:29.247757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.752 qpair failed and we were unable to recover it.
00:37:28.752 [2024-09-29 16:45:29.247872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.752 [2024-09-29 16:45:29.247905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.752 qpair failed and we were unable to recover it.
00:37:28.752 [2024-09-29 16:45:29.248121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.752 [2024-09-29 16:45:29.248179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.752 qpair failed and we were unable to recover it.
00:37:28.752 [2024-09-29 16:45:29.248315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.752 [2024-09-29 16:45:29.248365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.752 qpair failed and we were unable to recover it.
00:37:28.752 [2024-09-29 16:45:29.248511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.752 [2024-09-29 16:45:29.248548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.752 qpair failed and we were unable to recover it.
00:37:28.752 [2024-09-29 16:45:29.248685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.752 [2024-09-29 16:45:29.248748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.752 qpair failed and we were unable to recover it.
00:37:28.752 [2024-09-29 16:45:29.248873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.752 [2024-09-29 16:45:29.248907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.752 qpair failed and we were unable to recover it.
00:37:28.752 [2024-09-29 16:45:29.249101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.753 [2024-09-29 16:45:29.249167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.753 qpair failed and we were unable to recover it.
00:37:28.753 [2024-09-29 16:45:29.249366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.753 [2024-09-29 16:45:29.249425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.753 qpair failed and we were unable to recover it.
00:37:28.753 [2024-09-29 16:45:29.249600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.753 [2024-09-29 16:45:29.249635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.753 qpair failed and we were unable to recover it.
00:37:28.753 [2024-09-29 16:45:29.249837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.753 [2024-09-29 16:45:29.249891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.753 qpair failed and we were unable to recover it.
00:37:28.753 [2024-09-29 16:45:29.250103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.753 [2024-09-29 16:45:29.250137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.753 qpair failed and we were unable to recover it.
00:37:28.753 [2024-09-29 16:45:29.250281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.753 [2024-09-29 16:45:29.250315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.753 qpair failed and we were unable to recover it.
00:37:28.753 [2024-09-29 16:45:29.250461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.753 [2024-09-29 16:45:29.250496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.753 qpair failed and we were unable to recover it.
00:37:28.753 [2024-09-29 16:45:29.250644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.753 [2024-09-29 16:45:29.250684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.753 qpair failed and we were unable to recover it.
00:37:28.753 [2024-09-29 16:45:29.250832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.753 [2024-09-29 16:45:29.250865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.753 qpair failed and we were unable to recover it.
00:37:28.753 [2024-09-29 16:45:29.251031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.753 [2024-09-29 16:45:29.251070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.753 qpair failed and we were unable to recover it.
00:37:28.753 [2024-09-29 16:45:29.251222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.753 [2024-09-29 16:45:29.251260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.753 qpair failed and we were unable to recover it.
00:37:28.753 [2024-09-29 16:45:29.251442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.753 [2024-09-29 16:45:29.251479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:28.753 qpair failed and we were unable to recover it.
00:37:28.753 [2024-09-29 16:45:29.251639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.753 [2024-09-29 16:45:29.251681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:28.753 qpair failed and we were unable to recover it.
00:37:28.753 [2024-09-29 16:45:29.251838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.753 [2024-09-29 16:45:29.251886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.753 qpair failed and we were unable to recover it.
00:37:28.753 [2024-09-29 16:45:29.252026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.753 [2024-09-29 16:45:29.252065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.753 qpair failed and we were unable to recover it.
00:37:28.753 [2024-09-29 16:45:29.252199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:28.753 [2024-09-29 16:45:29.252236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:28.753 qpair failed and we were unable to recover it.
00:37:28.753 [2024-09-29 16:45:29.252421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.753 [2024-09-29 16:45:29.252458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.753 qpair failed and we were unable to recover it. 00:37:28.753 [2024-09-29 16:45:29.252608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.753 [2024-09-29 16:45:29.252644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.753 qpair failed and we were unable to recover it. 00:37:28.753 [2024-09-29 16:45:29.252803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.753 [2024-09-29 16:45:29.252838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.753 qpair failed and we were unable to recover it. 00:37:28.753 [2024-09-29 16:45:29.253022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.753 [2024-09-29 16:45:29.253092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.753 qpair failed and we were unable to recover it. 00:37:28.753 [2024-09-29 16:45:29.253266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.753 [2024-09-29 16:45:29.253320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.753 qpair failed and we were unable to recover it. 
00:37:28.753 [2024-09-29 16:45:29.253448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.753 [2024-09-29 16:45:29.253486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.753 qpair failed and we were unable to recover it. 00:37:28.753 [2024-09-29 16:45:29.253639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.753 [2024-09-29 16:45:29.253682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.753 qpair failed and we were unable to recover it. 00:37:28.753 [2024-09-29 16:45:29.253836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.753 [2024-09-29 16:45:29.253870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.753 qpair failed and we were unable to recover it. 00:37:28.753 [2024-09-29 16:45:29.254065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.753 [2024-09-29 16:45:29.254104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.753 qpair failed and we were unable to recover it. 00:37:28.753 [2024-09-29 16:45:29.254338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.753 [2024-09-29 16:45:29.254396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.753 qpair failed and we were unable to recover it. 
00:37:28.753 [2024-09-29 16:45:29.254520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.753 [2024-09-29 16:45:29.254556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.753 qpair failed and we were unable to recover it. 00:37:28.753 [2024-09-29 16:45:29.254692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.753 [2024-09-29 16:45:29.254728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.753 qpair failed and we were unable to recover it. 00:37:28.753 [2024-09-29 16:45:29.254902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.753 [2024-09-29 16:45:29.254935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.753 qpair failed and we were unable to recover it. 00:37:28.753 [2024-09-29 16:45:29.255094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.753 [2024-09-29 16:45:29.255131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.753 qpair failed and we were unable to recover it. 00:37:28.753 [2024-09-29 16:45:29.255311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.753 [2024-09-29 16:45:29.255348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.753 qpair failed and we were unable to recover it. 
00:37:28.753 [2024-09-29 16:45:29.255501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.753 [2024-09-29 16:45:29.255537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.753 qpair failed and we were unable to recover it. 00:37:28.753 [2024-09-29 16:45:29.255708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.753 [2024-09-29 16:45:29.255758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.753 qpair failed and we were unable to recover it. 00:37:28.753 [2024-09-29 16:45:29.255881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.753 [2024-09-29 16:45:29.255914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.753 qpair failed and we were unable to recover it. 00:37:28.753 [2024-09-29 16:45:29.256047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.753 [2024-09-29 16:45:29.256099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.753 qpair failed and we were unable to recover it. 00:37:28.753 [2024-09-29 16:45:29.256264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.753 [2024-09-29 16:45:29.256314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.753 qpair failed and we were unable to recover it. 
00:37:28.753 [2024-09-29 16:45:29.256529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.753 [2024-09-29 16:45:29.256566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.754 qpair failed and we were unable to recover it. 00:37:28.754 [2024-09-29 16:45:29.256761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.754 [2024-09-29 16:45:29.256795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.754 qpair failed and we were unable to recover it. 00:37:28.754 [2024-09-29 16:45:29.256908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.754 [2024-09-29 16:45:29.256943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.754 qpair failed and we were unable to recover it. 00:37:28.754 [2024-09-29 16:45:29.257167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.754 [2024-09-29 16:45:29.257204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.754 qpair failed and we were unable to recover it. 00:37:28.754 [2024-09-29 16:45:29.257402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.754 [2024-09-29 16:45:29.257440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.754 qpair failed and we were unable to recover it. 
00:37:28.754 [2024-09-29 16:45:29.257613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.754 [2024-09-29 16:45:29.257651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.754 qpair failed and we were unable to recover it. 00:37:28.754 [2024-09-29 16:45:29.257808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.754 [2024-09-29 16:45:29.257841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.754 qpair failed and we were unable to recover it. 00:37:28.754 [2024-09-29 16:45:29.257982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.754 [2024-09-29 16:45:29.258019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.754 qpair failed and we were unable to recover it. 00:37:28.754 [2024-09-29 16:45:29.258195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.754 [2024-09-29 16:45:29.258232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.754 qpair failed and we were unable to recover it. 00:37:28.754 [2024-09-29 16:45:29.258366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.754 [2024-09-29 16:45:29.258418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.754 qpair failed and we were unable to recover it. 
00:37:28.754 [2024-09-29 16:45:29.258576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.754 [2024-09-29 16:45:29.258612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.754 qpair failed and we were unable to recover it. 00:37:28.754 [2024-09-29 16:45:29.258763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.754 [2024-09-29 16:45:29.258796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.754 qpair failed and we were unable to recover it. 00:37:28.754 [2024-09-29 16:45:29.258963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.754 [2024-09-29 16:45:29.259000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.754 qpair failed and we were unable to recover it. 00:37:28.754 [2024-09-29 16:45:29.259192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.754 [2024-09-29 16:45:29.259230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.754 qpair failed and we were unable to recover it. 00:37:28.754 [2024-09-29 16:45:29.259360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.754 [2024-09-29 16:45:29.259397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.754 qpair failed and we were unable to recover it. 
00:37:28.754 [2024-09-29 16:45:29.259533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.754 [2024-09-29 16:45:29.259571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.754 qpair failed and we were unable to recover it. 00:37:28.754 [2024-09-29 16:45:29.259700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.754 [2024-09-29 16:45:29.259754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.754 qpair failed and we were unable to recover it. 00:37:28.754 [2024-09-29 16:45:29.259895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.754 [2024-09-29 16:45:29.259935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.754 qpair failed and we were unable to recover it. 00:37:28.754 [2024-09-29 16:45:29.260096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.754 [2024-09-29 16:45:29.260147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.754 qpair failed and we were unable to recover it. 00:37:28.754 [2024-09-29 16:45:29.260334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.754 [2024-09-29 16:45:29.260371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.754 qpair failed and we were unable to recover it. 
00:37:28.754 [2024-09-29 16:45:29.260524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.754 [2024-09-29 16:45:29.260561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:28.754 qpair failed and we were unable to recover it. 00:37:28.754 [2024-09-29 16:45:29.260722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.754 [2024-09-29 16:45:29.260769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:28.754 qpair failed and we were unable to recover it. 00:37:28.754 [2024-09-29 16:45:29.260945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.754 [2024-09-29 16:45:29.260992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:28.754 qpair failed and we were unable to recover it. 00:37:28.754 [2024-09-29 16:45:29.261201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.754 [2024-09-29 16:45:29.261254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.754 qpair failed and we were unable to recover it. 00:37:28.754 [2024-09-29 16:45:29.261499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.754 [2024-09-29 16:45:29.261560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.754 qpair failed and we were unable to recover it. 
00:37:28.754 [2024-09-29 16:45:29.261703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.754 [2024-09-29 16:45:29.261742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.754 qpair failed and we were unable to recover it. 00:37:28.754 [2024-09-29 16:45:29.261897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.754 [2024-09-29 16:45:29.261936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.754 qpair failed and we were unable to recover it. 00:37:28.754 [2024-09-29 16:45:29.262089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.754 [2024-09-29 16:45:29.262124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.754 qpair failed and we were unable to recover it. 00:37:28.754 [2024-09-29 16:45:29.262302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.754 [2024-09-29 16:45:29.262336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.754 qpair failed and we were unable to recover it. 00:37:28.754 [2024-09-29 16:45:29.262516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.754 [2024-09-29 16:45:29.262551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.754 qpair failed and we were unable to recover it. 
00:37:28.754 [2024-09-29 16:45:29.262727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.754 [2024-09-29 16:45:29.262762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.754 qpair failed and we were unable to recover it. 00:37:28.754 [2024-09-29 16:45:29.262908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.754 [2024-09-29 16:45:29.262948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.754 qpair failed and we were unable to recover it. 00:37:28.754 [2024-09-29 16:45:29.263096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.754 [2024-09-29 16:45:29.263130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.754 qpair failed and we were unable to recover it. 00:37:28.754 [2024-09-29 16:45:29.263276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.754 [2024-09-29 16:45:29.263311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:28.754 qpair failed and we were unable to recover it. 00:37:29.034 [2024-09-29 16:45:29.263484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.034 [2024-09-29 16:45:29.263523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.034 qpair failed and we were unable to recover it. 
00:37:29.034 [2024-09-29 16:45:29.263688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.034 [2024-09-29 16:45:29.263731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.034 qpair failed and we were unable to recover it. 00:37:29.034 [2024-09-29 16:45:29.263864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.034 [2024-09-29 16:45:29.263910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.034 qpair failed and we were unable to recover it. 00:37:29.034 [2024-09-29 16:45:29.264063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.034 [2024-09-29 16:45:29.264103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.034 qpair failed and we were unable to recover it. 00:37:29.034 [2024-09-29 16:45:29.264291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.034 [2024-09-29 16:45:29.264330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.034 qpair failed and we were unable to recover it. 00:37:29.034 [2024-09-29 16:45:29.264514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.034 [2024-09-29 16:45:29.264566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.034 qpair failed and we were unable to recover it. 
00:37:29.034 [2024-09-29 16:45:29.264763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.034 [2024-09-29 16:45:29.264810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.034 qpair failed and we were unable to recover it. 00:37:29.034 [2024-09-29 16:45:29.264942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.034 [2024-09-29 16:45:29.264978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.034 qpair failed and we were unable to recover it. 00:37:29.034 [2024-09-29 16:45:29.265214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.034 [2024-09-29 16:45:29.265268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.034 qpair failed and we were unable to recover it. 00:37:29.034 [2024-09-29 16:45:29.265429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.034 [2024-09-29 16:45:29.265463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.034 qpair failed and we were unable to recover it. 00:37:29.034 [2024-09-29 16:45:29.265579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.034 [2024-09-29 16:45:29.265626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.034 qpair failed and we were unable to recover it. 
00:37:29.034 [2024-09-29 16:45:29.265793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.034 [2024-09-29 16:45:29.265835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.034 qpair failed and we were unable to recover it. 00:37:29.034 [2024-09-29 16:45:29.265954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.034 [2024-09-29 16:45:29.265988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.034 qpair failed and we were unable to recover it. 00:37:29.034 [2024-09-29 16:45:29.266145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.034 [2024-09-29 16:45:29.266179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.034 qpair failed and we were unable to recover it. 00:37:29.034 [2024-09-29 16:45:29.266318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.034 [2024-09-29 16:45:29.266353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.034 qpair failed and we were unable to recover it. 00:37:29.034 [2024-09-29 16:45:29.266503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.034 [2024-09-29 16:45:29.266537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.034 qpair failed and we were unable to recover it. 
00:37:29.034 [2024-09-29 16:45:29.266685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.034 [2024-09-29 16:45:29.266728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.034 qpair failed and we were unable to recover it. 00:37:29.035 [2024-09-29 16:45:29.266845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.035 [2024-09-29 16:45:29.266880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.035 qpair failed and we were unable to recover it. 00:37:29.035 [2024-09-29 16:45:29.267039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.035 [2024-09-29 16:45:29.267073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.035 qpair failed and we were unable to recover it. 00:37:29.035 [2024-09-29 16:45:29.267235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.035 [2024-09-29 16:45:29.267286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.035 qpair failed and we were unable to recover it. 00:37:29.035 [2024-09-29 16:45:29.267428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.035 [2024-09-29 16:45:29.267462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.035 qpair failed and we were unable to recover it. 
00:37:29.035 [2024-09-29 16:45:29.267606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.035 [2024-09-29 16:45:29.267641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.035 qpair failed and we were unable to recover it. 00:37:29.035 [2024-09-29 16:45:29.267790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.035 [2024-09-29 16:45:29.267838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.035 qpair failed and we were unable to recover it. 00:37:29.035 [2024-09-29 16:45:29.267970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.035 [2024-09-29 16:45:29.268005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.035 qpair failed and we were unable to recover it. 00:37:29.035 [2024-09-29 16:45:29.268193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.035 [2024-09-29 16:45:29.268227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.035 qpair failed and we were unable to recover it. 00:37:29.035 [2024-09-29 16:45:29.268344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.035 [2024-09-29 16:45:29.268378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.035 qpair failed and we were unable to recover it. 
00:37:29.035 [2024-09-29 16:45:29.268500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.035 [2024-09-29 16:45:29.268534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.035 qpair failed and we were unable to recover it. 00:37:29.035 [2024-09-29 16:45:29.268646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.035 [2024-09-29 16:45:29.268688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.035 qpair failed and we were unable to recover it. 00:37:29.035 [2024-09-29 16:45:29.268849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.035 [2024-09-29 16:45:29.268884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.035 qpair failed and we were unable to recover it. 00:37:29.035 [2024-09-29 16:45:29.269034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.035 [2024-09-29 16:45:29.269068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.035 qpair failed and we were unable to recover it. 00:37:29.035 [2024-09-29 16:45:29.269212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.035 [2024-09-29 16:45:29.269246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.035 qpair failed and we were unable to recover it. 
00:37:29.035 [2024-09-29 16:45:29.269390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.035 [2024-09-29 16:45:29.269425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.035 qpair failed and we were unable to recover it.
00:37:29.035 [2024-09-29 16:45:29.269588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.035 [2024-09-29 16:45:29.269636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.035 qpair failed and we were unable to recover it.
00:37:29.035 [2024-09-29 16:45:29.269844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.035 [2024-09-29 16:45:29.269893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.035 qpair failed and we were unable to recover it.
00:37:29.035 [2024-09-29 16:45:29.270057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.035 [2024-09-29 16:45:29.270093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.035 qpair failed and we were unable to recover it.
00:37:29.035 [2024-09-29 16:45:29.270210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.035 [2024-09-29 16:45:29.270244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.035 qpair failed and we were unable to recover it.
00:37:29.035 [2024-09-29 16:45:29.270412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.035 [2024-09-29 16:45:29.270446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.035 qpair failed and we were unable to recover it.
00:37:29.035 [2024-09-29 16:45:29.270591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.035 [2024-09-29 16:45:29.270627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.035 qpair failed and we were unable to recover it.
00:37:29.035 [2024-09-29 16:45:29.270774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.035 [2024-09-29 16:45:29.270810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.035 qpair failed and we were unable to recover it.
00:37:29.035 [2024-09-29 16:45:29.270988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.035 [2024-09-29 16:45:29.271041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.035 qpair failed and we were unable to recover it.
00:37:29.035 [2024-09-29 16:45:29.271290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.035 [2024-09-29 16:45:29.271331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.035 qpair failed and we were unable to recover it.
00:37:29.035 [2024-09-29 16:45:29.271553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.035 [2024-09-29 16:45:29.271592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.035 qpair failed and we were unable to recover it.
00:37:29.035 [2024-09-29 16:45:29.271761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.035 [2024-09-29 16:45:29.271795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.035 qpair failed and we were unable to recover it.
00:37:29.035 [2024-09-29 16:45:29.271941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.035 [2024-09-29 16:45:29.271980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.035 qpair failed and we were unable to recover it.
00:37:29.035 [2024-09-29 16:45:29.272142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.035 [2024-09-29 16:45:29.272179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.035 qpair failed and we were unable to recover it.
00:37:29.035 [2024-09-29 16:45:29.272331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.035 [2024-09-29 16:45:29.272368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.035 qpair failed and we were unable to recover it.
00:37:29.035 [2024-09-29 16:45:29.272517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.035 [2024-09-29 16:45:29.272554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.035 qpair failed and we were unable to recover it.
00:37:29.035 [2024-09-29 16:45:29.272708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.035 [2024-09-29 16:45:29.272761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.036 qpair failed and we were unable to recover it.
00:37:29.036 [2024-09-29 16:45:29.272883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.036 [2024-09-29 16:45:29.272916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.036 qpair failed and we were unable to recover it.
00:37:29.036 [2024-09-29 16:45:29.273040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.036 [2024-09-29 16:45:29.273092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.036 qpair failed and we were unable to recover it.
00:37:29.036 [2024-09-29 16:45:29.273219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.036 [2024-09-29 16:45:29.273256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.036 qpair failed and we were unable to recover it.
00:37:29.036 [2024-09-29 16:45:29.273464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.036 [2024-09-29 16:45:29.273507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.036 qpair failed and we were unable to recover it.
00:37:29.036 [2024-09-29 16:45:29.273639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.036 [2024-09-29 16:45:29.273692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.036 qpair failed and we were unable to recover it.
00:37:29.036 [2024-09-29 16:45:29.273860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.036 [2024-09-29 16:45:29.273893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.036 qpair failed and we were unable to recover it.
00:37:29.036 [2024-09-29 16:45:29.274066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.036 [2024-09-29 16:45:29.274099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.036 qpair failed and we were unable to recover it.
00:37:29.036 [2024-09-29 16:45:29.274233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.036 [2024-09-29 16:45:29.274269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.036 qpair failed and we were unable to recover it.
00:37:29.036 [2024-09-29 16:45:29.274435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.036 [2024-09-29 16:45:29.274472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.036 qpair failed and we were unable to recover it.
00:37:29.036 [2024-09-29 16:45:29.274606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.036 [2024-09-29 16:45:29.274639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.036 qpair failed and we were unable to recover it.
00:37:29.036 [2024-09-29 16:45:29.274814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.036 [2024-09-29 16:45:29.274862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.036 qpair failed and we were unable to recover it.
00:37:29.036 [2024-09-29 16:45:29.275037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.036 [2024-09-29 16:45:29.275090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.036 qpair failed and we were unable to recover it.
00:37:29.036 [2024-09-29 16:45:29.275252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.036 [2024-09-29 16:45:29.275292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.036 qpair failed and we were unable to recover it.
00:37:29.036 [2024-09-29 16:45:29.275448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.036 [2024-09-29 16:45:29.275487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.036 qpair failed and we were unable to recover it.
00:37:29.036 [2024-09-29 16:45:29.275642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.036 [2024-09-29 16:45:29.275686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.036 qpair failed and we were unable to recover it.
00:37:29.036 [2024-09-29 16:45:29.275855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.036 [2024-09-29 16:45:29.275889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.036 qpair failed and we were unable to recover it.
00:37:29.036 [2024-09-29 16:45:29.276041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.036 [2024-09-29 16:45:29.276075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.036 qpair failed and we were unable to recover it.
00:37:29.036 [2024-09-29 16:45:29.276209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.036 [2024-09-29 16:45:29.276244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.036 qpair failed and we were unable to recover it.
00:37:29.036 [2024-09-29 16:45:29.276402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.036 [2024-09-29 16:45:29.276439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.036 qpair failed and we were unable to recover it.
00:37:29.036 [2024-09-29 16:45:29.276597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.036 [2024-09-29 16:45:29.276634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.036 qpair failed and we were unable to recover it.
00:37:29.036 [2024-09-29 16:45:29.276780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.036 [2024-09-29 16:45:29.276816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.036 qpair failed and we were unable to recover it.
00:37:29.036 [2024-09-29 16:45:29.276946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.036 [2024-09-29 16:45:29.276997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.036 qpair failed and we were unable to recover it.
00:37:29.036 [2024-09-29 16:45:29.277150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.036 [2024-09-29 16:45:29.277187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.036 qpair failed and we were unable to recover it.
00:37:29.036 [2024-09-29 16:45:29.277368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.036 [2024-09-29 16:45:29.277408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.036 qpair failed and we were unable to recover it.
00:37:29.036 [2024-09-29 16:45:29.277539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.036 [2024-09-29 16:45:29.277578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.036 qpair failed and we were unable to recover it.
00:37:29.036 [2024-09-29 16:45:29.277767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.036 [2024-09-29 16:45:29.277802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.036 qpair failed and we were unable to recover it.
00:37:29.036 [2024-09-29 16:45:29.277972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.036 [2024-09-29 16:45:29.278017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.036 qpair failed and we were unable to recover it.
00:37:29.036 [2024-09-29 16:45:29.278203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.036 [2024-09-29 16:45:29.278267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.036 qpair failed and we were unable to recover it.
00:37:29.036 [2024-09-29 16:45:29.278416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.036 [2024-09-29 16:45:29.278461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.036 qpair failed and we were unable to recover it.
00:37:29.036 [2024-09-29 16:45:29.278592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.036 [2024-09-29 16:45:29.278630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.036 qpair failed and we were unable to recover it.
00:37:29.036 [2024-09-29 16:45:29.278812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.037 [2024-09-29 16:45:29.278861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.037 qpair failed and we were unable to recover it.
00:37:29.037 [2024-09-29 16:45:29.279033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.037 [2024-09-29 16:45:29.279085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.037 qpair failed and we were unable to recover it.
00:37:29.037 [2024-09-29 16:45:29.279316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.037 [2024-09-29 16:45:29.279373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.037 qpair failed and we were unable to recover it.
00:37:29.037 [2024-09-29 16:45:29.279545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.037 [2024-09-29 16:45:29.279583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.037 qpair failed and we were unable to recover it.
00:37:29.037 [2024-09-29 16:45:29.279755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.037 [2024-09-29 16:45:29.279789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.037 qpair failed and we were unable to recover it.
00:37:29.037 [2024-09-29 16:45:29.279939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.037 [2024-09-29 16:45:29.279993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.037 qpair failed and we were unable to recover it.
00:37:29.037 [2024-09-29 16:45:29.280131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.037 [2024-09-29 16:45:29.280166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.037 qpair failed and we were unable to recover it.
00:37:29.037 [2024-09-29 16:45:29.280431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.037 [2024-09-29 16:45:29.280486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.037 qpair failed and we were unable to recover it.
00:37:29.037 [2024-09-29 16:45:29.280636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.037 [2024-09-29 16:45:29.280679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.037 qpair failed and we were unable to recover it.
00:37:29.037 [2024-09-29 16:45:29.280822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.037 [2024-09-29 16:45:29.280855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.037 qpair failed and we were unable to recover it.
00:37:29.037 [2024-09-29 16:45:29.281028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.037 [2024-09-29 16:45:29.281065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.037 qpair failed and we were unable to recover it.
00:37:29.037 [2024-09-29 16:45:29.281229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.037 [2024-09-29 16:45:29.281306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.037 qpair failed and we were unable to recover it.
00:37:29.037 [2024-09-29 16:45:29.281461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.037 [2024-09-29 16:45:29.281497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.037 qpair failed and we were unable to recover it.
00:37:29.037 [2024-09-29 16:45:29.281691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.037 [2024-09-29 16:45:29.281741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.037 qpair failed and we were unable to recover it.
00:37:29.037 [2024-09-29 16:45:29.281853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.037 [2024-09-29 16:45:29.281886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.037 qpair failed and we were unable to recover it.
00:37:29.037 [2024-09-29 16:45:29.282061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.037 [2024-09-29 16:45:29.282111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.037 qpair failed and we were unable to recover it.
00:37:29.037 [2024-09-29 16:45:29.282369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.037 [2024-09-29 16:45:29.282425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.037 qpair failed and we were unable to recover it.
00:37:29.037 [2024-09-29 16:45:29.282576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.037 [2024-09-29 16:45:29.282613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.037 qpair failed and we were unable to recover it.
00:37:29.037 [2024-09-29 16:45:29.282781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.037 [2024-09-29 16:45:29.282814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.037 qpair failed and we were unable to recover it.
00:37:29.037 [2024-09-29 16:45:29.282955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.037 [2024-09-29 16:45:29.282988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.037 qpair failed and we were unable to recover it.
00:37:29.037 [2024-09-29 16:45:29.283100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.037 [2024-09-29 16:45:29.283151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.037 qpair failed and we were unable to recover it.
00:37:29.037 [2024-09-29 16:45:29.283308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.037 [2024-09-29 16:45:29.283345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.037 qpair failed and we were unable to recover it.
00:37:29.037 [2024-09-29 16:45:29.283484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.037 [2024-09-29 16:45:29.283521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.037 qpair failed and we were unable to recover it.
00:37:29.037 [2024-09-29 16:45:29.283655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.037 [2024-09-29 16:45:29.283700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.037 qpair failed and we were unable to recover it.
00:37:29.037 [2024-09-29 16:45:29.283855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.037 [2024-09-29 16:45:29.283888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.037 qpair failed and we were unable to recover it.
00:37:29.037 [2024-09-29 16:45:29.284092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.037 [2024-09-29 16:45:29.284126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.037 qpair failed and we were unable to recover it.
00:37:29.037 [2024-09-29 16:45:29.284272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.037 [2024-09-29 16:45:29.284309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.037 qpair failed and we were unable to recover it.
00:37:29.037 [2024-09-29 16:45:29.284476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.037 [2024-09-29 16:45:29.284514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.037 qpair failed and we were unable to recover it.
00:37:29.037 [2024-09-29 16:45:29.284652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.037 [2024-09-29 16:45:29.284694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.037 qpair failed and we were unable to recover it.
00:37:29.037 [2024-09-29 16:45:29.284830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.037 [2024-09-29 16:45:29.284863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.037 qpair failed and we were unable to recover it.
00:37:29.037 [2024-09-29 16:45:29.285032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.037 [2024-09-29 16:45:29.285068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.037 qpair failed and we were unable to recover it.
00:37:29.037 [2024-09-29 16:45:29.285241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.037 [2024-09-29 16:45:29.285277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.037 qpair failed and we were unable to recover it.
00:37:29.037 [2024-09-29 16:45:29.285429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.037 [2024-09-29 16:45:29.285466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.037 qpair failed and we were unable to recover it.
00:37:29.038 [2024-09-29 16:45:29.285583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.038 [2024-09-29 16:45:29.285620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.038 qpair failed and we were unable to recover it.
00:37:29.038 [2024-09-29 16:45:29.285770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.038 [2024-09-29 16:45:29.285803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.038 qpair failed and we were unable to recover it.
00:37:29.038 [2024-09-29 16:45:29.285942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.038 [2024-09-29 16:45:29.285976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.038 qpair failed and we were unable to recover it.
00:37:29.038 [2024-09-29 16:45:29.286139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.038 [2024-09-29 16:45:29.286175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.038 qpair failed and we were unable to recover it.
00:37:29.038 [2024-09-29 16:45:29.286313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.038 [2024-09-29 16:45:29.286365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.038 qpair failed and we were unable to recover it.
00:37:29.038 [2024-09-29 16:45:29.286496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.038 [2024-09-29 16:45:29.286534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.038 qpair failed and we were unable to recover it.
00:37:29.038 [2024-09-29 16:45:29.286687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.038 [2024-09-29 16:45:29.286739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.038 qpair failed and we were unable to recover it.
00:37:29.038 [2024-09-29 16:45:29.286853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.038 [2024-09-29 16:45:29.286886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.038 qpair failed and we were unable to recover it.
00:37:29.038 [2024-09-29 16:45:29.287036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.038 [2024-09-29 16:45:29.287088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.038 qpair failed and we were unable to recover it.
00:37:29.038 [2024-09-29 16:45:29.287236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.038 [2024-09-29 16:45:29.287273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.038 qpair failed and we were unable to recover it.
00:37:29.038 [2024-09-29 16:45:29.287447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.038 [2024-09-29 16:45:29.287484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.038 qpair failed and we were unable to recover it.
00:37:29.038 [2024-09-29 16:45:29.287634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.038 [2024-09-29 16:45:29.287679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.038 qpair failed and we were unable to recover it.
00:37:29.038 [2024-09-29 16:45:29.287847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.038 [2024-09-29 16:45:29.287880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.038 qpair failed and we were unable to recover it.
00:37:29.038 [2024-09-29 16:45:29.288003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.038 [2024-09-29 16:45:29.288036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.038 qpair failed and we were unable to recover it.
00:37:29.038 [2024-09-29 16:45:29.288151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.038 [2024-09-29 16:45:29.288184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.038 qpair failed and we were unable to recover it.
00:37:29.038 [2024-09-29 16:45:29.288296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.038 [2024-09-29 16:45:29.288330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.038 qpair failed and we were unable to recover it. 00:37:29.038 [2024-09-29 16:45:29.288490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.038 [2024-09-29 16:45:29.288527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.038 qpair failed and we were unable to recover it. 00:37:29.038 [2024-09-29 16:45:29.288647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.038 [2024-09-29 16:45:29.288693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.038 qpair failed and we were unable to recover it. 00:37:29.038 [2024-09-29 16:45:29.288822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.038 [2024-09-29 16:45:29.288856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.038 qpair failed and we were unable to recover it. 00:37:29.038 [2024-09-29 16:45:29.288999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.038 [2024-09-29 16:45:29.289032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.038 qpair failed and we were unable to recover it. 
00:37:29.038 [2024-09-29 16:45:29.289156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.038 [2024-09-29 16:45:29.289198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.038 qpair failed and we were unable to recover it. 00:37:29.038 [2024-09-29 16:45:29.289358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.038 [2024-09-29 16:45:29.289395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.038 qpair failed and we were unable to recover it. 00:37:29.038 [2024-09-29 16:45:29.289513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.038 [2024-09-29 16:45:29.289550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.038 qpair failed and we were unable to recover it. 00:37:29.038 [2024-09-29 16:45:29.289705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.038 [2024-09-29 16:45:29.289757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.038 qpair failed and we were unable to recover it. 00:37:29.038 [2024-09-29 16:45:29.289893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.038 [2024-09-29 16:45:29.289927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.038 qpair failed and we were unable to recover it. 
00:37:29.038 [2024-09-29 16:45:29.290073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.038 [2024-09-29 16:45:29.290124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.038 qpair failed and we were unable to recover it. 00:37:29.038 [2024-09-29 16:45:29.290271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.038 [2024-09-29 16:45:29.290307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.038 qpair failed and we were unable to recover it. 00:37:29.038 [2024-09-29 16:45:29.290458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.038 [2024-09-29 16:45:29.290495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.038 qpair failed and we were unable to recover it. 00:37:29.038 [2024-09-29 16:45:29.290627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.038 [2024-09-29 16:45:29.290660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.038 qpair failed and we were unable to recover it. 00:37:29.038 [2024-09-29 16:45:29.290806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.038 [2024-09-29 16:45:29.290839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.038 qpair failed and we were unable to recover it. 
00:37:29.038 [2024-09-29 16:45:29.291002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.038 [2024-09-29 16:45:29.291055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.038 qpair failed and we were unable to recover it. 00:37:29.038 [2024-09-29 16:45:29.291271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.038 [2024-09-29 16:45:29.291311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.038 qpair failed and we were unable to recover it. 00:37:29.038 [2024-09-29 16:45:29.291456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.038 [2024-09-29 16:45:29.291507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.038 qpair failed and we were unable to recover it. 00:37:29.039 [2024-09-29 16:45:29.291648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.039 [2024-09-29 16:45:29.291690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.039 qpair failed and we were unable to recover it. 00:37:29.039 [2024-09-29 16:45:29.291823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.039 [2024-09-29 16:45:29.291858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.039 qpair failed and we were unable to recover it. 
00:37:29.039 [2024-09-29 16:45:29.292040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.039 [2024-09-29 16:45:29.292074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.039 qpair failed and we were unable to recover it. 00:37:29.039 [2024-09-29 16:45:29.292181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.039 [2024-09-29 16:45:29.292215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.039 qpair failed and we were unable to recover it. 00:37:29.039 [2024-09-29 16:45:29.292377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.039 [2024-09-29 16:45:29.292415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.039 qpair failed and we were unable to recover it. 00:37:29.039 [2024-09-29 16:45:29.292550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.039 [2024-09-29 16:45:29.292585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.039 qpair failed and we were unable to recover it. 00:37:29.039 [2024-09-29 16:45:29.292728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.039 [2024-09-29 16:45:29.292762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.039 qpair failed and we were unable to recover it. 
00:37:29.039 [2024-09-29 16:45:29.292952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.039 [2024-09-29 16:45:29.292988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.039 qpair failed and we were unable to recover it. 00:37:29.039 [2024-09-29 16:45:29.293155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.039 [2024-09-29 16:45:29.293193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.039 qpair failed and we were unable to recover it. 00:37:29.039 [2024-09-29 16:45:29.293324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.039 [2024-09-29 16:45:29.293363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.039 qpair failed and we were unable to recover it. 00:37:29.039 [2024-09-29 16:45:29.293536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.039 [2024-09-29 16:45:29.293571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.039 qpair failed and we were unable to recover it. 00:37:29.039 [2024-09-29 16:45:29.293707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.039 [2024-09-29 16:45:29.293741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.039 qpair failed and we were unable to recover it. 
00:37:29.039 [2024-09-29 16:45:29.293903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.039 [2024-09-29 16:45:29.293967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.039 qpair failed and we were unable to recover it. 00:37:29.039 [2024-09-29 16:45:29.294138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.039 [2024-09-29 16:45:29.294177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.039 qpair failed and we were unable to recover it. 00:37:29.039 [2024-09-29 16:45:29.294337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.039 [2024-09-29 16:45:29.294374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.039 qpair failed and we were unable to recover it. 00:37:29.039 [2024-09-29 16:45:29.294497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.039 [2024-09-29 16:45:29.294534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.039 qpair failed and we were unable to recover it. 00:37:29.039 [2024-09-29 16:45:29.294705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.039 [2024-09-29 16:45:29.294740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.039 qpair failed and we were unable to recover it. 
00:37:29.039 [2024-09-29 16:45:29.294919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.039 [2024-09-29 16:45:29.294972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.039 qpair failed and we were unable to recover it. 00:37:29.039 [2024-09-29 16:45:29.295135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.039 [2024-09-29 16:45:29.295174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.039 qpair failed and we were unable to recover it. 00:37:29.039 [2024-09-29 16:45:29.295358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.039 [2024-09-29 16:45:29.295396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.039 qpair failed and we were unable to recover it. 00:37:29.039 [2024-09-29 16:45:29.295554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.039 [2024-09-29 16:45:29.295591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.039 qpair failed and we were unable to recover it. 00:37:29.039 [2024-09-29 16:45:29.295743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.039 [2024-09-29 16:45:29.295777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.039 qpair failed and we were unable to recover it. 
00:37:29.039 [2024-09-29 16:45:29.295890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.039 [2024-09-29 16:45:29.295923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.039 qpair failed and we were unable to recover it. 00:37:29.039 [2024-09-29 16:45:29.296037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.039 [2024-09-29 16:45:29.296070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.039 qpair failed and we were unable to recover it. 00:37:29.039 [2024-09-29 16:45:29.296216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.039 [2024-09-29 16:45:29.296254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.039 qpair failed and we were unable to recover it. 00:37:29.039 [2024-09-29 16:45:29.296432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.039 [2024-09-29 16:45:29.296468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.039 qpair failed and we were unable to recover it. 00:37:29.039 [2024-09-29 16:45:29.296597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.039 [2024-09-29 16:45:29.296634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.039 qpair failed and we were unable to recover it. 
00:37:29.039 [2024-09-29 16:45:29.296775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.040 [2024-09-29 16:45:29.296814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.040 qpair failed and we were unable to recover it. 00:37:29.040 [2024-09-29 16:45:29.297000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.040 [2024-09-29 16:45:29.297037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.040 qpair failed and we were unable to recover it. 00:37:29.040 [2024-09-29 16:45:29.297215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.040 [2024-09-29 16:45:29.297252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.040 qpair failed and we were unable to recover it. 00:37:29.040 [2024-09-29 16:45:29.297434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.040 [2024-09-29 16:45:29.297471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.040 qpair failed and we were unable to recover it. 00:37:29.040 [2024-09-29 16:45:29.297635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.040 [2024-09-29 16:45:29.297667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.040 qpair failed and we were unable to recover it. 
00:37:29.040 [2024-09-29 16:45:29.297807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.040 [2024-09-29 16:45:29.297840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.040 qpair failed and we were unable to recover it. 00:37:29.040 [2024-09-29 16:45:29.298000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.040 [2024-09-29 16:45:29.298037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.040 qpair failed and we were unable to recover it. 00:37:29.040 [2024-09-29 16:45:29.298214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.040 [2024-09-29 16:45:29.298251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.040 qpair failed and we were unable to recover it. 00:37:29.040 [2024-09-29 16:45:29.298402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.040 [2024-09-29 16:45:29.298438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.040 qpair failed and we were unable to recover it. 00:37:29.040 [2024-09-29 16:45:29.298580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.040 [2024-09-29 16:45:29.298613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.040 qpair failed and we were unable to recover it. 
00:37:29.040 [2024-09-29 16:45:29.298728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.040 [2024-09-29 16:45:29.298762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.040 qpair failed and we were unable to recover it. 00:37:29.040 [2024-09-29 16:45:29.298938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.040 [2024-09-29 16:45:29.298971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.040 qpair failed and we were unable to recover it. 00:37:29.040 [2024-09-29 16:45:29.299119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.040 [2024-09-29 16:45:29.299157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.040 qpair failed and we were unable to recover it. 00:37:29.040 [2024-09-29 16:45:29.299295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.040 [2024-09-29 16:45:29.299329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.040 qpair failed and we were unable to recover it. 00:37:29.040 [2024-09-29 16:45:29.299476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.040 [2024-09-29 16:45:29.299528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.040 qpair failed and we were unable to recover it. 
00:37:29.040 [2024-09-29 16:45:29.299716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.040 [2024-09-29 16:45:29.299783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.040 qpair failed and we were unable to recover it. 00:37:29.040 [2024-09-29 16:45:29.299921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.040 [2024-09-29 16:45:29.299957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.040 qpair failed and we were unable to recover it. 00:37:29.040 [2024-09-29 16:45:29.300075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.040 [2024-09-29 16:45:29.300109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.040 qpair failed and we were unable to recover it. 00:37:29.040 [2024-09-29 16:45:29.300221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.040 [2024-09-29 16:45:29.300255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.040 qpair failed and we were unable to recover it. 00:37:29.040 [2024-09-29 16:45:29.300425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.040 [2024-09-29 16:45:29.300459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.040 qpair failed and we were unable to recover it. 
00:37:29.040 [2024-09-29 16:45:29.300605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.040 [2024-09-29 16:45:29.300639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.040 qpair failed and we were unable to recover it. 00:37:29.040 [2024-09-29 16:45:29.300804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.040 [2024-09-29 16:45:29.300842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.040 qpair failed and we were unable to recover it. 00:37:29.040 [2024-09-29 16:45:29.300974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.040 [2024-09-29 16:45:29.301008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.040 qpair failed and we were unable to recover it. 00:37:29.040 [2024-09-29 16:45:29.301148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.040 [2024-09-29 16:45:29.301181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.040 qpair failed and we were unable to recover it. 00:37:29.040 [2024-09-29 16:45:29.301383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.040 [2024-09-29 16:45:29.301448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.040 qpair failed and we were unable to recover it. 
00:37:29.040 [2024-09-29 16:45:29.301580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.040 [2024-09-29 16:45:29.301613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.040 qpair failed and we were unable to recover it. 00:37:29.040 [2024-09-29 16:45:29.301759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.040 [2024-09-29 16:45:29.301810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.040 qpair failed and we were unable to recover it. 00:37:29.040 [2024-09-29 16:45:29.302072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.040 [2024-09-29 16:45:29.302136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.040 qpair failed and we were unable to recover it. 00:37:29.040 [2024-09-29 16:45:29.302311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.040 [2024-09-29 16:45:29.302346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.040 qpair failed and we were unable to recover it. 00:37:29.040 [2024-09-29 16:45:29.302463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.040 [2024-09-29 16:45:29.302497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.040 qpair failed and we were unable to recover it. 
00:37:29.040 [2024-09-29 16:45:29.302669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.041 [2024-09-29 16:45:29.302709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.041 qpair failed and we were unable to recover it. 00:37:29.041 [2024-09-29 16:45:29.302855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.041 [2024-09-29 16:45:29.302889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.041 qpair failed and we were unable to recover it. 00:37:29.041 [2024-09-29 16:45:29.303028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.041 [2024-09-29 16:45:29.303062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.041 qpair failed and we were unable to recover it. 00:37:29.041 [2024-09-29 16:45:29.303213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.041 [2024-09-29 16:45:29.303247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.041 qpair failed and we were unable to recover it. 00:37:29.041 [2024-09-29 16:45:29.303450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.041 [2024-09-29 16:45:29.303498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.041 qpair failed and we were unable to recover it. 
00:37:29.041 [2024-09-29 16:45:29.303639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.041 [2024-09-29 16:45:29.303705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.041 qpair failed and we were unable to recover it. 00:37:29.041 [2024-09-29 16:45:29.303837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.041 [2024-09-29 16:45:29.303873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.041 qpair failed and we were unable to recover it. 00:37:29.041 [2024-09-29 16:45:29.304020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.041 [2024-09-29 16:45:29.304055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.041 qpair failed and we were unable to recover it. 00:37:29.041 [2024-09-29 16:45:29.304221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.041 [2024-09-29 16:45:29.304272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.041 qpair failed and we were unable to recover it. 00:37:29.041 [2024-09-29 16:45:29.304432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.041 [2024-09-29 16:45:29.304490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.041 qpair failed and we were unable to recover it. 
00:37:29.041 [2024-09-29 16:45:29.304605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.041 [2024-09-29 16:45:29.304644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.041 qpair failed and we were unable to recover it. 00:37:29.041 [2024-09-29 16:45:29.304776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.041 [2024-09-29 16:45:29.304810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.041 qpair failed and we were unable to recover it. 00:37:29.041 [2024-09-29 16:45:29.305005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.041 [2024-09-29 16:45:29.305053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.041 qpair failed and we were unable to recover it. 00:37:29.041 [2024-09-29 16:45:29.305297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.041 [2024-09-29 16:45:29.305331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.041 qpair failed and we were unable to recover it. 00:37:29.041 [2024-09-29 16:45:29.305511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.041 [2024-09-29 16:45:29.305544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.041 qpair failed and we were unable to recover it. 
00:37:29.041 [2024-09-29 16:45:29.305687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.041 [2024-09-29 16:45:29.305722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.041 qpair failed and we were unable to recover it. 00:37:29.041 [2024-09-29 16:45:29.305850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.041 [2024-09-29 16:45:29.305886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.041 qpair failed and we were unable to recover it. 00:37:29.041 [2024-09-29 16:45:29.306066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.041 [2024-09-29 16:45:29.306101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.041 qpair failed and we were unable to recover it. 00:37:29.041 [2024-09-29 16:45:29.306297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.041 [2024-09-29 16:45:29.306350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.041 qpair failed and we were unable to recover it. 00:37:29.041 [2024-09-29 16:45:29.306493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.041 [2024-09-29 16:45:29.306526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.041 qpair failed and we were unable to recover it. 
00:37:29.041 [2024-09-29 16:45:29.306679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.041 [2024-09-29 16:45:29.306714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.041 qpair failed and we were unable to recover it.
00:37:29.041 [2024-09-29 16:45:29.306886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.041 [2024-09-29 16:45:29.306939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.041 qpair failed and we were unable to recover it.
00:37:29.041 [2024-09-29 16:45:29.307213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.041 [2024-09-29 16:45:29.307253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.041 qpair failed and we were unable to recover it.
00:37:29.041 [2024-09-29 16:45:29.307379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.041 [2024-09-29 16:45:29.307417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.041 qpair failed and we were unable to recover it.
00:37:29.041 [2024-09-29 16:45:29.307585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.041 [2024-09-29 16:45:29.307619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.041 qpair failed and we were unable to recover it.
00:37:29.041 [2024-09-29 16:45:29.307779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.041 [2024-09-29 16:45:29.307813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.041 qpair failed and we were unable to recover it.
00:37:29.041 [2024-09-29 16:45:29.307944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.041 [2024-09-29 16:45:29.307979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.041 qpair failed and we were unable to recover it.
00:37:29.041 [2024-09-29 16:45:29.308108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.041 [2024-09-29 16:45:29.308145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.041 qpair failed and we were unable to recover it.
00:37:29.041 [2024-09-29 16:45:29.308307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.041 [2024-09-29 16:45:29.308346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.041 qpair failed and we were unable to recover it.
00:37:29.041 [2024-09-29 16:45:29.308465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.041 [2024-09-29 16:45:29.308502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.041 qpair failed and we were unable to recover it.
00:37:29.041 [2024-09-29 16:45:29.308669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.041 [2024-09-29 16:45:29.308711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.041 qpair failed and we were unable to recover it.
00:37:29.041 [2024-09-29 16:45:29.308877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.041 [2024-09-29 16:45:29.308924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.041 qpair failed and we were unable to recover it.
00:37:29.041 [2024-09-29 16:45:29.309065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.041 [2024-09-29 16:45:29.309104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.041 qpair failed and we were unable to recover it.
00:37:29.041 [2024-09-29 16:45:29.309289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.041 [2024-09-29 16:45:29.309327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.041 qpair failed and we were unable to recover it.
00:37:29.041 [2024-09-29 16:45:29.309482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.042 [2024-09-29 16:45:29.309519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.042 qpair failed and we were unable to recover it.
00:37:29.042 [2024-09-29 16:45:29.309684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.042 [2024-09-29 16:45:29.309740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.042 qpair failed and we were unable to recover it.
00:37:29.042 [2024-09-29 16:45:29.309881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.042 [2024-09-29 16:45:29.309914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.042 qpair failed and we were unable to recover it.
00:37:29.042 [2024-09-29 16:45:29.310105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.042 [2024-09-29 16:45:29.310142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.042 qpair failed and we were unable to recover it.
00:37:29.042 [2024-09-29 16:45:29.310280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.042 [2024-09-29 16:45:29.310319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.042 qpair failed and we were unable to recover it.
00:37:29.042 [2024-09-29 16:45:29.310445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.042 [2024-09-29 16:45:29.310481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.042 qpair failed and we were unable to recover it.
00:37:29.042 [2024-09-29 16:45:29.310615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.042 [2024-09-29 16:45:29.310648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.042 qpair failed and we were unable to recover it.
00:37:29.042 [2024-09-29 16:45:29.310769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.042 [2024-09-29 16:45:29.310805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.042 qpair failed and we were unable to recover it.
00:37:29.042 [2024-09-29 16:45:29.310932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.042 [2024-09-29 16:45:29.310968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.042 qpair failed and we were unable to recover it.
00:37:29.042 [2024-09-29 16:45:29.311150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.042 [2024-09-29 16:45:29.311215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.042 qpair failed and we were unable to recover it.
00:37:29.042 [2024-09-29 16:45:29.311392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.042 [2024-09-29 16:45:29.311446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.042 qpair failed and we were unable to recover it.
00:37:29.042 [2024-09-29 16:45:29.311590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.042 [2024-09-29 16:45:29.311625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.042 qpair failed and we were unable to recover it.
00:37:29.042 [2024-09-29 16:45:29.311802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.042 [2024-09-29 16:45:29.311839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.042 qpair failed and we were unable to recover it.
00:37:29.042 [2024-09-29 16:45:29.311969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.042 [2024-09-29 16:45:29.312021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.042 qpair failed and we were unable to recover it.
00:37:29.042 [2024-09-29 16:45:29.312139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.042 [2024-09-29 16:45:29.312173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.042 qpair failed and we were unable to recover it.
00:37:29.042 [2024-09-29 16:45:29.312340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.042 [2024-09-29 16:45:29.312372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.042 qpair failed and we were unable to recover it.
00:37:29.042 [2024-09-29 16:45:29.312518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.042 [2024-09-29 16:45:29.312557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.042 qpair failed and we were unable to recover it.
00:37:29.042 [2024-09-29 16:45:29.312728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.042 [2024-09-29 16:45:29.312763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.042 qpair failed and we were unable to recover it.
00:37:29.042 [2024-09-29 16:45:29.312938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.042 [2024-09-29 16:45:29.312970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.042 qpair failed and we were unable to recover it.
00:37:29.042 [2024-09-29 16:45:29.313109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.042 [2024-09-29 16:45:29.313142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.042 qpair failed and we were unable to recover it.
00:37:29.042 [2024-09-29 16:45:29.313320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.042 [2024-09-29 16:45:29.313355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.042 qpair failed and we were unable to recover it.
00:37:29.042 [2024-09-29 16:45:29.313473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.042 [2024-09-29 16:45:29.313508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.042 qpair failed and we were unable to recover it.
00:37:29.042 [2024-09-29 16:45:29.313776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.042 [2024-09-29 16:45:29.313811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.042 qpair failed and we were unable to recover it.
00:37:29.042 [2024-09-29 16:45:29.313950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.042 [2024-09-29 16:45:29.314004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.042 qpair failed and we were unable to recover it.
00:37:29.042 [2024-09-29 16:45:29.314176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.042 [2024-09-29 16:45:29.314230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.042 qpair failed and we were unable to recover it.
00:37:29.042 [2024-09-29 16:45:29.314417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.042 [2024-09-29 16:45:29.314470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.042 qpair failed and we were unable to recover it.
00:37:29.042 [2024-09-29 16:45:29.314612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.042 [2024-09-29 16:45:29.314645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.042 qpair failed and we were unable to recover it.
00:37:29.042 [2024-09-29 16:45:29.314816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.042 [2024-09-29 16:45:29.314872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.042 qpair failed and we were unable to recover it.
00:37:29.042 [2024-09-29 16:45:29.315024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.042 [2024-09-29 16:45:29.315062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.042 qpair failed and we were unable to recover it.
00:37:29.042 [2024-09-29 16:45:29.315215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.042 [2024-09-29 16:45:29.315252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.042 qpair failed and we were unable to recover it.
00:37:29.042 [2024-09-29 16:45:29.315383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.042 [2024-09-29 16:45:29.315421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.042 qpair failed and we were unable to recover it.
00:37:29.042 [2024-09-29 16:45:29.315581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.042 [2024-09-29 16:45:29.315617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.042 qpair failed and we were unable to recover it.
00:37:29.042 [2024-09-29 16:45:29.315797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.042 [2024-09-29 16:45:29.315831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.043 qpair failed and we were unable to recover it.
00:37:29.043 [2024-09-29 16:45:29.315972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.043 [2024-09-29 16:45:29.316010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.043 qpair failed and we were unable to recover it.
00:37:29.043 [2024-09-29 16:45:29.316142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.043 [2024-09-29 16:45:29.316178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.043 qpair failed and we were unable to recover it.
00:37:29.043 [2024-09-29 16:45:29.316305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.043 [2024-09-29 16:45:29.316342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.043 qpair failed and we were unable to recover it.
00:37:29.043 [2024-09-29 16:45:29.316493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.043 [2024-09-29 16:45:29.316530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.043 qpair failed and we were unable to recover it.
00:37:29.043 [2024-09-29 16:45:29.316725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.043 [2024-09-29 16:45:29.316766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.043 qpair failed and we were unable to recover it.
00:37:29.043 [2024-09-29 16:45:29.316935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.043 [2024-09-29 16:45:29.316969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.043 qpair failed and we were unable to recover it.
00:37:29.043 [2024-09-29 16:45:29.317159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.043 [2024-09-29 16:45:29.317196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.043 qpair failed and we were unable to recover it.
00:37:29.043 [2024-09-29 16:45:29.317351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.043 [2024-09-29 16:45:29.317387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.043 qpair failed and we were unable to recover it.
00:37:29.043 [2024-09-29 16:45:29.317603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.043 [2024-09-29 16:45:29.317640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.043 qpair failed and we were unable to recover it.
00:37:29.043 [2024-09-29 16:45:29.317834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.043 [2024-09-29 16:45:29.317882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.043 qpair failed and we were unable to recover it.
00:37:29.043 [2024-09-29 16:45:29.318084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.043 [2024-09-29 16:45:29.318137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.043 qpair failed and we were unable to recover it.
00:37:29.043 [2024-09-29 16:45:29.318361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.043 [2024-09-29 16:45:29.318416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.043 qpair failed and we were unable to recover it.
00:37:29.043 [2024-09-29 16:45:29.318575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.043 [2024-09-29 16:45:29.318629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.043 qpair failed and we were unable to recover it.
00:37:29.043 [2024-09-29 16:45:29.318848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.043 [2024-09-29 16:45:29.318883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.043 qpair failed and we were unable to recover it.
00:37:29.043 [2024-09-29 16:45:29.319010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.043 [2024-09-29 16:45:29.319044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.043 qpair failed and we were unable to recover it.
00:37:29.043 [2024-09-29 16:45:29.319187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.043 [2024-09-29 16:45:29.319240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.043 qpair failed and we were unable to recover it.
00:37:29.043 [2024-09-29 16:45:29.319409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.043 [2024-09-29 16:45:29.319468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.043 qpair failed and we were unable to recover it.
00:37:29.043 [2024-09-29 16:45:29.319650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.043 [2024-09-29 16:45:29.319710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.043 qpair failed and we were unable to recover it.
00:37:29.043 [2024-09-29 16:45:29.319888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.043 [2024-09-29 16:45:29.319928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.043 qpair failed and we were unable to recover it.
00:37:29.043 [2024-09-29 16:45:29.320103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.043 [2024-09-29 16:45:29.320142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.043 qpair failed and we were unable to recover it.
00:37:29.043 [2024-09-29 16:45:29.320283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.043 [2024-09-29 16:45:29.320321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.043 qpair failed and we were unable to recover it.
00:37:29.043 [2024-09-29 16:45:29.320441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.043 [2024-09-29 16:45:29.320477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.043 qpair failed and we were unable to recover it.
00:37:29.043 [2024-09-29 16:45:29.320647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.043 [2024-09-29 16:45:29.320708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.043 qpair failed and we were unable to recover it.
00:37:29.043 [2024-09-29 16:45:29.320854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.043 [2024-09-29 16:45:29.320897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.043 qpair failed and we were unable to recover it.
00:37:29.043 [2024-09-29 16:45:29.321020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.043 [2024-09-29 16:45:29.321056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.043 qpair failed and we were unable to recover it.
00:37:29.043 [2024-09-29 16:45:29.321192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.043 [2024-09-29 16:45:29.321230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.043 qpair failed and we were unable to recover it.
00:37:29.043 [2024-09-29 16:45:29.321499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.043 [2024-09-29 16:45:29.321556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.043 qpair failed and we were unable to recover it.
00:37:29.043 [2024-09-29 16:45:29.321725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.043 [2024-09-29 16:45:29.321760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.043 qpair failed and we were unable to recover it.
00:37:29.043 [2024-09-29 16:45:29.321924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.043 [2024-09-29 16:45:29.321975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.043 qpair failed and we were unable to recover it.
00:37:29.043 [2024-09-29 16:45:29.322106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.043 [2024-09-29 16:45:29.322158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.043 qpair failed and we were unable to recover it.
00:37:29.043 [2024-09-29 16:45:29.322321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.043 [2024-09-29 16:45:29.322358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.043 qpair failed and we were unable to recover it.
00:37:29.043 [2024-09-29 16:45:29.322499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.043 [2024-09-29 16:45:29.322553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.043 qpair failed and we were unable to recover it.
00:37:29.043 [2024-09-29 16:45:29.322720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.044 [2024-09-29 16:45:29.322767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.044 qpair failed and we were unable to recover it.
00:37:29.044 [2024-09-29 16:45:29.322904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.044 [2024-09-29 16:45:29.322940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.044 qpair failed and we were unable to recover it.
00:37:29.044 [2024-09-29 16:45:29.323104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.044 [2024-09-29 16:45:29.323157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.044 qpair failed and we were unable to recover it.
00:37:29.044 [2024-09-29 16:45:29.323283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.044 [2024-09-29 16:45:29.323382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.044 qpair failed and we were unable to recover it.
00:37:29.044 [2024-09-29 16:45:29.323494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.044 [2024-09-29 16:45:29.323527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.044 qpair failed and we were unable to recover it.
00:37:29.044 [2024-09-29 16:45:29.323684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.044 [2024-09-29 16:45:29.323719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.044 qpair failed and we were unable to recover it.
00:37:29.044 [2024-09-29 16:45:29.323852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.044 [2024-09-29 16:45:29.323887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.044 qpair failed and we were unable to recover it.
00:37:29.044 [2024-09-29 16:45:29.324104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.044 [2024-09-29 16:45:29.324191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.044 qpair failed and we were unable to recover it.
00:37:29.044 [2024-09-29 16:45:29.324358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.044 [2024-09-29 16:45:29.324398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.044 qpair failed and we were unable to recover it.
00:37:29.044 [2024-09-29 16:45:29.324591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.044 [2024-09-29 16:45:29.324628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.044 qpair failed and we were unable to recover it.
00:37:29.044 [2024-09-29 16:45:29.324786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.044 [2024-09-29 16:45:29.324822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.044 qpair failed and we were unable to recover it.
00:37:29.044 [2024-09-29 16:45:29.325002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.044 [2024-09-29 16:45:29.325037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.044 qpair failed and we were unable to recover it.
00:37:29.044 [2024-09-29 16:45:29.325195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.044 [2024-09-29 16:45:29.325254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.044 qpair failed and we were unable to recover it.
00:37:29.044 [2024-09-29 16:45:29.325394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.044 [2024-09-29 16:45:29.325445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.044 qpair failed and we were unable to recover it.
00:37:29.044 [2024-09-29 16:45:29.325582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.044 [2024-09-29 16:45:29.325615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.044 qpair failed and we were unable to recover it.
00:37:29.044 [2024-09-29 16:45:29.325792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.044 [2024-09-29 16:45:29.325840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.044 qpair failed and we were unable to recover it.
00:37:29.044 [2024-09-29 16:45:29.325988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.044 [2024-09-29 16:45:29.326046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.044 qpair failed and we were unable to recover it.
00:37:29.044 [2024-09-29 16:45:29.326246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.044 [2024-09-29 16:45:29.326283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.044 qpair failed and we were unable to recover it.
00:37:29.044 [2024-09-29 16:45:29.326473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.044 [2024-09-29 16:45:29.326512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.044 qpair failed and we were unable to recover it.
00:37:29.044 [2024-09-29 16:45:29.326700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.044 [2024-09-29 16:45:29.326740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.044 qpair failed and we were unable to recover it.
00:37:29.044 [2024-09-29 16:45:29.326898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.044 [2024-09-29 16:45:29.326965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.044 qpair failed and we were unable to recover it.
00:37:29.044 [2024-09-29 16:45:29.327124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.044 [2024-09-29 16:45:29.327179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.044 qpair failed and we were unable to recover it.
00:37:29.044 [2024-09-29 16:45:29.327331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.044 [2024-09-29 16:45:29.327365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.044 qpair failed and we were unable to recover it.
00:37:29.044 [2024-09-29 16:45:29.327508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.044 [2024-09-29 16:45:29.327543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.044 qpair failed and we were unable to recover it.
00:37:29.044 [2024-09-29 16:45:29.327657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.044 [2024-09-29 16:45:29.327700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.044 qpair failed and we were unable to recover it.
00:37:29.044 [2024-09-29 16:45:29.327857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.044 [2024-09-29 16:45:29.327891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.044 qpair failed and we were unable to recover it.
00:37:29.044 [2024-09-29 16:45:29.328012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.044 [2024-09-29 16:45:29.328047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.044 qpair failed and we were unable to recover it.
00:37:29.044 [2024-09-29 16:45:29.328206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.044 [2024-09-29 16:45:29.328240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.044 qpair failed and we were unable to recover it.
00:37:29.044 [2024-09-29 16:45:29.328394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.044 [2024-09-29 16:45:29.328454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.044 qpair failed and we were unable to recover it.
00:37:29.044 [2024-09-29 16:45:29.328577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.044 [2024-09-29 16:45:29.328612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.044 qpair failed and we were unable to recover it.
00:37:29.045 [2024-09-29 16:45:29.328808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.045 [2024-09-29 16:45:29.328861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.045 qpair failed and we were unable to recover it.
00:37:29.045 [2024-09-29 16:45:29.329056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.045 [2024-09-29 16:45:29.329115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.045 qpair failed and we were unable to recover it.
00:37:29.045 [2024-09-29 16:45:29.329388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.045 [2024-09-29 16:45:29.329449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.045 qpair failed and we were unable to recover it.
00:37:29.045 [2024-09-29 16:45:29.329568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.045 [2024-09-29 16:45:29.329602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.045 qpair failed and we were unable to recover it.
00:37:29.045 [2024-09-29 16:45:29.329776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.045 [2024-09-29 16:45:29.329829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.045 qpair failed and we were unable to recover it.
00:37:29.045 [2024-09-29 16:45:29.329970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.045 [2024-09-29 16:45:29.330022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.045 qpair failed and we were unable to recover it.
00:37:29.045 [2024-09-29 16:45:29.330179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.045 [2024-09-29 16:45:29.330228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.045 qpair failed and we were unable to recover it.
00:37:29.045 [2024-09-29 16:45:29.330369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.045 [2024-09-29 16:45:29.330403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.045 qpair failed and we were unable to recover it.
00:37:29.045 [2024-09-29 16:45:29.330569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.045 [2024-09-29 16:45:29.330603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.045 qpair failed and we were unable to recover it.
00:37:29.045 [2024-09-29 16:45:29.330743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.045 [2024-09-29 16:45:29.330798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.045 qpair failed and we were unable to recover it.
00:37:29.045 [2024-09-29 16:45:29.330991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.045 [2024-09-29 16:45:29.331037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.045 qpair failed and we were unable to recover it.
00:37:29.045 [2024-09-29 16:45:29.331196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.045 [2024-09-29 16:45:29.331231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.045 qpair failed and we were unable to recover it.
00:37:29.045 [2024-09-29 16:45:29.331346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.045 [2024-09-29 16:45:29.331380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.045 qpair failed and we were unable to recover it.
00:37:29.045 [2024-09-29 16:45:29.331500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.045 [2024-09-29 16:45:29.331534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.045 qpair failed and we were unable to recover it.
00:37:29.045 [2024-09-29 16:45:29.331699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.045 [2024-09-29 16:45:29.331748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.045 qpair failed and we were unable to recover it.
00:37:29.045 [2024-09-29 16:45:29.331941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.045 [2024-09-29 16:45:29.331994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.045 qpair failed and we were unable to recover it.
00:37:29.045 [2024-09-29 16:45:29.332164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.045 [2024-09-29 16:45:29.332222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.045 qpair failed and we were unable to recover it.
00:37:29.045 [2024-09-29 16:45:29.332387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.045 [2024-09-29 16:45:29.332441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.045 qpair failed and we were unable to recover it.
00:37:29.045 [2024-09-29 16:45:29.332585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.045 [2024-09-29 16:45:29.332620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.045 qpair failed and we were unable to recover it.
00:37:29.045 [2024-09-29 16:45:29.332805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.045 [2024-09-29 16:45:29.332871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.045 qpair failed and we were unable to recover it.
00:37:29.045 [2024-09-29 16:45:29.333060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.045 [2024-09-29 16:45:29.333124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.045 qpair failed and we were unable to recover it.
00:37:29.045 [2024-09-29 16:45:29.333385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.045 [2024-09-29 16:45:29.333441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.045 qpair failed and we were unable to recover it.
00:37:29.045 [2024-09-29 16:45:29.333596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.045 [2024-09-29 16:45:29.333629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.045 qpair failed and we were unable to recover it.
00:37:29.045 [2024-09-29 16:45:29.333789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.045 [2024-09-29 16:45:29.333825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.045 qpair failed and we were unable to recover it.
00:37:29.045 [2024-09-29 16:45:29.333972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.045 [2024-09-29 16:45:29.334010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.045 qpair failed and we were unable to recover it.
00:37:29.045 [2024-09-29 16:45:29.334166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.046 [2024-09-29 16:45:29.334203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.046 qpair failed and we were unable to recover it.
00:37:29.046 [2024-09-29 16:45:29.334459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.046 [2024-09-29 16:45:29.334539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.046 qpair failed and we were unable to recover it.
00:37:29.046 [2024-09-29 16:45:29.334663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.046 [2024-09-29 16:45:29.334711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.046 qpair failed and we were unable to recover it.
00:37:29.046 [2024-09-29 16:45:29.334862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.046 [2024-09-29 16:45:29.334911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.046 qpair failed and we were unable to recover it.
00:37:29.046 [2024-09-29 16:45:29.335031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.046 [2024-09-29 16:45:29.335069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.046 qpair failed and we were unable to recover it.
00:37:29.046 [2024-09-29 16:45:29.335209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.046 [2024-09-29 16:45:29.335261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.046 qpair failed and we were unable to recover it.
00:37:29.046 [2024-09-29 16:45:29.335495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.046 [2024-09-29 16:45:29.335530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.046 qpair failed and we were unable to recover it.
00:37:29.046 [2024-09-29 16:45:29.335693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.046 [2024-09-29 16:45:29.335750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.046 qpair failed and we were unable to recover it.
00:37:29.046 [2024-09-29 16:45:29.335883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.046 [2024-09-29 16:45:29.335921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.046 qpair failed and we were unable to recover it.
00:37:29.046 [2024-09-29 16:45:29.336042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.046 [2024-09-29 16:45:29.336077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.046 qpair failed and we were unable to recover it.
00:37:29.046 [2024-09-29 16:45:29.336258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.046 [2024-09-29 16:45:29.336294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.046 qpair failed and we were unable to recover it.
00:37:29.046 [2024-09-29 16:45:29.336467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.046 [2024-09-29 16:45:29.336512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.046 qpair failed and we were unable to recover it.
00:37:29.046 [2024-09-29 16:45:29.336679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.046 [2024-09-29 16:45:29.336727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.046 qpair failed and we were unable to recover it.
00:37:29.046 [2024-09-29 16:45:29.336903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.046 [2024-09-29 16:45:29.336990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.046 qpair failed and we were unable to recover it.
00:37:29.046 [2024-09-29 16:45:29.337178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.046 [2024-09-29 16:45:29.337212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.046 qpair failed and we were unable to recover it.
00:37:29.046 [2024-09-29 16:45:29.337478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.046 [2024-09-29 16:45:29.337535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.046 qpair failed and we were unable to recover it.
00:37:29.046 [2024-09-29 16:45:29.337681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.046 [2024-09-29 16:45:29.337715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.046 qpair failed and we were unable to recover it.
00:37:29.046 [2024-09-29 16:45:29.337908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.046 [2024-09-29 16:45:29.337962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.046 qpair failed and we were unable to recover it.
00:37:29.046 [2024-09-29 16:45:29.338141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.046 [2024-09-29 16:45:29.338181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.046 qpair failed and we were unable to recover it.
00:37:29.046 [2024-09-29 16:45:29.338440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.046 [2024-09-29 16:45:29.338511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.046 qpair failed and we were unable to recover it.
00:37:29.046 [2024-09-29 16:45:29.338688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.046 [2024-09-29 16:45:29.338729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.046 qpair failed and we were unable to recover it.
00:37:29.046 [2024-09-29 16:45:29.338846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.046 [2024-09-29 16:45:29.338881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.046 qpair failed and we were unable to recover it.
00:37:29.046 [2024-09-29 16:45:29.339040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.046 [2024-09-29 16:45:29.339077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.046 qpair failed and we were unable to recover it.
00:37:29.046 [2024-09-29 16:45:29.339229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.046 [2024-09-29 16:45:29.339326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.046 qpair failed and we were unable to recover it.
00:37:29.046 [2024-09-29 16:45:29.339532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.046 [2024-09-29 16:45:29.339588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.046 qpair failed and we were unable to recover it.
00:37:29.046 [2024-09-29 16:45:29.339762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.046 [2024-09-29 16:45:29.339796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.046 qpair failed and we were unable to recover it.
00:37:29.046 [2024-09-29 16:45:29.339994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.046 [2024-09-29 16:45:29.340056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.046 qpair failed and we were unable to recover it.
00:37:29.046 [2024-09-29 16:45:29.340173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.046 [2024-09-29 16:45:29.340210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.046 qpair failed and we were unable to recover it.
00:37:29.046 [2024-09-29 16:45:29.340335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.046 [2024-09-29 16:45:29.340372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.046 qpair failed and we were unable to recover it.
00:37:29.046 [2024-09-29 16:45:29.340510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.046 [2024-09-29 16:45:29.340544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.046 qpair failed and we were unable to recover it.
00:37:29.046 [2024-09-29 16:45:29.340701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.046 [2024-09-29 16:45:29.340736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.046 qpair failed and we were unable to recover it.
00:37:29.046 [2024-09-29 16:45:29.340878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.046 [2024-09-29 16:45:29.340926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.046 qpair failed and we were unable to recover it.
00:37:29.046 [2024-09-29 16:45:29.341165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.046 [2024-09-29 16:45:29.341204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.046 qpair failed and we were unable to recover it.
00:37:29.046 [2024-09-29 16:45:29.341356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.046 [2024-09-29 16:45:29.341410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.046 qpair failed and we were unable to recover it.
00:37:29.047 [2024-09-29 16:45:29.341575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.047 [2024-09-29 16:45:29.341609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.047 qpair failed and we were unable to recover it.
00:37:29.047 [2024-09-29 16:45:29.341766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.047 [2024-09-29 16:45:29.341800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.047 qpair failed and we were unable to recover it.
00:37:29.047 [2024-09-29 16:45:29.341932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.047 [2024-09-29 16:45:29.341968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.047 qpair failed and we were unable to recover it.
00:37:29.047 [2024-09-29 16:45:29.342116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.047 [2024-09-29 16:45:29.342153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.047 qpair failed and we were unable to recover it.
00:37:29.047 [2024-09-29 16:45:29.342316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.047 [2024-09-29 16:45:29.342354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.047 qpair failed and we were unable to recover it.
00:37:29.047 [2024-09-29 16:45:29.342509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.047 [2024-09-29 16:45:29.342546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.047 qpair failed and we were unable to recover it.
00:37:29.047 [2024-09-29 16:45:29.342695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.047 [2024-09-29 16:45:29.342745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.047 qpair failed and we were unable to recover it.
00:37:29.047 [2024-09-29 16:45:29.342868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.047 [2024-09-29 16:45:29.342915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.047 qpair failed and we were unable to recover it.
00:37:29.047 [2024-09-29 16:45:29.343196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.047 [2024-09-29 16:45:29.343264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.047 qpair failed and we were unable to recover it.
00:37:29.047 [2024-09-29 16:45:29.343405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.047 [2024-09-29 16:45:29.343481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.047 qpair failed and we were unable to recover it.
00:37:29.047 [2024-09-29 16:45:29.343605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.047 [2024-09-29 16:45:29.343639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.047 qpair failed and we were unable to recover it.
00:37:29.047 [2024-09-29 16:45:29.343812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.047 [2024-09-29 16:45:29.343880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.047 qpair failed and we were unable to recover it.
00:37:29.047 [2024-09-29 16:45:29.344105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.047 [2024-09-29 16:45:29.344191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.047 qpair failed and we were unable to recover it.
00:37:29.047 [2024-09-29 16:45:29.344421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.047 [2024-09-29 16:45:29.344479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.047 qpair failed and we were unable to recover it.
00:37:29.047 [2024-09-29 16:45:29.344648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.047 [2024-09-29 16:45:29.344688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.047 qpair failed and we were unable to recover it.
00:37:29.047 [2024-09-29 16:45:29.344827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.047 [2024-09-29 16:45:29.344860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.047 qpair failed and we were unable to recover it.
00:37:29.047 [2024-09-29 16:45:29.345071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.047 [2024-09-29 16:45:29.345124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.047 qpair failed and we were unable to recover it.
00:37:29.047 [2024-09-29 16:45:29.345473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.047 [2024-09-29 16:45:29.345532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.047 qpair failed and we were unable to recover it.
00:37:29.047 [2024-09-29 16:45:29.345716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.047 [2024-09-29 16:45:29.345751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.047 qpair failed and we were unable to recover it.
00:37:29.047 [2024-09-29 16:45:29.345868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.047 [2024-09-29 16:45:29.345903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.047 qpair failed and we were unable to recover it.
00:37:29.047 [2024-09-29 16:45:29.346058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.047 [2024-09-29 16:45:29.346096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.047 qpair failed and we were unable to recover it.
00:37:29.047 [2024-09-29 16:45:29.346254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.047 [2024-09-29 16:45:29.346304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.047 qpair failed and we were unable to recover it.
00:37:29.047 [2024-09-29 16:45:29.346570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.047 [2024-09-29 16:45:29.346605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.047 qpair failed and we were unable to recover it. 00:37:29.047 [2024-09-29 16:45:29.346767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.047 [2024-09-29 16:45:29.346801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.047 qpair failed and we were unable to recover it. 00:37:29.047 [2024-09-29 16:45:29.346943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.047 [2024-09-29 16:45:29.346977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.047 qpair failed and we were unable to recover it. 00:37:29.047 [2024-09-29 16:45:29.347124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.047 [2024-09-29 16:45:29.347157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.047 qpair failed and we were unable to recover it. 00:37:29.047 [2024-09-29 16:45:29.347353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.047 [2024-09-29 16:45:29.347390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.047 qpair failed and we were unable to recover it. 
00:37:29.047 [2024-09-29 16:45:29.347547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.047 [2024-09-29 16:45:29.347584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.047 qpair failed and we were unable to recover it. 00:37:29.047 [2024-09-29 16:45:29.347768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.047 [2024-09-29 16:45:29.347804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.047 qpair failed and we were unable to recover it. 00:37:29.047 [2024-09-29 16:45:29.347950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.047 [2024-09-29 16:45:29.347983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.047 qpair failed and we were unable to recover it. 00:37:29.047 [2024-09-29 16:45:29.348146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.047 [2024-09-29 16:45:29.348184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.048 qpair failed and we were unable to recover it. 00:37:29.048 [2024-09-29 16:45:29.348372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.048 [2024-09-29 16:45:29.348427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.048 qpair failed and we were unable to recover it. 
00:37:29.048 [2024-09-29 16:45:29.348603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.048 [2024-09-29 16:45:29.348640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.048 qpair failed and we were unable to recover it. 00:37:29.048 [2024-09-29 16:45:29.348812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.048 [2024-09-29 16:45:29.348860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.048 qpair failed and we were unable to recover it. 00:37:29.048 [2024-09-29 16:45:29.349020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.048 [2024-09-29 16:45:29.349074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.048 qpair failed and we were unable to recover it. 00:37:29.048 [2024-09-29 16:45:29.349214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.048 [2024-09-29 16:45:29.349310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.048 qpair failed and we were unable to recover it. 00:37:29.048 [2024-09-29 16:45:29.349567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.048 [2024-09-29 16:45:29.349627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.048 qpair failed and we were unable to recover it. 
00:37:29.048 [2024-09-29 16:45:29.349803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.048 [2024-09-29 16:45:29.349838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.048 qpair failed and we were unable to recover it. 00:37:29.048 [2024-09-29 16:45:29.349982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.048 [2024-09-29 16:45:29.350016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.048 qpair failed and we were unable to recover it. 00:37:29.048 [2024-09-29 16:45:29.350265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.048 [2024-09-29 16:45:29.350322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.048 qpair failed and we were unable to recover it. 00:37:29.048 [2024-09-29 16:45:29.350475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.048 [2024-09-29 16:45:29.350512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.048 qpair failed and we were unable to recover it. 00:37:29.048 [2024-09-29 16:45:29.350691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.048 [2024-09-29 16:45:29.350743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.048 qpair failed and we were unable to recover it. 
00:37:29.048 [2024-09-29 16:45:29.350870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.048 [2024-09-29 16:45:29.350903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.048 qpair failed and we were unable to recover it. 00:37:29.048 [2024-09-29 16:45:29.351045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.048 [2024-09-29 16:45:29.351078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.048 qpair failed and we were unable to recover it. 00:37:29.048 [2024-09-29 16:45:29.351314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.048 [2024-09-29 16:45:29.351352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.048 qpair failed and we were unable to recover it. 00:37:29.048 [2024-09-29 16:45:29.351533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.048 [2024-09-29 16:45:29.351570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.048 qpair failed and we were unable to recover it. 00:37:29.048 [2024-09-29 16:45:29.351739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.048 [2024-09-29 16:45:29.351774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.048 qpair failed and we were unable to recover it. 
00:37:29.048 [2024-09-29 16:45:29.351936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.048 [2024-09-29 16:45:29.351983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.048 qpair failed and we were unable to recover it. 00:37:29.048 [2024-09-29 16:45:29.352104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.048 [2024-09-29 16:45:29.352140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.048 qpair failed and we were unable to recover it. 00:37:29.048 [2024-09-29 16:45:29.352374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.048 [2024-09-29 16:45:29.352442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.048 qpair failed and we were unable to recover it. 00:37:29.048 [2024-09-29 16:45:29.352588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.048 [2024-09-29 16:45:29.352623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.048 qpair failed and we were unable to recover it. 00:37:29.048 [2024-09-29 16:45:29.352793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.048 [2024-09-29 16:45:29.352841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.048 qpair failed and we were unable to recover it. 
00:37:29.048 [2024-09-29 16:45:29.353002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.048 [2024-09-29 16:45:29.353050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.048 qpair failed and we were unable to recover it. 00:37:29.048 [2024-09-29 16:45:29.353201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.048 [2024-09-29 16:45:29.353237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.048 qpair failed and we were unable to recover it. 00:37:29.048 [2024-09-29 16:45:29.353496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.048 [2024-09-29 16:45:29.353553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.048 qpair failed and we were unable to recover it. 00:37:29.048 [2024-09-29 16:45:29.353730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.048 [2024-09-29 16:45:29.353764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.048 qpair failed and we were unable to recover it. 00:37:29.048 [2024-09-29 16:45:29.353901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.048 [2024-09-29 16:45:29.353957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.048 qpair failed and we were unable to recover it. 
00:37:29.048 [2024-09-29 16:45:29.354142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.048 [2024-09-29 16:45:29.354181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.048 qpair failed and we were unable to recover it. 00:37:29.048 [2024-09-29 16:45:29.354303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.048 [2024-09-29 16:45:29.354340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.048 qpair failed and we were unable to recover it. 00:37:29.048 [2024-09-29 16:45:29.354473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.048 [2024-09-29 16:45:29.354510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.048 qpair failed and we were unable to recover it. 00:37:29.049 [2024-09-29 16:45:29.354651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.049 [2024-09-29 16:45:29.354696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.049 qpair failed and we were unable to recover it. 00:37:29.049 [2024-09-29 16:45:29.354856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.049 [2024-09-29 16:45:29.354904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.049 qpair failed and we were unable to recover it. 
00:37:29.049 [2024-09-29 16:45:29.355201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.049 [2024-09-29 16:45:29.355260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.049 qpair failed and we were unable to recover it. 00:37:29.049 [2024-09-29 16:45:29.355489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.049 [2024-09-29 16:45:29.355545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.049 qpair failed and we were unable to recover it. 00:37:29.049 [2024-09-29 16:45:29.355688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.049 [2024-09-29 16:45:29.355722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.049 qpair failed and we were unable to recover it. 00:37:29.049 [2024-09-29 16:45:29.355865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.049 [2024-09-29 16:45:29.355913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.049 qpair failed and we were unable to recover it. 00:37:29.049 [2024-09-29 16:45:29.356109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.049 [2024-09-29 16:45:29.356183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.049 qpair failed and we were unable to recover it. 
00:37:29.049 [2024-09-29 16:45:29.356455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.049 [2024-09-29 16:45:29.356494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.049 qpair failed and we were unable to recover it. 00:37:29.049 [2024-09-29 16:45:29.356642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.049 [2024-09-29 16:45:29.356686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.049 qpair failed and we were unable to recover it. 00:37:29.049 [2024-09-29 16:45:29.356850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.049 [2024-09-29 16:45:29.356884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.049 qpair failed and we were unable to recover it. 00:37:29.049 [2024-09-29 16:45:29.357075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.049 [2024-09-29 16:45:29.357155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.049 qpair failed and we were unable to recover it. 00:37:29.049 [2024-09-29 16:45:29.357357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.049 [2024-09-29 16:45:29.357415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.049 qpair failed and we were unable to recover it. 
00:37:29.049 [2024-09-29 16:45:29.357562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.049 [2024-09-29 16:45:29.357597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.049 qpair failed and we were unable to recover it. 00:37:29.049 [2024-09-29 16:45:29.357746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.049 [2024-09-29 16:45:29.357782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.049 qpair failed and we were unable to recover it. 00:37:29.049 [2024-09-29 16:45:29.357940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.049 [2024-09-29 16:45:29.357993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.049 qpair failed and we were unable to recover it. 00:37:29.049 [2024-09-29 16:45:29.358245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.049 [2024-09-29 16:45:29.358299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.049 qpair failed and we were unable to recover it. 00:37:29.049 [2024-09-29 16:45:29.358487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.049 [2024-09-29 16:45:29.358548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.049 qpair failed and we were unable to recover it. 
00:37:29.049 [2024-09-29 16:45:29.358687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.049 [2024-09-29 16:45:29.358721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.049 qpair failed and we were unable to recover it. 00:37:29.049 [2024-09-29 16:45:29.358888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.049 [2024-09-29 16:45:29.358922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.049 qpair failed and we were unable to recover it. 00:37:29.049 [2024-09-29 16:45:29.359038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.049 [2024-09-29 16:45:29.359073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.049 qpair failed and we were unable to recover it. 00:37:29.049 [2024-09-29 16:45:29.359244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.049 [2024-09-29 16:45:29.359297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.049 qpair failed and we were unable to recover it. 00:37:29.049 [2024-09-29 16:45:29.359487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.049 [2024-09-29 16:45:29.359538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.049 qpair failed and we were unable to recover it. 
00:37:29.049 [2024-09-29 16:45:29.359680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.049 [2024-09-29 16:45:29.359715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.049 qpair failed and we were unable to recover it. 00:37:29.049 [2024-09-29 16:45:29.359936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.049 [2024-09-29 16:45:29.359970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.049 qpair failed and we were unable to recover it. 00:37:29.049 [2024-09-29 16:45:29.360224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.049 [2024-09-29 16:45:29.360266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.049 qpair failed and we were unable to recover it. 00:37:29.049 [2024-09-29 16:45:29.360441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.049 [2024-09-29 16:45:29.360504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.049 qpair failed and we were unable to recover it. 00:37:29.049 [2024-09-29 16:45:29.360687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.049 [2024-09-29 16:45:29.360739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.049 qpair failed and we were unable to recover it. 
00:37:29.049 [2024-09-29 16:45:29.360885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.049 [2024-09-29 16:45:29.360918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.049 qpair failed and we were unable to recover it. 00:37:29.049 [2024-09-29 16:45:29.361063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.049 [2024-09-29 16:45:29.361097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.049 qpair failed and we were unable to recover it. 00:37:29.049 [2024-09-29 16:45:29.361260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.049 [2024-09-29 16:45:29.361303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.049 qpair failed and we were unable to recover it. 00:37:29.049 [2024-09-29 16:45:29.361532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.049 [2024-09-29 16:45:29.361566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.049 qpair failed and we were unable to recover it. 00:37:29.049 [2024-09-29 16:45:29.361722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.050 [2024-09-29 16:45:29.361776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.050 qpair failed and we were unable to recover it. 
00:37:29.050 [2024-09-29 16:45:29.361961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.050 [2024-09-29 16:45:29.362015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.050 qpair failed and we were unable to recover it. 00:37:29.050 [2024-09-29 16:45:29.362308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.050 [2024-09-29 16:45:29.362373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.050 qpair failed and we were unable to recover it. 00:37:29.050 [2024-09-29 16:45:29.362502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.050 [2024-09-29 16:45:29.362553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.050 qpair failed and we were unable to recover it. 00:37:29.050 [2024-09-29 16:45:29.362726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.050 [2024-09-29 16:45:29.362760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.050 qpair failed and we were unable to recover it. 00:37:29.050 [2024-09-29 16:45:29.362906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.050 [2024-09-29 16:45:29.362940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.050 qpair failed and we were unable to recover it. 
00:37:29.050 [2024-09-29 16:45:29.363133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.050 [2024-09-29 16:45:29.363170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.050 qpair failed and we were unable to recover it. 00:37:29.050 [2024-09-29 16:45:29.363358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.050 [2024-09-29 16:45:29.363413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.050 qpair failed and we were unable to recover it. 00:37:29.050 [2024-09-29 16:45:29.363564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.050 [2024-09-29 16:45:29.363598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.050 qpair failed and we were unable to recover it. 00:37:29.050 [2024-09-29 16:45:29.363762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.050 [2024-09-29 16:45:29.363816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.050 qpair failed and we were unable to recover it. 00:37:29.050 [2024-09-29 16:45:29.363979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.050 [2024-09-29 16:45:29.364030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.050 qpair failed and we were unable to recover it. 
00:37:29.050 [2024-09-29 16:45:29.364173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.050 [2024-09-29 16:45:29.364249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.050 qpair failed and we were unable to recover it. 00:37:29.050 [2024-09-29 16:45:29.364398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.050 [2024-09-29 16:45:29.364432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.050 qpair failed and we were unable to recover it. 00:37:29.050 [2024-09-29 16:45:29.364575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.050 [2024-09-29 16:45:29.364609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.050 qpair failed and we were unable to recover it. 00:37:29.050 [2024-09-29 16:45:29.364797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.050 [2024-09-29 16:45:29.364850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.050 qpair failed and we were unable to recover it. 00:37:29.050 [2024-09-29 16:45:29.365042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.050 [2024-09-29 16:45:29.365094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.050 qpair failed and we were unable to recover it. 
00:37:29.050 [2024-09-29 16:45:29.365342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.050 [2024-09-29 16:45:29.365382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.050 qpair failed and we were unable to recover it.
00:37:29.050 [2024-09-29 16:45:29.365546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.050 [2024-09-29 16:45:29.365583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.050 qpair failed and we were unable to recover it.
00:37:29.050 [2024-09-29 16:45:29.365759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.050 [2024-09-29 16:45:29.365793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.050 qpair failed and we were unable to recover it.
00:37:29.050 [2024-09-29 16:45:29.365958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.050 [2024-09-29 16:45:29.366010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.050 qpair failed and we were unable to recover it.
00:37:29.050 [2024-09-29 16:45:29.366195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.050 [2024-09-29 16:45:29.366229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.050 qpair failed and we were unable to recover it.
00:37:29.050 [2024-09-29 16:45:29.366478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.050 [2024-09-29 16:45:29.366549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.050 qpair failed and we were unable to recover it.
00:37:29.050 [2024-09-29 16:45:29.366731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.050 [2024-09-29 16:45:29.366768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.050 qpair failed and we were unable to recover it.
00:37:29.050 [2024-09-29 16:45:29.366925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.050 [2024-09-29 16:45:29.366974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.050 qpair failed and we were unable to recover it.
00:37:29.050 [2024-09-29 16:45:29.367239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.050 [2024-09-29 16:45:29.367298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.050 qpair failed and we were unable to recover it.
00:37:29.050 [2024-09-29 16:45:29.367556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.050 [2024-09-29 16:45:29.367611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.050 qpair failed and we were unable to recover it.
00:37:29.050 [2024-09-29 16:45:29.367760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.050 [2024-09-29 16:45:29.367797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.050 qpair failed and we were unable to recover it.
00:37:29.050 [2024-09-29 16:45:29.367969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.050 [2024-09-29 16:45:29.368026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.050 qpair failed and we were unable to recover it.
00:37:29.050 [2024-09-29 16:45:29.368223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.050 [2024-09-29 16:45:29.368273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.050 qpair failed and we were unable to recover it.
00:37:29.050 [2024-09-29 16:45:29.368408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.050 [2024-09-29 16:45:29.368481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.050 qpair failed and we were unable to recover it.
00:37:29.051 [2024-09-29 16:45:29.368591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.051 [2024-09-29 16:45:29.368625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.051 qpair failed and we were unable to recover it.
00:37:29.051 [2024-09-29 16:45:29.368819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.051 [2024-09-29 16:45:29.368872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.051 qpair failed and we were unable to recover it.
00:37:29.051 [2024-09-29 16:45:29.369036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.051 [2024-09-29 16:45:29.369076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.051 qpair failed and we were unable to recover it.
00:37:29.051 [2024-09-29 16:45:29.369266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.051 [2024-09-29 16:45:29.369326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.051 qpair failed and we were unable to recover it.
00:37:29.051 [2024-09-29 16:45:29.369496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.051 [2024-09-29 16:45:29.369530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.051 qpair failed and we were unable to recover it.
00:37:29.051 [2024-09-29 16:45:29.369654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.051 [2024-09-29 16:45:29.369711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.051 qpair failed and we were unable to recover it.
00:37:29.051 [2024-09-29 16:45:29.369850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.051 [2024-09-29 16:45:29.369918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.051 qpair failed and we were unable to recover it.
00:37:29.051 [2024-09-29 16:45:29.370179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.051 [2024-09-29 16:45:29.370236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.051 qpair failed and we were unable to recover it.
00:37:29.051 [2024-09-29 16:45:29.370392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.051 [2024-09-29 16:45:29.370437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.051 qpair failed and we were unable to recover it.
00:37:29.051 [2024-09-29 16:45:29.370572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.051 [2024-09-29 16:45:29.370609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.051 qpair failed and we were unable to recover it.
00:37:29.051 [2024-09-29 16:45:29.370782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.051 [2024-09-29 16:45:29.370817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.051 qpair failed and we were unable to recover it.
00:37:29.051 [2024-09-29 16:45:29.370937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.051 [2024-09-29 16:45:29.370972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.051 qpair failed and we were unable to recover it.
00:37:29.051 [2024-09-29 16:45:29.371136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.051 [2024-09-29 16:45:29.371174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.051 qpair failed and we were unable to recover it.
00:37:29.051 [2024-09-29 16:45:29.371386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.051 [2024-09-29 16:45:29.371439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.051 qpair failed and we were unable to recover it.
00:37:29.051 [2024-09-29 16:45:29.371583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.051 [2024-09-29 16:45:29.371617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.051 qpair failed and we were unable to recover it.
00:37:29.051 [2024-09-29 16:45:29.371752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.051 [2024-09-29 16:45:29.371799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.051 qpair failed and we were unable to recover it.
00:37:29.051 [2024-09-29 16:45:29.371996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.051 [2024-09-29 16:45:29.372036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.051 qpair failed and we were unable to recover it.
00:37:29.051 [2024-09-29 16:45:29.372187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.051 [2024-09-29 16:45:29.372240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.051 qpair failed and we were unable to recover it.
00:37:29.051 [2024-09-29 16:45:29.372434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.051 [2024-09-29 16:45:29.372473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.051 qpair failed and we were unable to recover it.
00:37:29.051 [2024-09-29 16:45:29.372636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.051 [2024-09-29 16:45:29.372683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.051 qpair failed and we were unable to recover it.
00:37:29.051 [2024-09-29 16:45:29.372830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.051 [2024-09-29 16:45:29.372865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.051 qpair failed and we were unable to recover it.
00:37:29.051 [2024-09-29 16:45:29.373045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.051 [2024-09-29 16:45:29.373112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.051 qpair failed and we were unable to recover it.
00:37:29.051 [2024-09-29 16:45:29.373258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.051 [2024-09-29 16:45:29.373299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.051 qpair failed and we were unable to recover it.
00:37:29.051 [2024-09-29 16:45:29.373489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.051 [2024-09-29 16:45:29.373524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.051 qpair failed and we were unable to recover it.
00:37:29.051 [2024-09-29 16:45:29.373677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.051 [2024-09-29 16:45:29.373716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.051 qpair failed and we were unable to recover it.
00:37:29.051 [2024-09-29 16:45:29.373852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.051 [2024-09-29 16:45:29.373899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.051 qpair failed and we were unable to recover it.
00:37:29.051 [2024-09-29 16:45:29.374079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.051 [2024-09-29 16:45:29.374115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.051 qpair failed and we were unable to recover it.
00:37:29.051 [2024-09-29 16:45:29.374260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.051 [2024-09-29 16:45:29.374293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.051 qpair failed and we were unable to recover it.
00:37:29.051 [2024-09-29 16:45:29.374439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.051 [2024-09-29 16:45:29.374472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.051 qpair failed and we were unable to recover it.
00:37:29.051 [2024-09-29 16:45:29.374626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.051 [2024-09-29 16:45:29.374663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.051 qpair failed and we were unable to recover it.
00:37:29.051 [2024-09-29 16:45:29.374836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.051 [2024-09-29 16:45:29.374870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.051 qpair failed and we were unable to recover it.
00:37:29.051 [2024-09-29 16:45:29.375030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.051 [2024-09-29 16:45:29.375082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.051 qpair failed and we were unable to recover it.
00:37:29.051 [2024-09-29 16:45:29.375342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.051 [2024-09-29 16:45:29.375409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.051 qpair failed and we were unable to recover it.
00:37:29.052 [2024-09-29 16:45:29.375571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.052 [2024-09-29 16:45:29.375619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.052 qpair failed and we were unable to recover it.
00:37:29.052 [2024-09-29 16:45:29.375759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.052 [2024-09-29 16:45:29.375795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.052 qpair failed and we were unable to recover it.
00:37:29.052 [2024-09-29 16:45:29.375952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.052 [2024-09-29 16:45:29.376004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.052 qpair failed and we were unable to recover it.
00:37:29.052 [2024-09-29 16:45:29.376288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.052 [2024-09-29 16:45:29.376344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.052 qpair failed and we were unable to recover it.
00:37:29.052 [2024-09-29 16:45:29.376530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.052 [2024-09-29 16:45:29.376563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.052 qpair failed and we were unable to recover it.
00:37:29.052 [2024-09-29 16:45:29.376733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.052 [2024-09-29 16:45:29.376768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.052 qpair failed and we were unable to recover it.
00:37:29.052 [2024-09-29 16:45:29.376891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.052 [2024-09-29 16:45:29.376928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.052 qpair failed and we were unable to recover it.
00:37:29.052 [2024-09-29 16:45:29.377113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.052 [2024-09-29 16:45:29.377150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.052 qpair failed and we were unable to recover it.
00:37:29.052 [2024-09-29 16:45:29.377310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.052 [2024-09-29 16:45:29.377348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.052 qpair failed and we were unable to recover it.
00:37:29.052 [2024-09-29 16:45:29.377478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.052 [2024-09-29 16:45:29.377516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.052 qpair failed and we were unable to recover it.
00:37:29.052 [2024-09-29 16:45:29.377703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.052 [2024-09-29 16:45:29.377738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.052 qpair failed and we were unable to recover it.
00:37:29.052 [2024-09-29 16:45:29.377879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.052 [2024-09-29 16:45:29.377913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.052 qpair failed and we were unable to recover it.
00:37:29.052 [2024-09-29 16:45:29.378154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.052 [2024-09-29 16:45:29.378211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.052 qpair failed and we were unable to recover it.
00:37:29.052 [2024-09-29 16:45:29.378381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.052 [2024-09-29 16:45:29.378415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.052 qpair failed and we were unable to recover it.
00:37:29.052 [2024-09-29 16:45:29.378594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.052 [2024-09-29 16:45:29.378630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.052 qpair failed and we were unable to recover it.
00:37:29.052 [2024-09-29 16:45:29.378786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.052 [2024-09-29 16:45:29.378824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.052 qpair failed and we were unable to recover it.
00:37:29.052 [2024-09-29 16:45:29.378988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.052 [2024-09-29 16:45:29.379030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.052 qpair failed and we were unable to recover it.
00:37:29.052 [2024-09-29 16:45:29.379184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.052 [2024-09-29 16:45:29.379254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.052 qpair failed and we were unable to recover it.
00:37:29.052 [2024-09-29 16:45:29.379425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.052 [2024-09-29 16:45:29.379462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.052 qpair failed and we were unable to recover it.
00:37:29.052 [2024-09-29 16:45:29.379617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.052 [2024-09-29 16:45:29.379654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.052 qpair failed and we were unable to recover it.
00:37:29.052 [2024-09-29 16:45:29.379844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.052 [2024-09-29 16:45:29.379892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.052 qpair failed and we were unable to recover it.
00:37:29.052 [2024-09-29 16:45:29.380065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.052 [2024-09-29 16:45:29.380113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.052 qpair failed and we were unable to recover it.
00:37:29.052 [2024-09-29 16:45:29.380343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.052 [2024-09-29 16:45:29.380400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.052 qpair failed and we were unable to recover it.
00:37:29.052 [2024-09-29 16:45:29.380533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.052 [2024-09-29 16:45:29.380586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.052 qpair failed and we were unable to recover it.
00:37:29.052 [2024-09-29 16:45:29.380738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.052 [2024-09-29 16:45:29.380773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.052 qpair failed and we were unable to recover it.
00:37:29.052 [2024-09-29 16:45:29.380906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.052 [2024-09-29 16:45:29.380959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.052 qpair failed and we were unable to recover it.
00:37:29.052 [2024-09-29 16:45:29.381119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.052 [2024-09-29 16:45:29.381174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.052 qpair failed and we were unable to recover it.
00:37:29.052 [2024-09-29 16:45:29.381314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.052 [2024-09-29 16:45:29.381347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.052 qpair failed and we were unable to recover it.
00:37:29.052 [2024-09-29 16:45:29.381514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.052 [2024-09-29 16:45:29.381562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.052 qpair failed and we were unable to recover it.
00:37:29.052 [2024-09-29 16:45:29.381709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.052 [2024-09-29 16:45:29.381756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.052 qpair failed and we were unable to recover it.
00:37:29.052 [2024-09-29 16:45:29.381919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.052 [2024-09-29 16:45:29.381967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.052 qpair failed and we were unable to recover it.
00:37:29.052 [2024-09-29 16:45:29.382121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.053 [2024-09-29 16:45:29.382157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.053 qpair failed and we were unable to recover it.
00:37:29.053 [2024-09-29 16:45:29.382461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.053 [2024-09-29 16:45:29.382520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.053 qpair failed and we were unable to recover it.
00:37:29.053 [2024-09-29 16:45:29.382680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.053 [2024-09-29 16:45:29.382731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.053 qpair failed and we were unable to recover it.
00:37:29.053 [2024-09-29 16:45:29.382901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.053 [2024-09-29 16:45:29.382938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.053 qpair failed and we were unable to recover it.
00:37:29.053 [2024-09-29 16:45:29.383101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.053 [2024-09-29 16:45:29.383139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.053 qpair failed and we were unable to recover it.
00:37:29.053 [2024-09-29 16:45:29.383272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.053 [2024-09-29 16:45:29.383309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.053 qpair failed and we were unable to recover it.
00:37:29.053 [2024-09-29 16:45:29.383506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.053 [2024-09-29 16:45:29.383539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.053 qpair failed and we were unable to recover it.
00:37:29.053 [2024-09-29 16:45:29.383647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.053 [2024-09-29 16:45:29.383687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.053 qpair failed and we were unable to recover it.
00:37:29.053 [2024-09-29 16:45:29.383803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.053 [2024-09-29 16:45:29.383837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.053 qpair failed and we were unable to recover it.
00:37:29.053 [2024-09-29 16:45:29.383988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.053 [2024-09-29 16:45:29.384025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.053 qpair failed and we were unable to recover it.
00:37:29.053 [2024-09-29 16:45:29.384175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.053 [2024-09-29 16:45:29.384212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.053 qpair failed and we were unable to recover it.
00:37:29.053 [2024-09-29 16:45:29.384344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.053 [2024-09-29 16:45:29.384383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.053 qpair failed and we were unable to recover it.
00:37:29.053 [2024-09-29 16:45:29.384555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.053 [2024-09-29 16:45:29.384591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.053 qpair failed and we were unable to recover it.
00:37:29.053 [2024-09-29 16:45:29.384783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.053 [2024-09-29 16:45:29.384831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.053 qpair failed and we were unable to recover it.
00:37:29.053 [2024-09-29 16:45:29.384970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.053 [2024-09-29 16:45:29.385018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.053 qpair failed and we were unable to recover it.
00:37:29.053 [2024-09-29 16:45:29.385143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.053 [2024-09-29 16:45:29.385178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.053 qpair failed and we were unable to recover it.
00:37:29.053 [2024-09-29 16:45:29.385477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.053 [2024-09-29 16:45:29.385533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.053 qpair failed and we were unable to recover it.
00:37:29.053 [2024-09-29 16:45:29.385712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.053 [2024-09-29 16:45:29.385747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.053 qpair failed and we were unable to recover it.
00:37:29.053 [2024-09-29 16:45:29.385889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.053 [2024-09-29 16:45:29.385923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.053 qpair failed and we were unable to recover it.
00:37:29.053 [2024-09-29 16:45:29.386207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.053 [2024-09-29 16:45:29.386267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.053 qpair failed and we were unable to recover it.
00:37:29.053 [2024-09-29 16:45:29.386426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.053 [2024-09-29 16:45:29.386492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.053 qpair failed and we were unable to recover it.
00:37:29.053 [2024-09-29 16:45:29.386642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.053 [2024-09-29 16:45:29.386704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.053 qpair failed and we were unable to recover it.
00:37:29.053 [2024-09-29 16:45:29.386855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.053 [2024-09-29 16:45:29.386888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.053 qpair failed and we were unable to recover it.
00:37:29.053 [2024-09-29 16:45:29.387079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.053 [2024-09-29 16:45:29.387115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.053 qpair failed and we were unable to recover it.
00:37:29.053 [2024-09-29 16:45:29.387355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.053 [2024-09-29 16:45:29.387419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.053 qpair failed and we were unable to recover it.
00:37:29.053 [2024-09-29 16:45:29.387591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.053 [2024-09-29 16:45:29.387624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.053 qpair failed and we were unable to recover it.
00:37:29.053 [2024-09-29 16:45:29.387747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.053 [2024-09-29 16:45:29.387781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.053 qpair failed and we were unable to recover it.
00:37:29.053 [2024-09-29 16:45:29.387964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.053 [2024-09-29 16:45:29.388001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.053 qpair failed and we were unable to recover it.
00:37:29.053 [2024-09-29 16:45:29.388124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.053 [2024-09-29 16:45:29.388161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.053 qpair failed and we were unable to recover it.
00:37:29.053 [2024-09-29 16:45:29.388404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.053 [2024-09-29 16:45:29.388463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.053 qpair failed and we were unable to recover it.
00:37:29.053 [2024-09-29 16:45:29.388600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.053 [2024-09-29 16:45:29.388637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.053 qpair failed and we were unable to recover it.
00:37:29.053 [2024-09-29 16:45:29.388781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.053 [2024-09-29 16:45:29.388814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.053 qpair failed and we were unable to recover it.
00:37:29.053 [2024-09-29 16:45:29.388972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.053 [2024-09-29 16:45:29.389010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.053 qpair failed and we were unable to recover it.
00:37:29.053 [2024-09-29 16:45:29.389190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.053 [2024-09-29 16:45:29.389227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.053 qpair failed and we were unable to recover it.
00:37:29.054 [2024-09-29 16:45:29.389406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.054 [2024-09-29 16:45:29.389442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.054 qpair failed and we were unable to recover it.
00:37:29.054 [2024-09-29 16:45:29.389589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.054 [2024-09-29 16:45:29.389626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.054 qpair failed and we were unable to recover it.
00:37:29.054 [2024-09-29 16:45:29.389797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.054 [2024-09-29 16:45:29.389831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.054 qpair failed and we were unable to recover it.
00:37:29.054 [2024-09-29 16:45:29.389963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.054 [2024-09-29 16:45:29.390011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.054 qpair failed and we were unable to recover it. 00:37:29.054 [2024-09-29 16:45:29.390207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.054 [2024-09-29 16:45:29.390260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.054 qpair failed and we were unable to recover it. 00:37:29.054 [2024-09-29 16:45:29.390430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.054 [2024-09-29 16:45:29.390469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.054 qpair failed and we were unable to recover it. 00:37:29.054 [2024-09-29 16:45:29.390600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.054 [2024-09-29 16:45:29.390638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.054 qpair failed and we were unable to recover it. 00:37:29.054 [2024-09-29 16:45:29.390810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.054 [2024-09-29 16:45:29.390844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.054 qpair failed and we were unable to recover it. 
00:37:29.054 [2024-09-29 16:45:29.390961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.054 [2024-09-29 16:45:29.390995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.054 qpair failed and we were unable to recover it. 00:37:29.054 [2024-09-29 16:45:29.391145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.054 [2024-09-29 16:45:29.391178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.054 qpair failed and we were unable to recover it. 00:37:29.054 [2024-09-29 16:45:29.391330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.054 [2024-09-29 16:45:29.391427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.054 qpair failed and we were unable to recover it. 00:37:29.054 [2024-09-29 16:45:29.391586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.054 [2024-09-29 16:45:29.391623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.054 qpair failed and we were unable to recover it. 00:37:29.054 [2024-09-29 16:45:29.391764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.054 [2024-09-29 16:45:29.391799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.054 qpair failed and we were unable to recover it. 
00:37:29.054 [2024-09-29 16:45:29.391924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.054 [2024-09-29 16:45:29.391957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.054 qpair failed and we were unable to recover it. 00:37:29.054 [2024-09-29 16:45:29.392094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.054 [2024-09-29 16:45:29.392146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.054 qpair failed and we were unable to recover it. 00:37:29.054 [2024-09-29 16:45:29.392298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.054 [2024-09-29 16:45:29.392336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.054 qpair failed and we were unable to recover it. 00:37:29.054 [2024-09-29 16:45:29.392522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.054 [2024-09-29 16:45:29.392559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.054 qpair failed and we were unable to recover it. 00:37:29.054 [2024-09-29 16:45:29.392749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.054 [2024-09-29 16:45:29.392797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.054 qpair failed and we were unable to recover it. 
00:37:29.054 [2024-09-29 16:45:29.392925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.054 [2024-09-29 16:45:29.392977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.054 qpair failed and we were unable to recover it. 00:37:29.054 [2024-09-29 16:45:29.393195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.054 [2024-09-29 16:45:29.393233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.054 qpair failed and we were unable to recover it. 00:37:29.054 [2024-09-29 16:45:29.393523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.054 [2024-09-29 16:45:29.393582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.054 qpair failed and we were unable to recover it. 00:37:29.054 [2024-09-29 16:45:29.393779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.054 [2024-09-29 16:45:29.393813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.054 qpair failed and we were unable to recover it. 00:37:29.054 [2024-09-29 16:45:29.393952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.054 [2024-09-29 16:45:29.394000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.054 qpair failed and we were unable to recover it. 
00:37:29.054 [2024-09-29 16:45:29.394228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.054 [2024-09-29 16:45:29.394282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.054 qpair failed and we were unable to recover it. 00:37:29.054 [2024-09-29 16:45:29.394482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.054 [2024-09-29 16:45:29.394540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.054 qpair failed and we were unable to recover it. 00:37:29.054 [2024-09-29 16:45:29.394687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.054 [2024-09-29 16:45:29.394723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.054 qpair failed and we were unable to recover it. 00:37:29.054 [2024-09-29 16:45:29.394910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.054 [2024-09-29 16:45:29.394963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.054 qpair failed and we were unable to recover it. 00:37:29.054 [2024-09-29 16:45:29.395116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.054 [2024-09-29 16:45:29.395168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.054 qpair failed and we were unable to recover it. 
00:37:29.054 [2024-09-29 16:45:29.395392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.054 [2024-09-29 16:45:29.395429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.054 qpair failed and we were unable to recover it. 00:37:29.054 [2024-09-29 16:45:29.395569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.054 [2024-09-29 16:45:29.395603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.054 qpair failed and we were unable to recover it. 00:37:29.054 [2024-09-29 16:45:29.395743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.054 [2024-09-29 16:45:29.395807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.054 qpair failed and we were unable to recover it. 00:37:29.055 [2024-09-29 16:45:29.395981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.055 [2024-09-29 16:45:29.396020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.055 qpair failed and we were unable to recover it. 00:37:29.055 [2024-09-29 16:45:29.396299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.055 [2024-09-29 16:45:29.396356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.055 qpair failed and we were unable to recover it. 
00:37:29.055 [2024-09-29 16:45:29.396632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.055 [2024-09-29 16:45:29.396695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.055 qpair failed and we were unable to recover it. 00:37:29.055 [2024-09-29 16:45:29.396829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.055 [2024-09-29 16:45:29.396862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.055 qpair failed and we were unable to recover it. 00:37:29.055 [2024-09-29 16:45:29.397004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.055 [2024-09-29 16:45:29.397036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.055 qpair failed and we were unable to recover it. 00:37:29.055 [2024-09-29 16:45:29.397250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.055 [2024-09-29 16:45:29.397310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.055 qpair failed and we were unable to recover it. 00:37:29.055 [2024-09-29 16:45:29.397549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.055 [2024-09-29 16:45:29.397587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.055 qpair failed and we were unable to recover it. 
00:37:29.055 [2024-09-29 16:45:29.397758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.055 [2024-09-29 16:45:29.397792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.055 qpair failed and we were unable to recover it. 00:37:29.055 [2024-09-29 16:45:29.397910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.055 [2024-09-29 16:45:29.397943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.055 qpair failed and we were unable to recover it. 00:37:29.055 [2024-09-29 16:45:29.398190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.055 [2024-09-29 16:45:29.398256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.055 qpair failed and we were unable to recover it. 00:37:29.055 [2024-09-29 16:45:29.398478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.055 [2024-09-29 16:45:29.398556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.055 qpair failed and we were unable to recover it. 00:37:29.055 [2024-09-29 16:45:29.398698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.055 [2024-09-29 16:45:29.398753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.055 qpair failed and we were unable to recover it. 
00:37:29.055 [2024-09-29 16:45:29.398898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.055 [2024-09-29 16:45:29.398932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.055 qpair failed and we were unable to recover it. 00:37:29.055 [2024-09-29 16:45:29.399140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.055 [2024-09-29 16:45:29.399198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.055 qpair failed and we were unable to recover it. 00:37:29.055 [2024-09-29 16:45:29.399325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.055 [2024-09-29 16:45:29.399362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.055 qpair failed and we were unable to recover it. 00:37:29.055 [2024-09-29 16:45:29.399520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.055 [2024-09-29 16:45:29.399558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.055 qpair failed and we were unable to recover it. 00:37:29.055 [2024-09-29 16:45:29.399707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.055 [2024-09-29 16:45:29.399755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.055 qpair failed and we were unable to recover it. 
00:37:29.055 [2024-09-29 16:45:29.399877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.055 [2024-09-29 16:45:29.399913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.055 qpair failed and we were unable to recover it. 00:37:29.055 [2024-09-29 16:45:29.400082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.055 [2024-09-29 16:45:29.400135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.055 qpair failed and we were unable to recover it. 00:37:29.055 [2024-09-29 16:45:29.400293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.055 [2024-09-29 16:45:29.400345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.055 qpair failed and we were unable to recover it. 00:37:29.055 [2024-09-29 16:45:29.400648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.055 [2024-09-29 16:45:29.400749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.055 qpair failed and we were unable to recover it. 00:37:29.055 [2024-09-29 16:45:29.400907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.055 [2024-09-29 16:45:29.400944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.055 qpair failed and we were unable to recover it. 
00:37:29.055 [2024-09-29 16:45:29.401234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.055 [2024-09-29 16:45:29.401313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.055 qpair failed and we were unable to recover it. 00:37:29.055 [2024-09-29 16:45:29.401547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.055 [2024-09-29 16:45:29.401604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.055 qpair failed and we were unable to recover it. 00:37:29.055 [2024-09-29 16:45:29.401785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.055 [2024-09-29 16:45:29.401820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.055 qpair failed and we were unable to recover it. 00:37:29.055 [2024-09-29 16:45:29.401932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.055 [2024-09-29 16:45:29.401983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.055 qpair failed and we were unable to recover it. 00:37:29.055 [2024-09-29 16:45:29.402198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.055 [2024-09-29 16:45:29.402234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.055 qpair failed and we were unable to recover it. 
00:37:29.055 [2024-09-29 16:45:29.402468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.055 [2024-09-29 16:45:29.402525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.055 qpair failed and we were unable to recover it. 00:37:29.055 [2024-09-29 16:45:29.402699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.055 [2024-09-29 16:45:29.402750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.055 qpair failed and we were unable to recover it. 00:37:29.055 [2024-09-29 16:45:29.402889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.055 [2024-09-29 16:45:29.402923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.055 qpair failed and we were unable to recover it. 00:37:29.055 [2024-09-29 16:45:29.403092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.055 [2024-09-29 16:45:29.403129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.055 qpair failed and we were unable to recover it. 00:37:29.055 [2024-09-29 16:45:29.403255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.056 [2024-09-29 16:45:29.403292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.056 qpair failed and we were unable to recover it. 
00:37:29.056 [2024-09-29 16:45:29.403454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.056 [2024-09-29 16:45:29.403493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.056 qpair failed and we were unable to recover it. 00:37:29.056 [2024-09-29 16:45:29.403687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.056 [2024-09-29 16:45:29.403721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.056 qpair failed and we were unable to recover it. 00:37:29.056 [2024-09-29 16:45:29.403860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.056 [2024-09-29 16:45:29.403907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.056 qpair failed and we were unable to recover it. 00:37:29.056 [2024-09-29 16:45:29.404047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.056 [2024-09-29 16:45:29.404087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.056 qpair failed and we were unable to recover it. 00:37:29.056 [2024-09-29 16:45:29.404375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.056 [2024-09-29 16:45:29.404434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.056 qpair failed and we were unable to recover it. 
00:37:29.056 [2024-09-29 16:45:29.404618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.056 [2024-09-29 16:45:29.404656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.056 qpair failed and we were unable to recover it. 00:37:29.056 [2024-09-29 16:45:29.404858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.056 [2024-09-29 16:45:29.404902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.056 qpair failed and we were unable to recover it. 00:37:29.056 [2024-09-29 16:45:29.405071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.056 [2024-09-29 16:45:29.405114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.056 qpair failed and we were unable to recover it. 00:37:29.056 [2024-09-29 16:45:29.405309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.056 [2024-09-29 16:45:29.405344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.056 qpair failed and we were unable to recover it. 00:37:29.056 [2024-09-29 16:45:29.405570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.056 [2024-09-29 16:45:29.405647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.056 qpair failed and we were unable to recover it. 
00:37:29.056 [2024-09-29 16:45:29.405819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.056 [2024-09-29 16:45:29.405867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.056 qpair failed and we were unable to recover it. 00:37:29.056 [2024-09-29 16:45:29.406017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.056 [2024-09-29 16:45:29.406067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.056 qpair failed and we were unable to recover it. 00:37:29.056 [2024-09-29 16:45:29.406246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.056 [2024-09-29 16:45:29.406312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.056 qpair failed and we were unable to recover it. 00:37:29.056 [2024-09-29 16:45:29.406440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.056 [2024-09-29 16:45:29.406477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.056 qpair failed and we were unable to recover it. 00:37:29.056 [2024-09-29 16:45:29.406652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.056 [2024-09-29 16:45:29.406696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.056 qpair failed and we were unable to recover it. 
00:37:29.056 [2024-09-29 16:45:29.406835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.056 [2024-09-29 16:45:29.406870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.056 qpair failed and we were unable to recover it.
00:37:29.056 [2024-09-29 16:45:29.407062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.056 [2024-09-29 16:45:29.407123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.056 qpair failed and we were unable to recover it.
00:37:29.056 [2024-09-29 16:45:29.407339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.056 [2024-09-29 16:45:29.407397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.056 qpair failed and we were unable to recover it.
00:37:29.056 [2024-09-29 16:45:29.407529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.056 [2024-09-29 16:45:29.407568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.056 qpair failed and we were unable to recover it.
00:37:29.056 [2024-09-29 16:45:29.407739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.056 [2024-09-29 16:45:29.407774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.056 qpair failed and we were unable to recover it.
00:37:29.056 [2024-09-29 16:45:29.407939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.056 [2024-09-29 16:45:29.407977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.056 qpair failed and we were unable to recover it.
00:37:29.056 [2024-09-29 16:45:29.408137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.056 [2024-09-29 16:45:29.408175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.056 qpair failed and we were unable to recover it.
00:37:29.057 [2024-09-29 16:45:29.408359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.057 [2024-09-29 16:45:29.408396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.057 qpair failed and we were unable to recover it.
00:37:29.057 [2024-09-29 16:45:29.408565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.057 [2024-09-29 16:45:29.408600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.057 qpair failed and we were unable to recover it.
00:37:29.057 [2024-09-29 16:45:29.408745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.057 [2024-09-29 16:45:29.408780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.057 qpair failed and we were unable to recover it.
00:37:29.057 [2024-09-29 16:45:29.408942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.057 [2024-09-29 16:45:29.408995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.057 qpair failed and we were unable to recover it.
00:37:29.057 [2024-09-29 16:45:29.409122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.057 [2024-09-29 16:45:29.409160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.057 qpair failed and we were unable to recover it.
00:37:29.057 [2024-09-29 16:45:29.409356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.057 [2024-09-29 16:45:29.409446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.057 qpair failed and we were unable to recover it.
00:37:29.057 [2024-09-29 16:45:29.409578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.057 [2024-09-29 16:45:29.409616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.057 qpair failed and we were unable to recover it.
00:37:29.057 [2024-09-29 16:45:29.409835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.057 [2024-09-29 16:45:29.409870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.057 qpair failed and we were unable to recover it.
00:37:29.057 [2024-09-29 16:45:29.410107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.057 [2024-09-29 16:45:29.410165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.057 qpair failed and we were unable to recover it.
00:37:29.057 [2024-09-29 16:45:29.410370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.057 [2024-09-29 16:45:29.410429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.057 qpair failed and we were unable to recover it.
00:37:29.057 [2024-09-29 16:45:29.410613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.057 [2024-09-29 16:45:29.410650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.057 qpair failed and we were unable to recover it.
00:37:29.057 [2024-09-29 16:45:29.410829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.057 [2024-09-29 16:45:29.410864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.057 qpair failed and we were unable to recover it.
00:37:29.057 [2024-09-29 16:45:29.411038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.057 [2024-09-29 16:45:29.411103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.057 qpair failed and we were unable to recover it.
00:37:29.057 [2024-09-29 16:45:29.411397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.057 [2024-09-29 16:45:29.411459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.057 qpair failed and we were unable to recover it.
00:37:29.057 [2024-09-29 16:45:29.411649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.057 [2024-09-29 16:45:29.411697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.057 qpair failed and we were unable to recover it.
00:37:29.057 [2024-09-29 16:45:29.411894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.057 [2024-09-29 16:45:29.411931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.057 qpair failed and we were unable to recover it.
00:37:29.057 [2024-09-29 16:45:29.412061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.057 [2024-09-29 16:45:29.412097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.057 qpair failed and we were unable to recover it.
00:37:29.057 [2024-09-29 16:45:29.412235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.057 [2024-09-29 16:45:29.412272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.057 qpair failed and we were unable to recover it.
00:37:29.057 [2024-09-29 16:45:29.412493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.057 [2024-09-29 16:45:29.412530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.057 qpair failed and we were unable to recover it.
00:37:29.057 [2024-09-29 16:45:29.412693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.057 [2024-09-29 16:45:29.412745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.057 qpair failed and we were unable to recover it.
00:37:29.057 [2024-09-29 16:45:29.412907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.057 [2024-09-29 16:45:29.412955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.057 qpair failed and we were unable to recover it.
00:37:29.057 [2024-09-29 16:45:29.413100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.057 [2024-09-29 16:45:29.413172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.057 qpair failed and we were unable to recover it.
00:37:29.057 [2024-09-29 16:45:29.413394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.057 [2024-09-29 16:45:29.413449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.057 qpair failed and we were unable to recover it.
00:37:29.057 [2024-09-29 16:45:29.413594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.057 [2024-09-29 16:45:29.413629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.057 qpair failed and we were unable to recover it.
00:37:29.057 [2024-09-29 16:45:29.413777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.057 [2024-09-29 16:45:29.413830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.057 qpair failed and we were unable to recover it.
00:37:29.057 [2024-09-29 16:45:29.413968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.057 [2024-09-29 16:45:29.414027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.057 qpair failed and we were unable to recover it.
00:37:29.057 [2024-09-29 16:45:29.414218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.057 [2024-09-29 16:45:29.414269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.057 qpair failed and we were unable to recover it.
00:37:29.057 [2024-09-29 16:45:29.414404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.057 [2024-09-29 16:45:29.414458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.057 qpair failed and we were unable to recover it.
00:37:29.057 [2024-09-29 16:45:29.414563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.057 [2024-09-29 16:45:29.414597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.057 qpair failed and we were unable to recover it.
00:37:29.057 [2024-09-29 16:45:29.414802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.057 [2024-09-29 16:45:29.414850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.057 qpair failed and we were unable to recover it.
00:37:29.057 [2024-09-29 16:45:29.414971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.057 [2024-09-29 16:45:29.415007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.057 qpair failed and we were unable to recover it.
00:37:29.057 [2024-09-29 16:45:29.415179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.057 [2024-09-29 16:45:29.415211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.057 qpair failed and we were unable to recover it.
00:37:29.057 [2024-09-29 16:45:29.415352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.057 [2024-09-29 16:45:29.415384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.058 qpair failed and we were unable to recover it.
00:37:29.058 [2024-09-29 16:45:29.415553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.058 [2024-09-29 16:45:29.415586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.058 qpair failed and we were unable to recover it.
00:37:29.058 [2024-09-29 16:45:29.415727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.058 [2024-09-29 16:45:29.415792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.058 qpair failed and we were unable to recover it.
00:37:29.058 [2024-09-29 16:45:29.415986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.058 [2024-09-29 16:45:29.416026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.058 qpair failed and we were unable to recover it.
00:37:29.058 [2024-09-29 16:45:29.416162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.058 [2024-09-29 16:45:29.416200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.058 qpair failed and we were unable to recover it.
00:37:29.058 [2024-09-29 16:45:29.416436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.058 [2024-09-29 16:45:29.416493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.058 qpair failed and we were unable to recover it.
00:37:29.058 [2024-09-29 16:45:29.416640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.058 [2024-09-29 16:45:29.416681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.058 qpair failed and we were unable to recover it.
00:37:29.058 [2024-09-29 16:45:29.416819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.058 [2024-09-29 16:45:29.416854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.058 qpair failed and we were unable to recover it.
00:37:29.058 [2024-09-29 16:45:29.417004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.058 [2024-09-29 16:45:29.417039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.058 qpair failed and we were unable to recover it.
00:37:29.058 [2024-09-29 16:45:29.417184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.058 [2024-09-29 16:45:29.417236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.058 qpair failed and we were unable to recover it.
00:37:29.058 [2024-09-29 16:45:29.417394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.058 [2024-09-29 16:45:29.417445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.058 qpair failed and we were unable to recover it.
00:37:29.058 [2024-09-29 16:45:29.417596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.058 [2024-09-29 16:45:29.417632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.058 qpair failed and we were unable to recover it.
00:37:29.058 [2024-09-29 16:45:29.417812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.058 [2024-09-29 16:45:29.417846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.058 qpair failed and we were unable to recover it.
00:37:29.058 [2024-09-29 16:45:29.418064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.058 [2024-09-29 16:45:29.418117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.058 qpair failed and we were unable to recover it.
00:37:29.058 [2024-09-29 16:45:29.418253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.058 [2024-09-29 16:45:29.418293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.058 qpair failed and we were unable to recover it.
00:37:29.058 [2024-09-29 16:45:29.418564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.058 [2024-09-29 16:45:29.418621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.058 qpair failed and we were unable to recover it.
00:37:29.058 [2024-09-29 16:45:29.418775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.058 [2024-09-29 16:45:29.418810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.058 qpair failed and we were unable to recover it.
00:37:29.058 [2024-09-29 16:45:29.418978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.058 [2024-09-29 16:45:29.419026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.058 qpair failed and we were unable to recover it.
00:37:29.058 [2024-09-29 16:45:29.419202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.058 [2024-09-29 16:45:29.419267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.058 qpair failed and we were unable to recover it.
00:37:29.058 [2024-09-29 16:45:29.419512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.058 [2024-09-29 16:45:29.419546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.058 qpair failed and we were unable to recover it.
00:37:29.058 [2024-09-29 16:45:29.419691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.058 [2024-09-29 16:45:29.419727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.058 qpair failed and we were unable to recover it.
00:37:29.058 [2024-09-29 16:45:29.419874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.058 [2024-09-29 16:45:29.419910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.058 qpair failed and we were unable to recover it.
00:37:29.058 [2024-09-29 16:45:29.420099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.058 [2024-09-29 16:45:29.420137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.058 qpair failed and we were unable to recover it.
00:37:29.058 [2024-09-29 16:45:29.420365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.058 [2024-09-29 16:45:29.420427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.058 qpair failed and we were unable to recover it.
00:37:29.058 [2024-09-29 16:45:29.420563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.058 [2024-09-29 16:45:29.420596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.058 qpair failed and we were unable to recover it.
00:37:29.058 [2024-09-29 16:45:29.420778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.058 [2024-09-29 16:45:29.420816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.058 qpair failed and we were unable to recover it.
00:37:29.058 [2024-09-29 16:45:29.420986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.058 [2024-09-29 16:45:29.421034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.058 qpair failed and we were unable to recover it.
00:37:29.058 [2024-09-29 16:45:29.421207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.058 [2024-09-29 16:45:29.421246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.058 qpair failed and we were unable to recover it.
00:37:29.058 [2024-09-29 16:45:29.421460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.058 [2024-09-29 16:45:29.421498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.058 qpair failed and we were unable to recover it.
00:37:29.058 [2024-09-29 16:45:29.421631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.058 [2024-09-29 16:45:29.421664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.058 qpair failed and we were unable to recover it.
00:37:29.058 [2024-09-29 16:45:29.421846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.058 [2024-09-29 16:45:29.421879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.058 qpair failed and we were unable to recover it.
00:37:29.058 [2024-09-29 16:45:29.422016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.058 [2024-09-29 16:45:29.422054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.058 qpair failed and we were unable to recover it.
00:37:29.058 [2024-09-29 16:45:29.422268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.058 [2024-09-29 16:45:29.422324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.058 qpair failed and we were unable to recover it.
00:37:29.058 [2024-09-29 16:45:29.422485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.058 [2024-09-29 16:45:29.422533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.058 qpair failed and we were unable to recover it.
00:37:29.059 [2024-09-29 16:45:29.422732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.059 [2024-09-29 16:45:29.422768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.059 qpair failed and we were unable to recover it.
00:37:29.059 [2024-09-29 16:45:29.422890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.059 [2024-09-29 16:45:29.422924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.059 qpair failed and we were unable to recover it.
00:37:29.059 [2024-09-29 16:45:29.423067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.059 [2024-09-29 16:45:29.423100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.059 qpair failed and we were unable to recover it.
00:37:29.059 [2024-09-29 16:45:29.423249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.059 [2024-09-29 16:45:29.423283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.059 qpair failed and we were unable to recover it.
00:37:29.059 [2024-09-29 16:45:29.423461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.059 [2024-09-29 16:45:29.423498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.059 qpair failed and we were unable to recover it.
00:37:29.059 [2024-09-29 16:45:29.423641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.059 [2024-09-29 16:45:29.423682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.059 qpair failed and we were unable to recover it.
00:37:29.059 [2024-09-29 16:45:29.423817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.059 [2024-09-29 16:45:29.423850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.059 qpair failed and we were unable to recover it.
00:37:29.059 [2024-09-29 16:45:29.423965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.059 [2024-09-29 16:45:29.423998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.059 qpair failed and we were unable to recover it.
00:37:29.059 [2024-09-29 16:45:29.424148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.059 [2024-09-29 16:45:29.424183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.059 qpair failed and we were unable to recover it.
00:37:29.059 [2024-09-29 16:45:29.424312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.059 [2024-09-29 16:45:29.424349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.059 qpair failed and we were unable to recover it.
00:37:29.059 [2024-09-29 16:45:29.424549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.059 [2024-09-29 16:45:29.424614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.059 qpair failed and we were unable to recover it.
00:37:29.059 [2024-09-29 16:45:29.424779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.059 [2024-09-29 16:45:29.424816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.059 qpair failed and we were unable to recover it.
00:37:29.059 [2024-09-29 16:45:29.424983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.059 [2024-09-29 16:45:29.425037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.059 qpair failed and we were unable to recover it.
00:37:29.059 [2024-09-29 16:45:29.425209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.059 [2024-09-29 16:45:29.425248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.059 qpair failed and we were unable to recover it.
00:37:29.059 [2024-09-29 16:45:29.425493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.059 [2024-09-29 16:45:29.425552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.059 qpair failed and we were unable to recover it.
00:37:29.059 [2024-09-29 16:45:29.425724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.059 [2024-09-29 16:45:29.425759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.059 qpair failed and we were unable to recover it.
00:37:29.059 [2024-09-29 16:45:29.425928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.059 [2024-09-29 16:45:29.425979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.059 qpair failed and we were unable to recover it.
00:37:29.059 [2024-09-29 16:45:29.426110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.059 [2024-09-29 16:45:29.426145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.059 qpair failed and we were unable to recover it.
00:37:29.059 [2024-09-29 16:45:29.426287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.059 [2024-09-29 16:45:29.426338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.059 qpair failed and we were unable to recover it.
00:37:29.059 [2024-09-29 16:45:29.426496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.059 [2024-09-29 16:45:29.426529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.059 qpair failed and we were unable to recover it.
00:37:29.059 [2024-09-29 16:45:29.426680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.059 [2024-09-29 16:45:29.426714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.059 qpair failed and we were unable to recover it.
00:37:29.059 [2024-09-29 16:45:29.426873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.059 [2024-09-29 16:45:29.426908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.059 qpair failed and we were unable to recover it.
00:37:29.059 [2024-09-29 16:45:29.427057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.059 [2024-09-29 16:45:29.427094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.059 qpair failed and we were unable to recover it.
00:37:29.059 [2024-09-29 16:45:29.427275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.059 [2024-09-29 16:45:29.427311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.059 qpair failed and we were unable to recover it.
00:37:29.059 [2024-09-29 16:45:29.427471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.059 [2024-09-29 16:45:29.427507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.059 qpair failed and we were unable to recover it.
00:37:29.059 [2024-09-29 16:45:29.427693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.059 [2024-09-29 16:45:29.427759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.059 qpair failed and we were unable to recover it.
00:37:29.059 [2024-09-29 16:45:29.427977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.059 [2024-09-29 16:45:29.428030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.059 qpair failed and we were unable to recover it.
00:37:29.059 [2024-09-29 16:45:29.428256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.059 [2024-09-29 16:45:29.428295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.059 qpair failed and we were unable to recover it.
00:37:29.059 [2024-09-29 16:45:29.428480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.059 [2024-09-29 16:45:29.428518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.059 qpair failed and we were unable to recover it.
00:37:29.059 [2024-09-29 16:45:29.428653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.059 [2024-09-29 16:45:29.428717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.059 qpair failed and we were unable to recover it.
00:37:29.059 [2024-09-29 16:45:29.428856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.059 [2024-09-29 16:45:29.428904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.059 qpair failed and we were unable to recover it.
00:37:29.059 [2024-09-29 16:45:29.429051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.059 [2024-09-29 16:45:29.429105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.059 qpair failed and we were unable to recover it.
00:37:29.059 [2024-09-29 16:45:29.429316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.059 [2024-09-29 16:45:29.429371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.059 qpair failed and we were unable to recover it.
00:37:29.059 [2024-09-29 16:45:29.429513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.060 [2024-09-29 16:45:29.429547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.060 qpair failed and we were unable to recover it.
00:37:29.060 [2024-09-29 16:45:29.429670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.060 [2024-09-29 16:45:29.429715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.060 qpair failed and we were unable to recover it.
00:37:29.060 [2024-09-29 16:45:29.429927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.060 [2024-09-29 16:45:29.429961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.060 qpair failed and we were unable to recover it.
00:37:29.060 [2024-09-29 16:45:29.430230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.060 [2024-09-29 16:45:29.430309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.060 qpair failed and we were unable to recover it.
00:37:29.060 [2024-09-29 16:45:29.430547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.060 [2024-09-29 16:45:29.430616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.060 qpair failed and we were unable to recover it.
00:37:29.060 [2024-09-29 16:45:29.430769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.060 [2024-09-29 16:45:29.430803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.060 qpair failed and we were unable to recover it.
00:37:29.060 [2024-09-29 16:45:29.430984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.060 [2024-09-29 16:45:29.431022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.060 qpair failed and we were unable to recover it.
00:37:29.060 [2024-09-29 16:45:29.431171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.060 [2024-09-29 16:45:29.431206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.060 qpair failed and we were unable to recover it.
00:37:29.060 [2024-09-29 16:45:29.431343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.060 [2024-09-29 16:45:29.431376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.060 qpair failed and we were unable to recover it.
00:37:29.060 [2024-09-29 16:45:29.431504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.060 [2024-09-29 16:45:29.431541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.060 qpair failed and we were unable to recover it. 00:37:29.060 [2024-09-29 16:45:29.431686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.060 [2024-09-29 16:45:29.431721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.060 qpair failed and we were unable to recover it. 00:37:29.060 [2024-09-29 16:45:29.431935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.060 [2024-09-29 16:45:29.431988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.060 qpair failed and we were unable to recover it. 00:37:29.060 [2024-09-29 16:45:29.432189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.060 [2024-09-29 16:45:29.432246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.060 qpair failed and we were unable to recover it. 00:37:29.060 [2024-09-29 16:45:29.432426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.060 [2024-09-29 16:45:29.432486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.060 qpair failed and we were unable to recover it. 
00:37:29.060 [2024-09-29 16:45:29.432618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.060 [2024-09-29 16:45:29.432652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.060 qpair failed and we were unable to recover it. 00:37:29.060 [2024-09-29 16:45:29.432800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.060 [2024-09-29 16:45:29.432834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.060 qpair failed and we were unable to recover it. 00:37:29.060 [2024-09-29 16:45:29.432964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.060 [2024-09-29 16:45:29.433030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.060 qpair failed and we were unable to recover it. 00:37:29.060 [2024-09-29 16:45:29.433270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.060 [2024-09-29 16:45:29.433308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.060 qpair failed and we were unable to recover it. 00:37:29.060 [2024-09-29 16:45:29.433440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.060 [2024-09-29 16:45:29.433478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.060 qpair failed and we were unable to recover it. 
00:37:29.060 [2024-09-29 16:45:29.433638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.060 [2024-09-29 16:45:29.433682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.060 qpair failed and we were unable to recover it. 00:37:29.060 [2024-09-29 16:45:29.433846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.060 [2024-09-29 16:45:29.433879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.060 qpair failed and we were unable to recover it. 00:37:29.060 [2024-09-29 16:45:29.433994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.060 [2024-09-29 16:45:29.434046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.060 qpair failed and we were unable to recover it. 00:37:29.060 [2024-09-29 16:45:29.434249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.060 [2024-09-29 16:45:29.434282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.060 qpair failed and we were unable to recover it. 00:37:29.060 [2024-09-29 16:45:29.434449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.060 [2024-09-29 16:45:29.434486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.060 qpair failed and we were unable to recover it. 
00:37:29.060 [2024-09-29 16:45:29.434669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.060 [2024-09-29 16:45:29.434732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.060 qpair failed and we were unable to recover it. 00:37:29.060 [2024-09-29 16:45:29.434844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.060 [2024-09-29 16:45:29.434878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.060 qpair failed and we were unable to recover it. 00:37:29.060 [2024-09-29 16:45:29.435027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.060 [2024-09-29 16:45:29.435061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.060 qpair failed and we were unable to recover it. 00:37:29.060 [2024-09-29 16:45:29.435266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.060 [2024-09-29 16:45:29.435322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.060 qpair failed and we were unable to recover it. 00:37:29.060 [2024-09-29 16:45:29.435509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.060 [2024-09-29 16:45:29.435546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.060 qpair failed and we were unable to recover it. 
00:37:29.060 [2024-09-29 16:45:29.435710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.060 [2024-09-29 16:45:29.435743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.060 qpair failed and we were unable to recover it. 00:37:29.060 [2024-09-29 16:45:29.435901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.060 [2024-09-29 16:45:29.435966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.060 qpair failed and we were unable to recover it. 00:37:29.060 [2024-09-29 16:45:29.436127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.060 [2024-09-29 16:45:29.436163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.060 qpair failed and we were unable to recover it. 00:37:29.060 [2024-09-29 16:45:29.436338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.060 [2024-09-29 16:45:29.436377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.060 qpair failed and we were unable to recover it. 00:37:29.060 [2024-09-29 16:45:29.436537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.061 [2024-09-29 16:45:29.436574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.061 qpair failed and we were unable to recover it. 
00:37:29.061 [2024-09-29 16:45:29.436715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.061 [2024-09-29 16:45:29.436749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.061 qpair failed and we were unable to recover it. 00:37:29.061 [2024-09-29 16:45:29.436890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.061 [2024-09-29 16:45:29.436923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.061 qpair failed and we were unable to recover it. 00:37:29.061 [2024-09-29 16:45:29.437079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.061 [2024-09-29 16:45:29.437117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.061 qpair failed and we were unable to recover it. 00:37:29.061 [2024-09-29 16:45:29.437326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.061 [2024-09-29 16:45:29.437376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.061 qpair failed and we were unable to recover it. 00:37:29.061 [2024-09-29 16:45:29.437597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.061 [2024-09-29 16:45:29.437633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.061 qpair failed and we were unable to recover it. 
00:37:29.061 [2024-09-29 16:45:29.437836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.061 [2024-09-29 16:45:29.437871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.061 qpair failed and we were unable to recover it. 00:37:29.061 [2024-09-29 16:45:29.438035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.061 [2024-09-29 16:45:29.438073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.061 qpair failed and we were unable to recover it. 00:37:29.061 [2024-09-29 16:45:29.438344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.061 [2024-09-29 16:45:29.438406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.061 qpair failed and we were unable to recover it. 00:37:29.061 [2024-09-29 16:45:29.438554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.061 [2024-09-29 16:45:29.438587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.061 qpair failed and we were unable to recover it. 00:37:29.061 [2024-09-29 16:45:29.438730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.061 [2024-09-29 16:45:29.438764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.061 qpair failed and we were unable to recover it. 
00:37:29.061 [2024-09-29 16:45:29.438910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.061 [2024-09-29 16:45:29.438945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.061 qpair failed and we were unable to recover it. 00:37:29.061 [2024-09-29 16:45:29.439129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.061 [2024-09-29 16:45:29.439165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.061 qpair failed and we were unable to recover it. 00:37:29.061 [2024-09-29 16:45:29.439304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.061 [2024-09-29 16:45:29.439380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.061 qpair failed and we were unable to recover it. 00:37:29.061 [2024-09-29 16:45:29.439589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.061 [2024-09-29 16:45:29.439626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.061 qpair failed and we were unable to recover it. 00:37:29.061 [2024-09-29 16:45:29.439780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.061 [2024-09-29 16:45:29.439814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.061 qpair failed and we were unable to recover it. 
00:37:29.061 [2024-09-29 16:45:29.439974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.061 [2024-09-29 16:45:29.440011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.061 qpair failed and we were unable to recover it. 00:37:29.061 [2024-09-29 16:45:29.440168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.061 [2024-09-29 16:45:29.440206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.061 qpair failed and we were unable to recover it. 00:37:29.061 [2024-09-29 16:45:29.440358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.061 [2024-09-29 16:45:29.440395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.061 qpair failed and we were unable to recover it. 00:37:29.061 [2024-09-29 16:45:29.440548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.061 [2024-09-29 16:45:29.440585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.061 qpair failed and we were unable to recover it. 00:37:29.061 [2024-09-29 16:45:29.440775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.061 [2024-09-29 16:45:29.440823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.061 qpair failed and we were unable to recover it. 
00:37:29.061 [2024-09-29 16:45:29.441013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.061 [2024-09-29 16:45:29.441061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.061 qpair failed and we were unable to recover it. 00:37:29.061 [2024-09-29 16:45:29.441220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.061 [2024-09-29 16:45:29.441259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.061 qpair failed and we were unable to recover it. 00:37:29.061 [2024-09-29 16:45:29.441401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.061 [2024-09-29 16:45:29.441439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.061 qpair failed and we were unable to recover it. 00:37:29.062 [2024-09-29 16:45:29.441653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.062 [2024-09-29 16:45:29.441703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.062 qpair failed and we were unable to recover it. 00:37:29.062 [2024-09-29 16:45:29.441864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.062 [2024-09-29 16:45:29.441897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.062 qpair failed and we were unable to recover it. 
00:37:29.062 [2024-09-29 16:45:29.442096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.062 [2024-09-29 16:45:29.442133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.062 qpair failed and we were unable to recover it. 00:37:29.062 [2024-09-29 16:45:29.442275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.062 [2024-09-29 16:45:29.442371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.062 qpair failed and we were unable to recover it. 00:37:29.062 [2024-09-29 16:45:29.442532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.062 [2024-09-29 16:45:29.442570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.062 qpair failed and we were unable to recover it. 00:37:29.062 [2024-09-29 16:45:29.442723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.062 [2024-09-29 16:45:29.442759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.062 qpair failed and we were unable to recover it. 00:37:29.062 [2024-09-29 16:45:29.442872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.062 [2024-09-29 16:45:29.442905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.062 qpair failed and we were unable to recover it. 
00:37:29.062 [2024-09-29 16:45:29.443048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.062 [2024-09-29 16:45:29.443101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.062 qpair failed and we were unable to recover it. 00:37:29.062 [2024-09-29 16:45:29.443221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.062 [2024-09-29 16:45:29.443258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.062 qpair failed and we were unable to recover it. 00:37:29.062 [2024-09-29 16:45:29.443389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.062 [2024-09-29 16:45:29.443427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.062 qpair failed and we were unable to recover it. 00:37:29.062 [2024-09-29 16:45:29.443569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.062 [2024-09-29 16:45:29.443602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.062 qpair failed and we were unable to recover it. 00:37:29.062 [2024-09-29 16:45:29.443715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.062 [2024-09-29 16:45:29.443749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.062 qpair failed and we were unable to recover it. 
00:37:29.062 [2024-09-29 16:45:29.443928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.062 [2024-09-29 16:45:29.443961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.062 qpair failed and we were unable to recover it. 00:37:29.062 [2024-09-29 16:45:29.444162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.062 [2024-09-29 16:45:29.444195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.062 qpair failed and we were unable to recover it. 00:37:29.062 [2024-09-29 16:45:29.444370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.062 [2024-09-29 16:45:29.444407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.062 qpair failed and we were unable to recover it. 00:37:29.062 [2024-09-29 16:45:29.444570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.062 [2024-09-29 16:45:29.444603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.062 qpair failed and we were unable to recover it. 00:37:29.062 [2024-09-29 16:45:29.444809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.062 [2024-09-29 16:45:29.444857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.062 qpair failed and we were unable to recover it. 
00:37:29.062 [2024-09-29 16:45:29.445048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.062 [2024-09-29 16:45:29.445101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.062 qpair failed and we were unable to recover it. 00:37:29.062 [2024-09-29 16:45:29.445417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.062 [2024-09-29 16:45:29.445457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.062 qpair failed and we were unable to recover it. 00:37:29.062 [2024-09-29 16:45:29.445619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.062 [2024-09-29 16:45:29.445658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.062 qpair failed and we were unable to recover it. 00:37:29.062 [2024-09-29 16:45:29.445839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.062 [2024-09-29 16:45:29.445873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.062 qpair failed and we were unable to recover it. 00:37:29.062 [2024-09-29 16:45:29.446038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.062 [2024-09-29 16:45:29.446086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.062 qpair failed and we were unable to recover it. 
00:37:29.062 [2024-09-29 16:45:29.446283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.062 [2024-09-29 16:45:29.446337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.062 qpair failed and we were unable to recover it. 00:37:29.062 [2024-09-29 16:45:29.446489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.062 [2024-09-29 16:45:29.446557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.062 qpair failed and we were unable to recover it. 00:37:29.062 [2024-09-29 16:45:29.446702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.062 [2024-09-29 16:45:29.446738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.062 qpair failed and we were unable to recover it. 00:37:29.062 [2024-09-29 16:45:29.446919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.062 [2024-09-29 16:45:29.446971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.062 qpair failed and we were unable to recover it. 00:37:29.062 [2024-09-29 16:45:29.447210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.062 [2024-09-29 16:45:29.447270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.062 qpair failed and we were unable to recover it. 
00:37:29.062 [2024-09-29 16:45:29.447493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.062 [2024-09-29 16:45:29.447532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.062 qpair failed and we were unable to recover it. 00:37:29.062 [2024-09-29 16:45:29.447681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.062 [2024-09-29 16:45:29.447717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.062 qpair failed and we were unable to recover it. 00:37:29.062 [2024-09-29 16:45:29.447863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.063 [2024-09-29 16:45:29.447896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.063 qpair failed and we were unable to recover it. 00:37:29.063 [2024-09-29 16:45:29.448059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.063 [2024-09-29 16:45:29.448096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.063 qpair failed and we were unable to recover it. 00:37:29.063 [2024-09-29 16:45:29.448318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.063 [2024-09-29 16:45:29.448354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.063 qpair failed and we were unable to recover it. 
00:37:29.063 [2024-09-29 16:45:29.448493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.063 [2024-09-29 16:45:29.448529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.063 qpair failed and we were unable to recover it. 00:37:29.063 [2024-09-29 16:45:29.448688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.063 [2024-09-29 16:45:29.448743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.063 qpair failed and we were unable to recover it. 00:37:29.063 [2024-09-29 16:45:29.448935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.063 [2024-09-29 16:45:29.448982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.063 qpair failed and we were unable to recover it. 00:37:29.063 [2024-09-29 16:45:29.449144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.063 [2024-09-29 16:45:29.449197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.063 qpair failed and we were unable to recover it. 00:37:29.063 [2024-09-29 16:45:29.449367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.063 [2024-09-29 16:45:29.449419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.063 qpair failed and we were unable to recover it. 
00:37:29.063 [2024-09-29 16:45:29.449558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.063 [2024-09-29 16:45:29.449592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.063 qpair failed and we were unable to recover it. 00:37:29.063 [2024-09-29 16:45:29.449736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.063 [2024-09-29 16:45:29.449771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.063 qpair failed and we were unable to recover it. 00:37:29.063 [2024-09-29 16:45:29.449929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.063 [2024-09-29 16:45:29.449982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.063 qpair failed and we were unable to recover it. 00:37:29.063 [2024-09-29 16:45:29.450141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.063 [2024-09-29 16:45:29.450194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.063 qpair failed and we were unable to recover it. 00:37:29.063 [2024-09-29 16:45:29.450387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.063 [2024-09-29 16:45:29.450439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.063 qpair failed and we were unable to recover it. 
00:37:29.063 [2024-09-29 16:45:29.450562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.063 [2024-09-29 16:45:29.450597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.063 qpair failed and we were unable to recover it. 00:37:29.063 [2024-09-29 16:45:29.450776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.063 [2024-09-29 16:45:29.450824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.063 qpair failed and we were unable to recover it. 00:37:29.063 [2024-09-29 16:45:29.451008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.063 [2024-09-29 16:45:29.451048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.063 qpair failed and we were unable to recover it. 00:37:29.063 [2024-09-29 16:45:29.451180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.063 [2024-09-29 16:45:29.451218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.063 qpair failed and we were unable to recover it. 00:37:29.063 [2024-09-29 16:45:29.451379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.063 [2024-09-29 16:45:29.451443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.063 qpair failed and we were unable to recover it. 
00:37:29.063 [2024-09-29 16:45:29.451595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.063 [2024-09-29 16:45:29.451632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.063 qpair failed and we were unable to recover it. 00:37:29.063 [2024-09-29 16:45:29.451803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.063 [2024-09-29 16:45:29.451837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.063 qpair failed and we were unable to recover it. 00:37:29.063 [2024-09-29 16:45:29.452003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.063 [2024-09-29 16:45:29.452052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.063 qpair failed and we were unable to recover it. 00:37:29.063 [2024-09-29 16:45:29.452235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.063 [2024-09-29 16:45:29.452287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.063 qpair failed and we were unable to recover it. 00:37:29.063 [2024-09-29 16:45:29.452420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.063 [2024-09-29 16:45:29.452454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.063 qpair failed and we were unable to recover it. 
00:37:29.063 [2024-09-29 16:45:29.452622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.063 [2024-09-29 16:45:29.452654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.063 qpair failed and we were unable to recover it. 00:37:29.063 [2024-09-29 16:45:29.452831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.063 [2024-09-29 16:45:29.452864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.063 qpair failed and we were unable to recover it. 00:37:29.063 [2024-09-29 16:45:29.452994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.063 [2024-09-29 16:45:29.453058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.063 qpair failed and we were unable to recover it. 00:37:29.063 [2024-09-29 16:45:29.453322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.063 [2024-09-29 16:45:29.453363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.063 qpair failed and we were unable to recover it. 00:37:29.063 [2024-09-29 16:45:29.453500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.063 [2024-09-29 16:45:29.453544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.063 qpair failed and we were unable to recover it. 
00:37:29.063 [2024-09-29 16:45:29.453687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.063 [2024-09-29 16:45:29.453721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.063 qpair failed and we were unable to recover it. 00:37:29.063 [2024-09-29 16:45:29.453828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.063 [2024-09-29 16:45:29.453862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.063 qpair failed and we were unable to recover it. 00:37:29.063 [2024-09-29 16:45:29.453990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.063 [2024-09-29 16:45:29.454041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.063 qpair failed and we were unable to recover it. 00:37:29.063 [2024-09-29 16:45:29.454203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.063 [2024-09-29 16:45:29.454242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.063 qpair failed and we were unable to recover it. 00:37:29.063 [2024-09-29 16:45:29.454381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.063 [2024-09-29 16:45:29.454434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.063 qpair failed and we were unable to recover it. 
00:37:29.063 [2024-09-29 16:45:29.454582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.063 [2024-09-29 16:45:29.454635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.063 qpair failed and we were unable to recover it. 00:37:29.063 [2024-09-29 16:45:29.454825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.063 [2024-09-29 16:45:29.454863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.063 qpair failed and we were unable to recover it. 00:37:29.063 [2024-09-29 16:45:29.455119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.063 [2024-09-29 16:45:29.455186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.063 qpair failed and we were unable to recover it. 00:37:29.063 [2024-09-29 16:45:29.455333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.064 [2024-09-29 16:45:29.455386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.064 qpair failed and we were unable to recover it. 00:37:29.064 [2024-09-29 16:45:29.455536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.064 [2024-09-29 16:45:29.455571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.064 qpair failed and we were unable to recover it. 
00:37:29.064 [2024-09-29 16:45:29.455692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.064 [2024-09-29 16:45:29.455727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.064 qpair failed and we were unable to recover it. 00:37:29.064 [2024-09-29 16:45:29.455916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.064 [2024-09-29 16:45:29.455969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.064 qpair failed and we were unable to recover it. 00:37:29.064 [2024-09-29 16:45:29.456119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.064 [2024-09-29 16:45:29.456170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.064 qpair failed and we were unable to recover it. 00:37:29.064 [2024-09-29 16:45:29.456415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.064 [2024-09-29 16:45:29.456471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.064 qpair failed and we were unable to recover it. 00:37:29.064 [2024-09-29 16:45:29.456606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.064 [2024-09-29 16:45:29.456639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.064 qpair failed and we were unable to recover it. 
00:37:29.064 [2024-09-29 16:45:29.456763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.064 [2024-09-29 16:45:29.456799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.064 qpair failed and we were unable to recover it. 00:37:29.064 [2024-09-29 16:45:29.456964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.064 [2024-09-29 16:45:29.457012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.064 qpair failed and we were unable to recover it. 00:37:29.064 [2024-09-29 16:45:29.457243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.064 [2024-09-29 16:45:29.457301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.064 qpair failed and we were unable to recover it. 00:37:29.064 [2024-09-29 16:45:29.457485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.064 [2024-09-29 16:45:29.457538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.064 qpair failed and we were unable to recover it. 00:37:29.064 [2024-09-29 16:45:29.457648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.064 [2024-09-29 16:45:29.457689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.064 qpair failed and we were unable to recover it. 
00:37:29.064 [2024-09-29 16:45:29.457851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.064 [2024-09-29 16:45:29.457886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.064 qpair failed and we were unable to recover it. 00:37:29.064 [2024-09-29 16:45:29.458081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.064 [2024-09-29 16:45:29.458134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.064 qpair failed and we were unable to recover it. 00:37:29.064 [2024-09-29 16:45:29.458366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.064 [2024-09-29 16:45:29.458406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.064 qpair failed and we were unable to recover it. 00:37:29.064 [2024-09-29 16:45:29.458549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.064 [2024-09-29 16:45:29.458585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.064 qpair failed and we were unable to recover it. 00:37:29.064 [2024-09-29 16:45:29.458762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.064 [2024-09-29 16:45:29.458797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.064 qpair failed and we were unable to recover it. 
00:37:29.064 [2024-09-29 16:45:29.458944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.064 [2024-09-29 16:45:29.459000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.064 qpair failed and we were unable to recover it. 00:37:29.064 [2024-09-29 16:45:29.459135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.064 [2024-09-29 16:45:29.459173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.064 qpair failed and we were unable to recover it. 00:37:29.064 [2024-09-29 16:45:29.459359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.064 [2024-09-29 16:45:29.459396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.064 qpair failed and we were unable to recover it. 00:37:29.064 [2024-09-29 16:45:29.459545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.064 [2024-09-29 16:45:29.459598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.064 qpair failed and we were unable to recover it. 00:37:29.064 [2024-09-29 16:45:29.459812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.064 [2024-09-29 16:45:29.459849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.064 qpair failed and we were unable to recover it. 
00:37:29.064 [2024-09-29 16:45:29.459959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.064 [2024-09-29 16:45:29.459994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.064 qpair failed and we were unable to recover it. 00:37:29.064 [2024-09-29 16:45:29.460133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.064 [2024-09-29 16:45:29.460184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.064 qpair failed and we were unable to recover it. 00:37:29.064 [2024-09-29 16:45:29.460439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.064 [2024-09-29 16:45:29.460490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.064 qpair failed and we were unable to recover it. 00:37:29.064 [2024-09-29 16:45:29.460656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.064 [2024-09-29 16:45:29.460696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.064 qpair failed and we were unable to recover it. 00:37:29.064 [2024-09-29 16:45:29.460848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.064 [2024-09-29 16:45:29.460883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.064 qpair failed and we were unable to recover it. 
00:37:29.064 [2024-09-29 16:45:29.461066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.064 [2024-09-29 16:45:29.461119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.064 qpair failed and we were unable to recover it. 00:37:29.064 [2024-09-29 16:45:29.461361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.064 [2024-09-29 16:45:29.461402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.064 qpair failed and we were unable to recover it. 00:37:29.064 [2024-09-29 16:45:29.461606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.064 [2024-09-29 16:45:29.461640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.064 qpair failed and we were unable to recover it. 00:37:29.064 [2024-09-29 16:45:29.461762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.064 [2024-09-29 16:45:29.461797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.064 qpair failed and we were unable to recover it. 00:37:29.064 [2024-09-29 16:45:29.461925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.064 [2024-09-29 16:45:29.461978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.064 qpair failed and we were unable to recover it. 
00:37:29.064 [2024-09-29 16:45:29.462150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.064 [2024-09-29 16:45:29.462206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.064 qpair failed and we were unable to recover it. 00:37:29.065 [2024-09-29 16:45:29.462343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.065 [2024-09-29 16:45:29.462419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.065 qpair failed and we were unable to recover it. 00:37:29.065 [2024-09-29 16:45:29.462589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.065 [2024-09-29 16:45:29.462624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.065 qpair failed and we were unable to recover it. 00:37:29.065 [2024-09-29 16:45:29.462769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.065 [2024-09-29 16:45:29.462816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.065 qpair failed and we were unable to recover it. 00:37:29.065 [2024-09-29 16:45:29.462991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.065 [2024-09-29 16:45:29.463030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.065 qpair failed and we were unable to recover it. 
00:37:29.065 [2024-09-29 16:45:29.463272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.065 [2024-09-29 16:45:29.463326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.065 qpair failed and we were unable to recover it. 00:37:29.065 [2024-09-29 16:45:29.463588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.065 [2024-09-29 16:45:29.463658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.065 qpair failed and we were unable to recover it. 00:37:29.065 [2024-09-29 16:45:29.463801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.065 [2024-09-29 16:45:29.463835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.065 qpair failed and we were unable to recover it. 00:37:29.065 [2024-09-29 16:45:29.463993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.065 [2024-09-29 16:45:29.464030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.065 qpair failed and we were unable to recover it. 00:37:29.065 [2024-09-29 16:45:29.464309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.065 [2024-09-29 16:45:29.464368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.065 qpair failed and we were unable to recover it. 
00:37:29.065 [2024-09-29 16:45:29.464500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.065 [2024-09-29 16:45:29.464537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.065 qpair failed and we were unable to recover it. 00:37:29.065 [2024-09-29 16:45:29.464697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.065 [2024-09-29 16:45:29.464748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.065 qpair failed and we were unable to recover it. 00:37:29.065 [2024-09-29 16:45:29.464862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.065 [2024-09-29 16:45:29.464894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.065 qpair failed and we were unable to recover it. 00:37:29.065 [2024-09-29 16:45:29.465017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.065 [2024-09-29 16:45:29.465067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.065 qpair failed and we were unable to recover it. 00:37:29.065 [2024-09-29 16:45:29.465234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.065 [2024-09-29 16:45:29.465267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.065 qpair failed and we were unable to recover it. 
00:37:29.065 [2024-09-29 16:45:29.465499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.065 [2024-09-29 16:45:29.465536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.065 qpair failed and we were unable to recover it. 00:37:29.065 [2024-09-29 16:45:29.465690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.065 [2024-09-29 16:45:29.465740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.065 qpair failed and we were unable to recover it. 00:37:29.065 [2024-09-29 16:45:29.465899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.065 [2024-09-29 16:45:29.465948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.065 qpair failed and we were unable to recover it. 00:37:29.065 [2024-09-29 16:45:29.466111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.065 [2024-09-29 16:45:29.466166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.065 qpair failed and we were unable to recover it. 00:37:29.065 [2024-09-29 16:45:29.466322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.065 [2024-09-29 16:45:29.466361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.065 qpair failed and we were unable to recover it. 
00:37:29.065 [2024-09-29 16:45:29.466547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.065 [2024-09-29 16:45:29.466584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.065 qpair failed and we were unable to recover it. 00:37:29.065 [2024-09-29 16:45:29.466744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.065 [2024-09-29 16:45:29.466779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.065 qpair failed and we were unable to recover it. 00:37:29.065 [2024-09-29 16:45:29.466939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.065 [2024-09-29 16:45:29.466977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.065 qpair failed and we were unable to recover it. 00:37:29.065 [2024-09-29 16:45:29.467185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.065 [2024-09-29 16:45:29.467243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.065 qpair failed and we were unable to recover it. 00:37:29.065 [2024-09-29 16:45:29.467390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.065 [2024-09-29 16:45:29.467428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.065 qpair failed and we were unable to recover it. 
00:37:29.065 [2024-09-29 16:45:29.467554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.065 [2024-09-29 16:45:29.467591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.065 qpair failed and we were unable to recover it.
00:37:29.065 [2024-09-29 16:45:29.467773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.065 [2024-09-29 16:45:29.467821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.065 qpair failed and we were unable to recover it.
00:37:29.065 [2024-09-29 16:45:29.467940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.065 [2024-09-29 16:45:29.467975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.065 qpair failed and we were unable to recover it.
00:37:29.065 [2024-09-29 16:45:29.468121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.065 [2024-09-29 16:45:29.468174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.065 qpair failed and we were unable to recover it.
00:37:29.065 [2024-09-29 16:45:29.468391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.065 [2024-09-29 16:45:29.468461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.065 qpair failed and we were unable to recover it.
00:37:29.065 [2024-09-29 16:45:29.468601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.065 [2024-09-29 16:45:29.468635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.065 qpair failed and we were unable to recover it.
00:37:29.065 [2024-09-29 16:45:29.468782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.065 [2024-09-29 16:45:29.468815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.065 qpair failed and we were unable to recover it.
00:37:29.066 [2024-09-29 16:45:29.468979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.066 [2024-09-29 16:45:29.469048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.066 qpair failed and we were unable to recover it.
00:37:29.066 [2024-09-29 16:45:29.469261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.066 [2024-09-29 16:45:29.469297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.066 qpair failed and we were unable to recover it.
00:37:29.066 [2024-09-29 16:45:29.469457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.066 [2024-09-29 16:45:29.469495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.066 qpair failed and we were unable to recover it.
00:37:29.066 [2024-09-29 16:45:29.469650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.066 [2024-09-29 16:45:29.469713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.066 qpair failed and we were unable to recover it.
00:37:29.066 [2024-09-29 16:45:29.469840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.066 [2024-09-29 16:45:29.469874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.066 qpair failed and we were unable to recover it.
00:37:29.066 [2024-09-29 16:45:29.470031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.066 [2024-09-29 16:45:29.470096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.066 qpair failed and we were unable to recover it.
00:37:29.066 [2024-09-29 16:45:29.470345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.066 [2024-09-29 16:45:29.470404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.066 qpair failed and we were unable to recover it.
00:37:29.066 [2024-09-29 16:45:29.470633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.066 [2024-09-29 16:45:29.470684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.066 qpair failed and we were unable to recover it.
00:37:29.066 [2024-09-29 16:45:29.470820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.066 [2024-09-29 16:45:29.470853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.066 qpair failed and we were unable to recover it.
00:37:29.066 [2024-09-29 16:45:29.471000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.066 [2024-09-29 16:45:29.471033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.066 qpair failed and we were unable to recover it.
00:37:29.066 [2024-09-29 16:45:29.471151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.066 [2024-09-29 16:45:29.471184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.066 qpair failed and we were unable to recover it.
00:37:29.066 [2024-09-29 16:45:29.471335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.066 [2024-09-29 16:45:29.471370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.066 qpair failed and we were unable to recover it.
00:37:29.066 [2024-09-29 16:45:29.471509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.066 [2024-09-29 16:45:29.471543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.066 qpair failed and we were unable to recover it.
00:37:29.066 [2024-09-29 16:45:29.471685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.066 [2024-09-29 16:45:29.471719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.066 qpair failed and we were unable to recover it.
00:37:29.066 [2024-09-29 16:45:29.471860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.066 [2024-09-29 16:45:29.471894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.066 qpair failed and we were unable to recover it.
00:37:29.066 [2024-09-29 16:45:29.472080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.066 [2024-09-29 16:45:29.472133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.066 qpair failed and we were unable to recover it.
00:37:29.066 [2024-09-29 16:45:29.472278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.066 [2024-09-29 16:45:29.472314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.066 qpair failed and we were unable to recover it.
00:37:29.066 [2024-09-29 16:45:29.472474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.066 [2024-09-29 16:45:29.472554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.066 qpair failed and we were unable to recover it.
00:37:29.066 [2024-09-29 16:45:29.472690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.066 [2024-09-29 16:45:29.472748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.066 qpair failed and we were unable to recover it.
00:37:29.066 [2024-09-29 16:45:29.472916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.066 [2024-09-29 16:45:29.472950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.066 qpair failed and we were unable to recover it.
00:37:29.066 [2024-09-29 16:45:29.473090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.066 [2024-09-29 16:45:29.473124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.066 qpair failed and we were unable to recover it.
00:37:29.066 [2024-09-29 16:45:29.473270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.066 [2024-09-29 16:45:29.473304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.066 qpair failed and we were unable to recover it.
00:37:29.066 [2024-09-29 16:45:29.473457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.066 [2024-09-29 16:45:29.473491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.066 qpair failed and we were unable to recover it.
00:37:29.066 [2024-09-29 16:45:29.473636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.066 [2024-09-29 16:45:29.473686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.066 qpair failed and we were unable to recover it.
00:37:29.066 [2024-09-29 16:45:29.473874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.066 [2024-09-29 16:45:29.473927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.066 qpair failed and we were unable to recover it.
00:37:29.066 [2024-09-29 16:45:29.474069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.066 [2024-09-29 16:45:29.474106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.066 qpair failed and we were unable to recover it.
00:37:29.066 [2024-09-29 16:45:29.474255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.066 [2024-09-29 16:45:29.474308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.066 qpair failed and we were unable to recover it.
00:37:29.066 [2024-09-29 16:45:29.474464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.066 [2024-09-29 16:45:29.474502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.066 qpair failed and we were unable to recover it.
00:37:29.066 [2024-09-29 16:45:29.474690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.066 [2024-09-29 16:45:29.474724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.066 qpair failed and we were unable to recover it.
00:37:29.066 [2024-09-29 16:45:29.474857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.066 [2024-09-29 16:45:29.474894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.066 qpair failed and we were unable to recover it.
00:37:29.066 [2024-09-29 16:45:29.475046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.066 [2024-09-29 16:45:29.475083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.066 qpair failed and we were unable to recover it.
00:37:29.066 [2024-09-29 16:45:29.475255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.066 [2024-09-29 16:45:29.475289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.066 qpair failed and we were unable to recover it.
00:37:29.066 [2024-09-29 16:45:29.475399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.066 [2024-09-29 16:45:29.475450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.066 qpair failed and we were unable to recover it.
00:37:29.066 [2024-09-29 16:45:29.475586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.066 [2024-09-29 16:45:29.475623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.066 qpair failed and we were unable to recover it.
00:37:29.066 [2024-09-29 16:45:29.475781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.067 [2024-09-29 16:45:29.475814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.067 qpair failed and we were unable to recover it.
00:37:29.067 [2024-09-29 16:45:29.475955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.067 [2024-09-29 16:45:29.475990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.067 qpair failed and we were unable to recover it.
00:37:29.067 [2024-09-29 16:45:29.476178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.067 [2024-09-29 16:45:29.476215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.067 qpair failed and we were unable to recover it.
00:37:29.067 [2024-09-29 16:45:29.476347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.067 [2024-09-29 16:45:29.476381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.067 qpair failed and we were unable to recover it.
00:37:29.067 [2024-09-29 16:45:29.476520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.067 [2024-09-29 16:45:29.476569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.067 qpair failed and we were unable to recover it.
00:37:29.067 [2024-09-29 16:45:29.476701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.067 [2024-09-29 16:45:29.476741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.067 qpair failed and we were unable to recover it.
00:37:29.067 [2024-09-29 16:45:29.476905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.067 [2024-09-29 16:45:29.476939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.067 qpair failed and we were unable to recover it.
00:37:29.067 [2024-09-29 16:45:29.477053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.067 [2024-09-29 16:45:29.477087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.067 qpair failed and we were unable to recover it.
00:37:29.067 [2024-09-29 16:45:29.477224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.067 [2024-09-29 16:45:29.477258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.067 qpair failed and we were unable to recover it.
00:37:29.067 [2024-09-29 16:45:29.477432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.067 [2024-09-29 16:45:29.477466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.067 qpair failed and we were unable to recover it.
00:37:29.067 [2024-09-29 16:45:29.477581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.067 [2024-09-29 16:45:29.477615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.067 qpair failed and we were unable to recover it.
00:37:29.067 [2024-09-29 16:45:29.477769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.067 [2024-09-29 16:45:29.477804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.067 qpair failed and we were unable to recover it.
00:37:29.067 [2024-09-29 16:45:29.477940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.067 [2024-09-29 16:45:29.477988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.067 qpair failed and we were unable to recover it.
00:37:29.067 [2024-09-29 16:45:29.478167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.067 [2024-09-29 16:45:29.478209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.067 qpair failed and we were unable to recover it.
00:37:29.067 [2024-09-29 16:45:29.478375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.067 [2024-09-29 16:45:29.478427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.067 qpair failed and we were unable to recover it.
00:37:29.067 [2024-09-29 16:45:29.478603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.067 [2024-09-29 16:45:29.478641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.067 qpair failed and we were unable to recover it.
00:37:29.067 [2024-09-29 16:45:29.478832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.067 [2024-09-29 16:45:29.478879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.067 qpair failed and we were unable to recover it.
00:37:29.067 [2024-09-29 16:45:29.479003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.067 [2024-09-29 16:45:29.479057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.067 qpair failed and we were unable to recover it.
00:37:29.067 [2024-09-29 16:45:29.479273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.067 [2024-09-29 16:45:29.479311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.067 qpair failed and we were unable to recover it.
00:37:29.067 [2024-09-29 16:45:29.479491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.067 [2024-09-29 16:45:29.479546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.067 qpair failed and we were unable to recover it.
00:37:29.067 [2024-09-29 16:45:29.479685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.067 [2024-09-29 16:45:29.479737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.067 qpair failed and we were unable to recover it.
00:37:29.067 [2024-09-29 16:45:29.479855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.067 [2024-09-29 16:45:29.479889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.067 qpair failed and we were unable to recover it.
00:37:29.067 [2024-09-29 16:45:29.480052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.067 [2024-09-29 16:45:29.480089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.067 qpair failed and we were unable to recover it.
00:37:29.067 [2024-09-29 16:45:29.480242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.067 [2024-09-29 16:45:29.480279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.067 qpair failed and we were unable to recover it.
00:37:29.067 [2024-09-29 16:45:29.480588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.067 [2024-09-29 16:45:29.480654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.067 qpair failed and we were unable to recover it.
00:37:29.067 [2024-09-29 16:45:29.480793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.068 [2024-09-29 16:45:29.480829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.068 qpair failed and we were unable to recover it.
00:37:29.068 [2024-09-29 16:45:29.480968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.068 [2024-09-29 16:45:29.481022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.068 qpair failed and we were unable to recover it.
00:37:29.068 [2024-09-29 16:45:29.481250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.068 [2024-09-29 16:45:29.481304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.068 qpair failed and we were unable to recover it.
00:37:29.068 [2024-09-29 16:45:29.481582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.068 [2024-09-29 16:45:29.481636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.068 qpair failed and we were unable to recover it.
00:37:29.068 [2024-09-29 16:45:29.481808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.068 [2024-09-29 16:45:29.481856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.068 qpair failed and we were unable to recover it.
00:37:29.068 [2024-09-29 16:45:29.481984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.068 [2024-09-29 16:45:29.482037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.068 qpair failed and we were unable to recover it.
00:37:29.068 [2024-09-29 16:45:29.482201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.068 [2024-09-29 16:45:29.482261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.068 qpair failed and we were unable to recover it.
00:37:29.068 [2024-09-29 16:45:29.482488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.068 [2024-09-29 16:45:29.482566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.068 qpair failed and we were unable to recover it.
00:37:29.068 [2024-09-29 16:45:29.482699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.068 [2024-09-29 16:45:29.482751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.068 qpair failed and we were unable to recover it.
00:37:29.068 [2024-09-29 16:45:29.482895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.068 [2024-09-29 16:45:29.482943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.068 qpair failed and we were unable to recover it.
00:37:29.068 [2024-09-29 16:45:29.483091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.068 [2024-09-29 16:45:29.483145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.068 qpair failed and we were unable to recover it.
00:37:29.068 [2024-09-29 16:45:29.483307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.068 [2024-09-29 16:45:29.483344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.068 qpair failed and we were unable to recover it.
00:37:29.068 [2024-09-29 16:45:29.483484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.068 [2024-09-29 16:45:29.483517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.068 qpair failed and we were unable to recover it.
00:37:29.068 [2024-09-29 16:45:29.483689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.068 [2024-09-29 16:45:29.483743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.068 qpair failed and we were unable to recover it.
00:37:29.068 [2024-09-29 16:45:29.483857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.068 [2024-09-29 16:45:29.483890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.068 qpair failed and we were unable to recover it.
00:37:29.068 [2024-09-29 16:45:29.484071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.068 [2024-09-29 16:45:29.484109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.068 qpair failed and we were unable to recover it.
00:37:29.068 [2024-09-29 16:45:29.484291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.068 [2024-09-29 16:45:29.484340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.068 qpair failed and we were unable to recover it.
00:37:29.068 [2024-09-29 16:45:29.484500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.068 [2024-09-29 16:45:29.484548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.068 qpair failed and we were unable to recover it.
00:37:29.068 [2024-09-29 16:45:29.484678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.068 [2024-09-29 16:45:29.484715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.068 qpair failed and we were unable to recover it.
00:37:29.068 [2024-09-29 16:45:29.484862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.068 [2024-09-29 16:45:29.484897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.068 qpair failed and we were unable to recover it.
00:37:29.068 [2024-09-29 16:45:29.485011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.068 [2024-09-29 16:45:29.485045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.068 qpair failed and we were unable to recover it.
00:37:29.068 [2024-09-29 16:45:29.485163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.068 [2024-09-29 16:45:29.485199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.068 qpair failed and we were unable to recover it.
00:37:29.068 [2024-09-29 16:45:29.485367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.068 [2024-09-29 16:45:29.485432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.068 qpair failed and we were unable to recover it.
00:37:29.068 [2024-09-29 16:45:29.485557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.068 [2024-09-29 16:45:29.485591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.068 qpair failed and we were unable to recover it.
00:37:29.068 [2024-09-29 16:45:29.485755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.068 [2024-09-29 16:45:29.485803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.068 qpair failed and we were unable to recover it.
00:37:29.068 [2024-09-29 16:45:29.485957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.068 [2024-09-29 16:45:29.485992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.068 qpair failed and we were unable to recover it.
00:37:29.068 [2024-09-29 16:45:29.486133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.068 [2024-09-29 16:45:29.486172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.068 qpair failed and we were unable to recover it.
00:37:29.068 [2024-09-29 16:45:29.486353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.068 [2024-09-29 16:45:29.486390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.068 qpair failed and we were unable to recover it.
00:37:29.068 [2024-09-29 16:45:29.486545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.068 [2024-09-29 16:45:29.486590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.068 qpair failed and we were unable to recover it.
00:37:29.068 [2024-09-29 16:45:29.486805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.068 [2024-09-29 16:45:29.486843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.068 qpair failed and we were unable to recover it.
00:37:29.068 [2024-09-29 16:45:29.487119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.068 [2024-09-29 16:45:29.487191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.068 qpair failed and we were unable to recover it.
00:37:29.068 [2024-09-29 16:45:29.487412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.068 [2024-09-29 16:45:29.487470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.068 qpair failed and we were unable to recover it.
00:37:29.068 [2024-09-29 16:45:29.487591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.068 [2024-09-29 16:45:29.487629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.068 qpair failed and we were unable to recover it.
00:37:29.068 [2024-09-29 16:45:29.487805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.068 [2024-09-29 16:45:29.487839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.068 qpair failed and we were unable to recover it.
00:37:29.068 [2024-09-29 16:45:29.488023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.068 [2024-09-29 16:45:29.488075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.068 qpair failed and we were unable to recover it.
00:37:29.068 [2024-09-29 16:45:29.488241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.068 [2024-09-29 16:45:29.488312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.068 qpair failed and we were unable to recover it.
00:37:29.068 [2024-09-29 16:45:29.488510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.068 [2024-09-29 16:45:29.488574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.068 qpair failed and we were unable to recover it.
00:37:29.069 [2024-09-29 16:45:29.488724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.069 [2024-09-29 16:45:29.488759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.069 qpair failed and we were unable to recover it.
00:37:29.069 [2024-09-29 16:45:29.488922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.069 [2024-09-29 16:45:29.488970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.069 qpair failed and we were unable to recover it.
00:37:29.069 [2024-09-29 16:45:29.489176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.069 [2024-09-29 16:45:29.489261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.069 qpair failed and we were unable to recover it.
00:37:29.069 [2024-09-29 16:45:29.489443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.069 [2024-09-29 16:45:29.489504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.069 qpair failed and we were unable to recover it.
00:37:29.069 [2024-09-29 16:45:29.489670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.069 [2024-09-29 16:45:29.489740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.069 qpair failed and we were unable to recover it.
00:37:29.069 [2024-09-29 16:45:29.489882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.069 [2024-09-29 16:45:29.489920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.069 qpair failed and we were unable to recover it.
00:37:29.069 [2024-09-29 16:45:29.490076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.069 [2024-09-29 16:45:29.490124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.069 qpair failed and we were unable to recover it.
00:37:29.069 [2024-09-29 16:45:29.490240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.069 [2024-09-29 16:45:29.490275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.069 qpair failed and we were unable to recover it.
00:37:29.069 [2024-09-29 16:45:29.490417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.069 [2024-09-29 16:45:29.490451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.069 qpair failed and we were unable to recover it.
00:37:29.069 [2024-09-29 16:45:29.490632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.069 [2024-09-29 16:45:29.490665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.069 qpair failed and we were unable to recover it.
00:37:29.069 [2024-09-29 16:45:29.490819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.069 [2024-09-29 16:45:29.490853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.069 qpair failed and we were unable to recover it.
00:37:29.069 [2024-09-29 16:45:29.491032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.069 [2024-09-29 16:45:29.491075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.069 qpair failed and we were unable to recover it.
00:37:29.069 [2024-09-29 16:45:29.491280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.069 [2024-09-29 16:45:29.491339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.069 qpair failed and we were unable to recover it. 00:37:29.069 [2024-09-29 16:45:29.491529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.069 [2024-09-29 16:45:29.491568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.069 qpair failed and we were unable to recover it. 00:37:29.069 [2024-09-29 16:45:29.491744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.069 [2024-09-29 16:45:29.491778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.069 qpair failed and we were unable to recover it. 00:37:29.069 [2024-09-29 16:45:29.491926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.069 [2024-09-29 16:45:29.491962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.069 qpair failed and we were unable to recover it. 00:37:29.069 [2024-09-29 16:45:29.492118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.069 [2024-09-29 16:45:29.492171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.069 qpair failed and we were unable to recover it. 
00:37:29.069 [2024-09-29 16:45:29.492374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.069 [2024-09-29 16:45:29.492437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.069 qpair failed and we were unable to recover it. 00:37:29.069 [2024-09-29 16:45:29.492585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.069 [2024-09-29 16:45:29.492619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.069 qpair failed and we were unable to recover it. 00:37:29.069 [2024-09-29 16:45:29.492779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.069 [2024-09-29 16:45:29.492813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.069 qpair failed and we were unable to recover it. 00:37:29.069 [2024-09-29 16:45:29.493000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.069 [2024-09-29 16:45:29.493035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.069 qpair failed and we were unable to recover it. 00:37:29.069 [2024-09-29 16:45:29.493175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.069 [2024-09-29 16:45:29.493209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.069 qpair failed and we were unable to recover it. 
00:37:29.069 [2024-09-29 16:45:29.493349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.069 [2024-09-29 16:45:29.493383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.069 qpair failed and we were unable to recover it. 00:37:29.069 [2024-09-29 16:45:29.493489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.069 [2024-09-29 16:45:29.493522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.069 qpair failed and we were unable to recover it. 00:37:29.069 [2024-09-29 16:45:29.493666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.069 [2024-09-29 16:45:29.493706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.069 qpair failed and we were unable to recover it. 00:37:29.069 [2024-09-29 16:45:29.493866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.069 [2024-09-29 16:45:29.493913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.069 qpair failed and we were unable to recover it. 00:37:29.069 [2024-09-29 16:45:29.494091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.069 [2024-09-29 16:45:29.494130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.069 qpair failed and we were unable to recover it. 
00:37:29.069 [2024-09-29 16:45:29.494262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.069 [2024-09-29 16:45:29.494299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.069 qpair failed and we were unable to recover it. 00:37:29.069 [2024-09-29 16:45:29.494458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.069 [2024-09-29 16:45:29.494496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.069 qpair failed and we were unable to recover it. 00:37:29.069 [2024-09-29 16:45:29.494626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.069 [2024-09-29 16:45:29.494659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.069 qpair failed and we were unable to recover it. 00:37:29.069 [2024-09-29 16:45:29.494778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.069 [2024-09-29 16:45:29.494811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.069 qpair failed and we were unable to recover it. 00:37:29.069 [2024-09-29 16:45:29.494932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.070 [2024-09-29 16:45:29.494990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.070 qpair failed and we were unable to recover it. 
00:37:29.070 [2024-09-29 16:45:29.495128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.070 [2024-09-29 16:45:29.495180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.070 qpair failed and we were unable to recover it. 00:37:29.070 [2024-09-29 16:45:29.495365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.070 [2024-09-29 16:45:29.495402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.070 qpair failed and we were unable to recover it. 00:37:29.070 [2024-09-29 16:45:29.495574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.070 [2024-09-29 16:45:29.495608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.070 qpair failed and we were unable to recover it. 00:37:29.070 [2024-09-29 16:45:29.495754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.070 [2024-09-29 16:45:29.495789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.070 qpair failed and we were unable to recover it. 00:37:29.070 [2024-09-29 16:45:29.495929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.070 [2024-09-29 16:45:29.495980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.070 qpair failed and we were unable to recover it. 
00:37:29.070 [2024-09-29 16:45:29.496102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.070 [2024-09-29 16:45:29.496141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.070 qpair failed and we were unable to recover it. 00:37:29.070 [2024-09-29 16:45:29.496357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.070 [2024-09-29 16:45:29.496395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.070 qpair failed and we were unable to recover it. 00:37:29.070 [2024-09-29 16:45:29.496518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.070 [2024-09-29 16:45:29.496554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.070 qpair failed and we were unable to recover it. 00:37:29.070 [2024-09-29 16:45:29.496711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.070 [2024-09-29 16:45:29.496746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.070 qpair failed and we were unable to recover it. 00:37:29.070 [2024-09-29 16:45:29.496929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.070 [2024-09-29 16:45:29.496964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.070 qpair failed and we were unable to recover it. 
00:37:29.070 [2024-09-29 16:45:29.497125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.070 [2024-09-29 16:45:29.497163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.070 qpair failed and we were unable to recover it. 00:37:29.070 [2024-09-29 16:45:29.497287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.070 [2024-09-29 16:45:29.497323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.070 qpair failed and we were unable to recover it. 00:37:29.070 [2024-09-29 16:45:29.497501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.070 [2024-09-29 16:45:29.497539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.070 qpair failed and we were unable to recover it. 00:37:29.070 [2024-09-29 16:45:29.497713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.070 [2024-09-29 16:45:29.497778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.070 qpair failed and we were unable to recover it. 00:37:29.070 [2024-09-29 16:45:29.497933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.070 [2024-09-29 16:45:29.497967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.070 qpair failed and we were unable to recover it. 
00:37:29.070 [2024-09-29 16:45:29.498166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.070 [2024-09-29 16:45:29.498203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.070 qpair failed and we were unable to recover it. 00:37:29.070 [2024-09-29 16:45:29.498446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.070 [2024-09-29 16:45:29.498502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.070 qpair failed and we were unable to recover it. 00:37:29.070 [2024-09-29 16:45:29.498656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.070 [2024-09-29 16:45:29.498700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.070 qpair failed and we were unable to recover it. 00:37:29.070 [2024-09-29 16:45:29.498884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.070 [2024-09-29 16:45:29.498931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.070 qpair failed and we were unable to recover it. 00:37:29.070 [2024-09-29 16:45:29.499143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.070 [2024-09-29 16:45:29.499197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.070 qpair failed and we were unable to recover it. 
00:37:29.070 [2024-09-29 16:45:29.499500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.070 [2024-09-29 16:45:29.499561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.070 qpair failed and we were unable to recover it. 00:37:29.070 [2024-09-29 16:45:29.499704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.070 [2024-09-29 16:45:29.499739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.070 qpair failed and we were unable to recover it. 00:37:29.070 [2024-09-29 16:45:29.499929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.070 [2024-09-29 16:45:29.499982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.070 qpair failed and we were unable to recover it. 00:37:29.070 [2024-09-29 16:45:29.500155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.070 [2024-09-29 16:45:29.500194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.070 qpair failed and we were unable to recover it. 00:37:29.070 [2024-09-29 16:45:29.500390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.070 [2024-09-29 16:45:29.500460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.070 qpair failed and we were unable to recover it. 
00:37:29.070 [2024-09-29 16:45:29.500626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.070 [2024-09-29 16:45:29.500660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.070 qpair failed and we were unable to recover it. 00:37:29.070 [2024-09-29 16:45:29.500791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.070 [2024-09-29 16:45:29.500825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.070 qpair failed and we were unable to recover it. 00:37:29.070 [2024-09-29 16:45:29.500980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.070 [2024-09-29 16:45:29.501017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.070 qpair failed and we were unable to recover it. 00:37:29.070 [2024-09-29 16:45:29.501225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.070 [2024-09-29 16:45:29.501262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.070 qpair failed and we were unable to recover it. 00:37:29.070 [2024-09-29 16:45:29.501391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.070 [2024-09-29 16:45:29.501427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.070 qpair failed and we were unable to recover it. 
00:37:29.070 [2024-09-29 16:45:29.501576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.071 [2024-09-29 16:45:29.501629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.071 qpair failed and we were unable to recover it. 00:37:29.071 [2024-09-29 16:45:29.501831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.071 [2024-09-29 16:45:29.501880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.071 qpair failed and we were unable to recover it. 00:37:29.071 [2024-09-29 16:45:29.502055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.071 [2024-09-29 16:45:29.502091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.071 qpair failed and we were unable to recover it. 00:37:29.071 [2024-09-29 16:45:29.502256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.071 [2024-09-29 16:45:29.502308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.071 qpair failed and we were unable to recover it. 00:37:29.071 [2024-09-29 16:45:29.502453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.071 [2024-09-29 16:45:29.502514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.071 qpair failed and we were unable to recover it. 
00:37:29.071 [2024-09-29 16:45:29.502658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.071 [2024-09-29 16:45:29.502700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.071 qpair failed and we were unable to recover it. 00:37:29.071 [2024-09-29 16:45:29.502871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.071 [2024-09-29 16:45:29.502918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.071 qpair failed and we were unable to recover it. 00:37:29.071 [2024-09-29 16:45:29.503166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.071 [2024-09-29 16:45:29.503222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.071 qpair failed and we were unable to recover it. 00:37:29.071 [2024-09-29 16:45:29.503357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.071 [2024-09-29 16:45:29.503396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.071 qpair failed and we were unable to recover it. 00:37:29.071 [2024-09-29 16:45:29.503560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.071 [2024-09-29 16:45:29.503599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.071 qpair failed and we were unable to recover it. 
00:37:29.071 [2024-09-29 16:45:29.503772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.071 [2024-09-29 16:45:29.503806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.071 qpair failed and we were unable to recover it. 00:37:29.071 [2024-09-29 16:45:29.503944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.071 [2024-09-29 16:45:29.503992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.071 qpair failed and we were unable to recover it. 00:37:29.071 [2024-09-29 16:45:29.504287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.071 [2024-09-29 16:45:29.504345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.071 qpair failed and we were unable to recover it. 00:37:29.071 [2024-09-29 16:45:29.504490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.071 [2024-09-29 16:45:29.504541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.071 qpair failed and we were unable to recover it. 00:37:29.071 [2024-09-29 16:45:29.504742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.071 [2024-09-29 16:45:29.504777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.071 qpair failed and we were unable to recover it. 
00:37:29.071 [2024-09-29 16:45:29.504915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.071 [2024-09-29 16:45:29.504948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.071 qpair failed and we were unable to recover it. 00:37:29.071 [2024-09-29 16:45:29.505074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.071 [2024-09-29 16:45:29.505112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.071 qpair failed and we were unable to recover it. 00:37:29.071 [2024-09-29 16:45:29.505275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.071 [2024-09-29 16:45:29.505312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.071 qpair failed and we were unable to recover it. 00:37:29.071 [2024-09-29 16:45:29.505475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.071 [2024-09-29 16:45:29.505511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.071 qpair failed and we were unable to recover it. 00:37:29.071 [2024-09-29 16:45:29.505698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.071 [2024-09-29 16:45:29.505749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.071 qpair failed and we were unable to recover it. 
00:37:29.071 [2024-09-29 16:45:29.505863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.071 [2024-09-29 16:45:29.505897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.071 qpair failed and we were unable to recover it. 00:37:29.071 [2024-09-29 16:45:29.506033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.071 [2024-09-29 16:45:29.506081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.071 qpair failed and we were unable to recover it. 00:37:29.071 [2024-09-29 16:45:29.506252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.071 [2024-09-29 16:45:29.506309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.071 qpair failed and we were unable to recover it. 00:37:29.071 [2024-09-29 16:45:29.506587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.071 [2024-09-29 16:45:29.506645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.071 qpair failed and we were unable to recover it. 00:37:29.071 [2024-09-29 16:45:29.506772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.071 [2024-09-29 16:45:29.506808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.071 qpair failed and we were unable to recover it. 
00:37:29.071 [2024-09-29 16:45:29.506948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.071 [2024-09-29 16:45:29.507000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.071 qpair failed and we were unable to recover it. 00:37:29.071 [2024-09-29 16:45:29.507165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.071 [2024-09-29 16:45:29.507216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.071 qpair failed and we were unable to recover it. 00:37:29.071 [2024-09-29 16:45:29.507352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.071 [2024-09-29 16:45:29.507391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.071 qpair failed and we were unable to recover it. 00:37:29.071 [2024-09-29 16:45:29.507562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.071 [2024-09-29 16:45:29.507596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.071 qpair failed and we were unable to recover it. 00:37:29.071 [2024-09-29 16:45:29.507759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.072 [2024-09-29 16:45:29.507808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.072 qpair failed and we were unable to recover it. 
00:37:29.072 [2024-09-29 16:45:29.507983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.072 [2024-09-29 16:45:29.508037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.072 qpair failed and we were unable to recover it. 00:37:29.072 [2024-09-29 16:45:29.508306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.072 [2024-09-29 16:45:29.508378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.072 qpair failed and we were unable to recover it. 00:37:29.072 [2024-09-29 16:45:29.508579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.072 [2024-09-29 16:45:29.508618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.072 qpair failed and we were unable to recover it. 00:37:29.072 [2024-09-29 16:45:29.508798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.072 [2024-09-29 16:45:29.508833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.072 qpair failed and we were unable to recover it. 00:37:29.072 [2024-09-29 16:45:29.508974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.072 [2024-09-29 16:45:29.509011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.072 qpair failed and we were unable to recover it. 
00:37:29.072 [2024-09-29 16:45:29.509168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.072 [2024-09-29 16:45:29.509205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.072 qpair failed and we were unable to recover it. 00:37:29.072 [2024-09-29 16:45:29.509448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.072 [2024-09-29 16:45:29.509517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.072 qpair failed and we were unable to recover it. 00:37:29.072 [2024-09-29 16:45:29.509694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.072 [2024-09-29 16:45:29.509730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.072 qpair failed and we were unable to recover it. 00:37:29.072 [2024-09-29 16:45:29.509889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.072 [2024-09-29 16:45:29.509944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.072 qpair failed and we were unable to recover it. 00:37:29.072 [2024-09-29 16:45:29.510154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.072 [2024-09-29 16:45:29.510207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.072 qpair failed and we were unable to recover it. 
00:37:29.072 [2024-09-29 16:45:29.510423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.072 [2024-09-29 16:45:29.510457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.072 qpair failed and we were unable to recover it. 00:37:29.072 [2024-09-29 16:45:29.510565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.072 [2024-09-29 16:45:29.510599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.072 qpair failed and we were unable to recover it. 00:37:29.072 [2024-09-29 16:45:29.510751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.072 [2024-09-29 16:45:29.510803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.072 qpair failed and we were unable to recover it. 00:37:29.072 [2024-09-29 16:45:29.510993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.072 [2024-09-29 16:45:29.511046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.072 qpair failed and we were unable to recover it. 00:37:29.072 [2024-09-29 16:45:29.511215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.072 [2024-09-29 16:45:29.511255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.072 qpair failed and we were unable to recover it. 
00:37:29.072 [2024-09-29 16:45:29.511490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.072 [2024-09-29 16:45:29.511546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.072 qpair failed and we were unable to recover it. 00:37:29.072 [2024-09-29 16:45:29.511731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.072 [2024-09-29 16:45:29.511769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.072 qpair failed and we were unable to recover it. 00:37:29.072 [2024-09-29 16:45:29.511922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.072 [2024-09-29 16:45:29.511960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.072 qpair failed and we were unable to recover it. 00:37:29.072 [2024-09-29 16:45:29.512189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.072 [2024-09-29 16:45:29.512226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.072 qpair failed and we were unable to recover it. 00:37:29.072 [2024-09-29 16:45:29.512510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.072 [2024-09-29 16:45:29.512584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.072 qpair failed and we were unable to recover it. 
00:37:29.072 [2024-09-29 16:45:29.512758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.072 [2024-09-29 16:45:29.512793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.072 qpair failed and we were unable to recover it. 00:37:29.072 [2024-09-29 16:45:29.512962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.072 [2024-09-29 16:45:29.513014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.072 qpair failed and we were unable to recover it. 00:37:29.072 [2024-09-29 16:45:29.513179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.072 [2024-09-29 16:45:29.513231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.072 qpair failed and we were unable to recover it. 00:37:29.072 [2024-09-29 16:45:29.513462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.072 [2024-09-29 16:45:29.513515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.072 qpair failed and we were unable to recover it. 00:37:29.072 [2024-09-29 16:45:29.513686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.072 [2024-09-29 16:45:29.513720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.072 qpair failed and we were unable to recover it. 
00:37:29.072 [2024-09-29 16:45:29.513858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.072 [2024-09-29 16:45:29.513911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.072 qpair failed and we were unable to recover it. 00:37:29.072 [2024-09-29 16:45:29.514104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.072 [2024-09-29 16:45:29.514156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.072 qpair failed and we were unable to recover it. 00:37:29.072 [2024-09-29 16:45:29.514344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.073 [2024-09-29 16:45:29.514399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.073 qpair failed and we were unable to recover it. 00:37:29.073 [2024-09-29 16:45:29.514571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.073 [2024-09-29 16:45:29.514605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.073 qpair failed and we were unable to recover it. 00:37:29.073 [2024-09-29 16:45:29.514774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.073 [2024-09-29 16:45:29.514825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.073 qpair failed and we were unable to recover it. 
00:37:29.073 [2024-09-29 16:45:29.515037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.073 [2024-09-29 16:45:29.515090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.073 qpair failed and we were unable to recover it. 00:37:29.073 [2024-09-29 16:45:29.515259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.073 [2024-09-29 16:45:29.515326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.073 qpair failed and we were unable to recover it. 00:37:29.073 [2024-09-29 16:45:29.515572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.073 [2024-09-29 16:45:29.515630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.073 qpair failed and we were unable to recover it. 00:37:29.073 [2024-09-29 16:45:29.515809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.073 [2024-09-29 16:45:29.515847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.073 qpair failed and we were unable to recover it. 00:37:29.073 [2024-09-29 16:45:29.515982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.073 [2024-09-29 16:45:29.516020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.073 qpair failed and we were unable to recover it. 
00:37:29.073 [2024-09-29 16:45:29.516145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.073 [2024-09-29 16:45:29.516182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.073 qpair failed and we were unable to recover it. 00:37:29.073 [2024-09-29 16:45:29.516346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.073 [2024-09-29 16:45:29.516384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.073 qpair failed and we were unable to recover it. 00:37:29.073 [2024-09-29 16:45:29.516583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.073 [2024-09-29 16:45:29.516617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.073 qpair failed and we were unable to recover it. 00:37:29.073 [2024-09-29 16:45:29.516750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.073 [2024-09-29 16:45:29.516785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.073 qpair failed and we were unable to recover it. 00:37:29.073 [2024-09-29 16:45:29.516955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.073 [2024-09-29 16:45:29.516989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.073 qpair failed and we were unable to recover it. 
00:37:29.073 [2024-09-29 16:45:29.517161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.073 [2024-09-29 16:45:29.517211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.073 qpair failed and we were unable to recover it. 00:37:29.073 [2024-09-29 16:45:29.517348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.073 [2024-09-29 16:45:29.517399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.073 qpair failed and we were unable to recover it. 00:37:29.073 [2024-09-29 16:45:29.517545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.073 [2024-09-29 16:45:29.517578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.073 qpair failed and we were unable to recover it. 00:37:29.073 [2024-09-29 16:45:29.517744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.073 [2024-09-29 16:45:29.517796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.073 qpair failed and we were unable to recover it. 00:37:29.073 [2024-09-29 16:45:29.517967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.073 [2024-09-29 16:45:29.518001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.073 qpair failed and we were unable to recover it. 
00:37:29.073 [2024-09-29 16:45:29.518116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.073 [2024-09-29 16:45:29.518149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.073 qpair failed and we were unable to recover it. 00:37:29.073 [2024-09-29 16:45:29.518300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.073 [2024-09-29 16:45:29.518334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.073 qpair failed and we were unable to recover it. 00:37:29.073 [2024-09-29 16:45:29.518508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.073 [2024-09-29 16:45:29.518542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.073 qpair failed and we were unable to recover it. 00:37:29.073 [2024-09-29 16:45:29.518691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.073 [2024-09-29 16:45:29.518726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.073 qpair failed and we were unable to recover it. 00:37:29.073 [2024-09-29 16:45:29.518884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.073 [2024-09-29 16:45:29.518936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.073 qpair failed and we were unable to recover it. 
00:37:29.073 [2024-09-29 16:45:29.519066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.073 [2024-09-29 16:45:29.519117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.073 qpair failed and we were unable to recover it. 00:37:29.073 [2024-09-29 16:45:29.519247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.073 [2024-09-29 16:45:29.519298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.073 qpair failed and we were unable to recover it. 00:37:29.073 [2024-09-29 16:45:29.519469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.073 [2024-09-29 16:45:29.519503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.073 qpair failed and we were unable to recover it. 00:37:29.073 [2024-09-29 16:45:29.519617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.073 [2024-09-29 16:45:29.519651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.073 qpair failed and we were unable to recover it. 00:37:29.073 [2024-09-29 16:45:29.519821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.073 [2024-09-29 16:45:29.519868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.073 qpair failed and we were unable to recover it. 
00:37:29.073 [2024-09-29 16:45:29.520042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.073 [2024-09-29 16:45:29.520089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.073 qpair failed and we were unable to recover it. 00:37:29.073 [2024-09-29 16:45:29.520212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.073 [2024-09-29 16:45:29.520248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.073 qpair failed and we were unable to recover it. 00:37:29.073 [2024-09-29 16:45:29.520392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.073 [2024-09-29 16:45:29.520427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.073 qpair failed and we were unable to recover it. 00:37:29.073 [2024-09-29 16:45:29.520576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.073 [2024-09-29 16:45:29.520610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.073 qpair failed and we were unable to recover it. 00:37:29.074 [2024-09-29 16:45:29.520734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.074 [2024-09-29 16:45:29.520779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.074 qpair failed and we were unable to recover it. 
00:37:29.074 [2024-09-29 16:45:29.520942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.074 [2024-09-29 16:45:29.521008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.074 qpair failed and we were unable to recover it. 00:37:29.074 [2024-09-29 16:45:29.521171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.074 [2024-09-29 16:45:29.521210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.074 qpair failed and we were unable to recover it. 00:37:29.074 [2024-09-29 16:45:29.521395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.074 [2024-09-29 16:45:29.521432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.074 qpair failed and we were unable to recover it. 00:37:29.074 [2024-09-29 16:45:29.521555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.074 [2024-09-29 16:45:29.521592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.074 qpair failed and we were unable to recover it. 00:37:29.074 [2024-09-29 16:45:29.521764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.074 [2024-09-29 16:45:29.521798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.074 qpair failed and we were unable to recover it. 
00:37:29.074 [2024-09-29 16:45:29.521913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.074 [2024-09-29 16:45:29.521947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.074 qpair failed and we were unable to recover it. 00:37:29.074 [2024-09-29 16:45:29.522086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.074 [2024-09-29 16:45:29.522124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.074 qpair failed and we were unable to recover it. 00:37:29.074 [2024-09-29 16:45:29.522286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.074 [2024-09-29 16:45:29.522325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.074 qpair failed and we were unable to recover it. 00:37:29.074 [2024-09-29 16:45:29.522492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.074 [2024-09-29 16:45:29.522530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.074 qpair failed and we were unable to recover it. 00:37:29.074 [2024-09-29 16:45:29.522663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.074 [2024-09-29 16:45:29.522727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.074 qpair failed and we were unable to recover it. 
00:37:29.074 [2024-09-29 16:45:29.522892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.074 [2024-09-29 16:45:29.522940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.074 qpair failed and we were unable to recover it. 00:37:29.074 [2024-09-29 16:45:29.523086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.074 [2024-09-29 16:45:29.523141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.074 qpair failed and we were unable to recover it. 00:37:29.074 [2024-09-29 16:45:29.523307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.074 [2024-09-29 16:45:29.523358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.074 qpair failed and we were unable to recover it. 00:37:29.074 [2024-09-29 16:45:29.523550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.074 [2024-09-29 16:45:29.523585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.074 qpair failed and we were unable to recover it. 00:37:29.074 [2024-09-29 16:45:29.523704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.074 [2024-09-29 16:45:29.523739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.074 qpair failed and we were unable to recover it. 
00:37:29.074 [2024-09-29 16:45:29.523946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.074 [2024-09-29 16:45:29.523981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.074 qpair failed and we were unable to recover it. 00:37:29.074 [2024-09-29 16:45:29.524122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.074 [2024-09-29 16:45:29.524156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.074 qpair failed and we were unable to recover it. 00:37:29.074 [2024-09-29 16:45:29.524423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.074 [2024-09-29 16:45:29.524482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.074 qpair failed and we were unable to recover it. 00:37:29.074 [2024-09-29 16:45:29.524620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.074 [2024-09-29 16:45:29.524654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.074 qpair failed and we were unable to recover it. 00:37:29.074 [2024-09-29 16:45:29.524828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.074 [2024-09-29 16:45:29.524891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.074 qpair failed and we were unable to recover it. 
00:37:29.074 [2024-09-29 16:45:29.525046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.074 [2024-09-29 16:45:29.525099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.074 qpair failed and we were unable to recover it. 00:37:29.074 [2024-09-29 16:45:29.525276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.074 [2024-09-29 16:45:29.525315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.074 qpair failed and we were unable to recover it. 00:37:29.074 [2024-09-29 16:45:29.525485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.074 [2024-09-29 16:45:29.525518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.074 qpair failed and we were unable to recover it. 00:37:29.074 [2024-09-29 16:45:29.525689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.074 [2024-09-29 16:45:29.525724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.074 qpair failed and we were unable to recover it. 00:37:29.074 [2024-09-29 16:45:29.525878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.074 [2024-09-29 16:45:29.525915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.074 qpair failed and we were unable to recover it. 
00:37:29.074 [2024-09-29 16:45:29.526066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.074 [2024-09-29 16:45:29.526135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.074 qpair failed and we were unable to recover it. 00:37:29.074 [2024-09-29 16:45:29.526384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.074 [2024-09-29 16:45:29.526442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.074 qpair failed and we were unable to recover it. 00:37:29.074 [2024-09-29 16:45:29.526589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.074 [2024-09-29 16:45:29.526624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.074 qpair failed and we were unable to recover it. 00:37:29.074 [2024-09-29 16:45:29.526800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.074 [2024-09-29 16:45:29.526834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.074 qpair failed and we were unable to recover it. 00:37:29.074 [2024-09-29 16:45:29.526943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.074 [2024-09-29 16:45:29.526977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.074 qpair failed and we were unable to recover it. 
00:37:29.074 [2024-09-29 16:45:29.527147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.074 [2024-09-29 16:45:29.527184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.074 qpair failed and we were unable to recover it. 00:37:29.074 [2024-09-29 16:45:29.527349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.074 [2024-09-29 16:45:29.527416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.074 qpair failed and we were unable to recover it. 00:37:29.074 [2024-09-29 16:45:29.527543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.074 [2024-09-29 16:45:29.527580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.074 qpair failed and we were unable to recover it. 00:37:29.075 [2024-09-29 16:45:29.527745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.075 [2024-09-29 16:45:29.527778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.075 qpair failed and we were unable to recover it. 00:37:29.075 [2024-09-29 16:45:29.527917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.075 [2024-09-29 16:45:29.527972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.075 qpair failed and we were unable to recover it. 
00:37:29.075 [2024-09-29 16:45:29.528170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.075 [2024-09-29 16:45:29.528220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.075 qpair failed and we were unable to recover it. 00:37:29.075 [2024-09-29 16:45:29.528413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.075 [2024-09-29 16:45:29.528449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.075 qpair failed and we were unable to recover it. 00:37:29.075 [2024-09-29 16:45:29.528608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.075 [2024-09-29 16:45:29.528646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.075 qpair failed and we were unable to recover it. 00:37:29.075 [2024-09-29 16:45:29.528790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.075 [2024-09-29 16:45:29.528822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.075 qpair failed and we were unable to recover it. 00:37:29.075 [2024-09-29 16:45:29.528965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.075 [2024-09-29 16:45:29.529024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.075 qpair failed and we were unable to recover it. 
00:37:29.075 [2024-09-29 16:45:29.529189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.075 [2024-09-29 16:45:29.529227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.075 qpair failed and we were unable to recover it.
00:37:29.075 [2024-09-29 16:45:29.529386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.075 [2024-09-29 16:45:29.529423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.075 qpair failed and we were unable to recover it.
00:37:29.075 [2024-09-29 16:45:29.529579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.075 [2024-09-29 16:45:29.529616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.075 qpair failed and we were unable to recover it.
00:37:29.075 [2024-09-29 16:45:29.529794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.075 [2024-09-29 16:45:29.529828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.075 qpair failed and we were unable to recover it.
00:37:29.075 [2024-09-29 16:45:29.529968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.075 [2024-09-29 16:45:29.530020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.075 qpair failed and we were unable to recover it.
00:37:29.075 [2024-09-29 16:45:29.530174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.075 [2024-09-29 16:45:29.530211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.075 qpair failed and we were unable to recover it.
00:37:29.075 [2024-09-29 16:45:29.530363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.075 [2024-09-29 16:45:29.530400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.075 qpair failed and we were unable to recover it.
00:37:29.075 [2024-09-29 16:45:29.530531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.075 [2024-09-29 16:45:29.530568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.075 qpair failed and we were unable to recover it.
00:37:29.075 [2024-09-29 16:45:29.530758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.075 [2024-09-29 16:45:29.530791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.075 qpair failed and we were unable to recover it.
00:37:29.075 [2024-09-29 16:45:29.530904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.075 [2024-09-29 16:45:29.530937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.075 qpair failed and we were unable to recover it.
00:37:29.075 [2024-09-29 16:45:29.531086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.075 [2024-09-29 16:45:29.531124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.075 qpair failed and we were unable to recover it.
00:37:29.075 [2024-09-29 16:45:29.531259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.075 [2024-09-29 16:45:29.531296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.075 qpair failed and we were unable to recover it.
00:37:29.075 [2024-09-29 16:45:29.531543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.075 [2024-09-29 16:45:29.531579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.075 qpair failed and we were unable to recover it.
00:37:29.075 [2024-09-29 16:45:29.531760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.075 [2024-09-29 16:45:29.531794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.075 qpair failed and we were unable to recover it.
00:37:29.075 [2024-09-29 16:45:29.531914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.075 [2024-09-29 16:45:29.531963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.075 qpair failed and we were unable to recover it.
00:37:29.075 [2024-09-29 16:45:29.532119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.075 [2024-09-29 16:45:29.532157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.075 qpair failed and we were unable to recover it.
00:37:29.075 [2024-09-29 16:45:29.532339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.075 [2024-09-29 16:45:29.532376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.075 qpair failed and we were unable to recover it.
00:37:29.075 [2024-09-29 16:45:29.532498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.075 [2024-09-29 16:45:29.532536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.075 qpair failed and we were unable to recover it.
00:37:29.075 [2024-09-29 16:45:29.532694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.075 [2024-09-29 16:45:29.532746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.075 qpair failed and we were unable to recover it.
00:37:29.075 [2024-09-29 16:45:29.532876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.075 [2024-09-29 16:45:29.532910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.075 qpair failed and we were unable to recover it.
00:37:29.075 [2024-09-29 16:45:29.533089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.075 [2024-09-29 16:45:29.533122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.075 qpair failed and we were unable to recover it.
00:37:29.075 [2024-09-29 16:45:29.533245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.075 [2024-09-29 16:45:29.533278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.075 qpair failed and we were unable to recover it.
00:37:29.075 [2024-09-29 16:45:29.533425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.075 [2024-09-29 16:45:29.533457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.075 qpair failed and we were unable to recover it.
00:37:29.075 [2024-09-29 16:45:29.533596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.075 [2024-09-29 16:45:29.533633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.075 qpair failed and we were unable to recover it.
00:37:29.075 [2024-09-29 16:45:29.533780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.075 [2024-09-29 16:45:29.533814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.075 qpair failed and we were unable to recover it.
00:37:29.075 [2024-09-29 16:45:29.533960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.075 [2024-09-29 16:45:29.534012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.075 qpair failed and we were unable to recover it.
00:37:29.075 [2024-09-29 16:45:29.534167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.075 [2024-09-29 16:45:29.534203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.076 qpair failed and we were unable to recover it.
00:37:29.076 [2024-09-29 16:45:29.534337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.076 [2024-09-29 16:45:29.534374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.076 qpair failed and we were unable to recover it.
00:37:29.076 [2024-09-29 16:45:29.534562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.076 [2024-09-29 16:45:29.534595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.076 qpair failed and we were unable to recover it.
00:37:29.076 [2024-09-29 16:45:29.534829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.076 [2024-09-29 16:45:29.534867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.076 qpair failed and we were unable to recover it.
00:37:29.076 [2024-09-29 16:45:29.535019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.076 [2024-09-29 16:45:29.535055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.076 qpair failed and we were unable to recover it.
00:37:29.076 [2024-09-29 16:45:29.535225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.076 [2024-09-29 16:45:29.535258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.076 qpair failed and we were unable to recover it.
00:37:29.076 [2024-09-29 16:45:29.535382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.076 [2024-09-29 16:45:29.535415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.076 qpair failed and we were unable to recover it.
00:37:29.076 [2024-09-29 16:45:29.535545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.076 [2024-09-29 16:45:29.535583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.076 qpair failed and we were unable to recover it.
00:37:29.076 [2024-09-29 16:45:29.535757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.076 [2024-09-29 16:45:29.535790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.076 qpair failed and we were unable to recover it.
00:37:29.076 [2024-09-29 16:45:29.535929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.076 [2024-09-29 16:45:29.535980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.076 qpair failed and we were unable to recover it.
00:37:29.076 [2024-09-29 16:45:29.536116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.076 [2024-09-29 16:45:29.536150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.076 qpair failed and we were unable to recover it.
00:37:29.076 [2024-09-29 16:45:29.536295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.076 [2024-09-29 16:45:29.536347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.076 qpair failed and we were unable to recover it.
00:37:29.076 [2024-09-29 16:45:29.536513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.076 [2024-09-29 16:45:29.536546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.076 qpair failed and we were unable to recover it.
00:37:29.076 [2024-09-29 16:45:29.536694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.076 [2024-09-29 16:45:29.536732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.076 qpair failed and we were unable to recover it.
00:37:29.076 [2024-09-29 16:45:29.536894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.076 [2024-09-29 16:45:29.536926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.076 qpair failed and we were unable to recover it.
00:37:29.076 [2024-09-29 16:45:29.537101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.076 [2024-09-29 16:45:29.537134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.076 qpair failed and we were unable to recover it.
00:37:29.076 [2024-09-29 16:45:29.537273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.076 [2024-09-29 16:45:29.537309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.076 qpair failed and we were unable to recover it.
00:37:29.076 [2024-09-29 16:45:29.537463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.076 [2024-09-29 16:45:29.537500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.076 qpair failed and we were unable to recover it.
00:37:29.076 [2024-09-29 16:45:29.537633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.076 [2024-09-29 16:45:29.537665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.076 qpair failed and we were unable to recover it.
00:37:29.076 [2024-09-29 16:45:29.537820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.076 [2024-09-29 16:45:29.537852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.076 qpair failed and we were unable to recover it.
00:37:29.076 [2024-09-29 16:45:29.538004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.076 [2024-09-29 16:45:29.538036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.076 qpair failed and we were unable to recover it.
00:37:29.076 [2024-09-29 16:45:29.538185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.076 [2024-09-29 16:45:29.538236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.076 qpair failed and we were unable to recover it.
00:37:29.076 [2024-09-29 16:45:29.538393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.076 [2024-09-29 16:45:29.538425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.076 qpair failed and we were unable to recover it.
00:37:29.076 [2024-09-29 16:45:29.538570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.076 [2024-09-29 16:45:29.538602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.076 qpair failed and we were unable to recover it.
00:37:29.076 [2024-09-29 16:45:29.538744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.076 [2024-09-29 16:45:29.538778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.076 qpair failed and we were unable to recover it.
00:37:29.076 [2024-09-29 16:45:29.538946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.076 [2024-09-29 16:45:29.538982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.076 qpair failed and we were unable to recover it.
00:37:29.076 [2024-09-29 16:45:29.539174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.076 [2024-09-29 16:45:29.539206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.076 qpair failed and we were unable to recover it.
00:37:29.076 [2024-09-29 16:45:29.539416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.076 [2024-09-29 16:45:29.539449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.076 qpair failed and we were unable to recover it.
00:37:29.076 [2024-09-29 16:45:29.539552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.076 [2024-09-29 16:45:29.539583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.076 qpair failed and we were unable to recover it.
00:37:29.076 [2024-09-29 16:45:29.539730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.076 [2024-09-29 16:45:29.539763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.076 qpair failed and we were unable to recover it.
00:37:29.076 [2024-09-29 16:45:29.539967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.076 [2024-09-29 16:45:29.539999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.076 qpair failed and we were unable to recover it.
00:37:29.076 [2024-09-29 16:45:29.540117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.076 [2024-09-29 16:45:29.540150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.076 qpair failed and we were unable to recover it.
00:37:29.076 [2024-09-29 16:45:29.540295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.076 [2024-09-29 16:45:29.540329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.076 qpair failed and we were unable to recover it.
00:37:29.076 [2024-09-29 16:45:29.540537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.076 [2024-09-29 16:45:29.540570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.076 qpair failed and we were unable to recover it.
00:37:29.076 [2024-09-29 16:45:29.540708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.077 [2024-09-29 16:45:29.540740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.077 qpair failed and we were unable to recover it.
00:37:29.077 [2024-09-29 16:45:29.540856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.077 [2024-09-29 16:45:29.540906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.077 qpair failed and we were unable to recover it.
00:37:29.077 [2024-09-29 16:45:29.541032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.077 [2024-09-29 16:45:29.541069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.077 qpair failed and we were unable to recover it.
00:37:29.077 [2024-09-29 16:45:29.541252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.077 [2024-09-29 16:45:29.541289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.077 qpair failed and we were unable to recover it.
00:37:29.077 [2024-09-29 16:45:29.541480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.077 [2024-09-29 16:45:29.541512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.077 qpair failed and we were unable to recover it.
00:37:29.077 [2024-09-29 16:45:29.541741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.077 [2024-09-29 16:45:29.541775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.077 qpair failed and we were unable to recover it.
00:37:29.077 [2024-09-29 16:45:29.541884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.077 [2024-09-29 16:45:29.541916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.077 qpair failed and we were unable to recover it.
00:37:29.077 [2024-09-29 16:45:29.542081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.077 [2024-09-29 16:45:29.542114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.077 qpair failed and we were unable to recover it.
00:37:29.077 [2024-09-29 16:45:29.542293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.077 [2024-09-29 16:45:29.542326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.077 qpair failed and we were unable to recover it.
00:37:29.077 [2024-09-29 16:45:29.542487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.077 [2024-09-29 16:45:29.542523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.077 qpair failed and we were unable to recover it.
00:37:29.077 [2024-09-29 16:45:29.542689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.077 [2024-09-29 16:45:29.542740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.077 qpair failed and we were unable to recover it.
00:37:29.077 [2024-09-29 16:45:29.542850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.077 [2024-09-29 16:45:29.542884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.077 qpair failed and we were unable to recover it.
00:37:29.077 [2024-09-29 16:45:29.543029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.077 [2024-09-29 16:45:29.543061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.077 qpair failed and we were unable to recover it.
00:37:29.077 [2024-09-29 16:45:29.543251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.077 [2024-09-29 16:45:29.543288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.077 qpair failed and we were unable to recover it.
00:37:29.077 [2024-09-29 16:45:29.543467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.077 [2024-09-29 16:45:29.543499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.077 qpair failed and we were unable to recover it.
00:37:29.077 [2024-09-29 16:45:29.543608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.077 [2024-09-29 16:45:29.543641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.077 qpair failed and we were unable to recover it.
00:37:29.077 [2024-09-29 16:45:29.543826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.077 [2024-09-29 16:45:29.543858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.077 qpair failed and we were unable to recover it.
00:37:29.077 [2024-09-29 16:45:29.543968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.077 [2024-09-29 16:45:29.544018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.077 qpair failed and we were unable to recover it.
00:37:29.077 [2024-09-29 16:45:29.544170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.077 [2024-09-29 16:45:29.544206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.077 qpair failed and we were unable to recover it.
00:37:29.077 [2024-09-29 16:45:29.544337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.077 [2024-09-29 16:45:29.544391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.077 qpair failed and we were unable to recover it.
00:37:29.077 [2024-09-29 16:45:29.544558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.077 [2024-09-29 16:45:29.544590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.077 qpair failed and we were unable to recover it.
00:37:29.077 [2024-09-29 16:45:29.544733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.077 [2024-09-29 16:45:29.544770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.077 qpair failed and we were unable to recover it.
00:37:29.077 [2024-09-29 16:45:29.544942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.077 [2024-09-29 16:45:29.544976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.077 qpair failed and we were unable to recover it.
00:37:29.077 [2024-09-29 16:45:29.545148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.077 [2024-09-29 16:45:29.545180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.077 qpair failed and we were unable to recover it.
00:37:29.077 [2024-09-29 16:45:29.545319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.077 [2024-09-29 16:45:29.545350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.077 qpair failed and we were unable to recover it.
00:37:29.077 [2024-09-29 16:45:29.545468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.077 [2024-09-29 16:45:29.545506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.077 qpair failed and we were unable to recover it.
00:37:29.077 [2024-09-29 16:45:29.545696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.077 [2024-09-29 16:45:29.545747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.077 qpair failed and we were unable to recover it.
00:37:29.077 [2024-09-29 16:45:29.545905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.078 [2024-09-29 16:45:29.545941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.078 qpair failed and we were unable to recover it.
00:37:29.078 [2024-09-29 16:45:29.546098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.078 [2024-09-29 16:45:29.546130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.078 qpair failed and we were unable to recover it.
00:37:29.078 [2024-09-29 16:45:29.546250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.078 [2024-09-29 16:45:29.546301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.078 qpair failed and we were unable to recover it.
00:37:29.078 [2024-09-29 16:45:29.546455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.078 [2024-09-29 16:45:29.546491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.078 qpair failed and we were unable to recover it.
00:37:29.078 [2024-09-29 16:45:29.546660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.078 [2024-09-29 16:45:29.546699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.078 qpair failed and we were unable to recover it.
00:37:29.078 [2024-09-29 16:45:29.546841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.078 [2024-09-29 16:45:29.546873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.078 qpair failed and we were unable to recover it.
00:37:29.078 [2024-09-29 16:45:29.546998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.078 [2024-09-29 16:45:29.547049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.078 qpair failed and we were unable to recover it.
00:37:29.078 [2024-09-29 16:45:29.547198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.078 [2024-09-29 16:45:29.547234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.078 qpair failed and we were unable to recover it.
00:37:29.078 [2024-09-29 16:45:29.547393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.078 [2024-09-29 16:45:29.547430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.078 qpair failed and we were unable to recover it. 00:37:29.078 [2024-09-29 16:45:29.547623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.078 [2024-09-29 16:45:29.547654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.078 qpair failed and we were unable to recover it. 00:37:29.078 [2024-09-29 16:45:29.547829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.078 [2024-09-29 16:45:29.547866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.078 qpair failed and we were unable to recover it. 00:37:29.078 [2024-09-29 16:45:29.548011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.078 [2024-09-29 16:45:29.548048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.078 qpair failed and we were unable to recover it. 00:37:29.078 [2024-09-29 16:45:29.548203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.078 [2024-09-29 16:45:29.548239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.078 qpair failed and we were unable to recover it. 
00:37:29.078 [2024-09-29 16:45:29.548373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.078 [2024-09-29 16:45:29.548404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.078 qpair failed and we were unable to recover it. 00:37:29.078 [2024-09-29 16:45:29.548543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.078 [2024-09-29 16:45:29.548576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.078 qpair failed and we were unable to recover it. 00:37:29.078 [2024-09-29 16:45:29.548772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.078 [2024-09-29 16:45:29.548809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.078 qpair failed and we were unable to recover it. 00:37:29.078 [2024-09-29 16:45:29.548967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.078 [2024-09-29 16:45:29.549003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.078 qpair failed and we were unable to recover it. 00:37:29.078 [2024-09-29 16:45:29.549145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.078 [2024-09-29 16:45:29.549179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.078 qpair failed and we were unable to recover it. 
00:37:29.078 [2024-09-29 16:45:29.549319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.078 [2024-09-29 16:45:29.549352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.078 qpair failed and we were unable to recover it. 00:37:29.078 [2024-09-29 16:45:29.549556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.078 [2024-09-29 16:45:29.549593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.078 qpair failed and we were unable to recover it. 00:37:29.078 [2024-09-29 16:45:29.549793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.078 [2024-09-29 16:45:29.549827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.078 qpair failed and we were unable to recover it. 00:37:29.078 [2024-09-29 16:45:29.549935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.078 [2024-09-29 16:45:29.549967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.078 qpair failed and we were unable to recover it. 00:37:29.078 [2024-09-29 16:45:29.550108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.078 [2024-09-29 16:45:29.550159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.078 qpair failed and we were unable to recover it. 
00:37:29.078 [2024-09-29 16:45:29.550363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.078 [2024-09-29 16:45:29.550397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.078 qpair failed and we were unable to recover it. 00:37:29.078 [2024-09-29 16:45:29.550567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.078 [2024-09-29 16:45:29.550600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.078 qpair failed and we were unable to recover it. 00:37:29.078 [2024-09-29 16:45:29.550802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.078 [2024-09-29 16:45:29.550835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.078 qpair failed and we were unable to recover it. 00:37:29.078 [2024-09-29 16:45:29.550977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.078 [2024-09-29 16:45:29.551011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.078 qpair failed and we were unable to recover it. 00:37:29.078 [2024-09-29 16:45:29.551152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.078 [2024-09-29 16:45:29.551183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.078 qpair failed and we were unable to recover it. 
00:37:29.078 [2024-09-29 16:45:29.551332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.078 [2024-09-29 16:45:29.551381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.078 qpair failed and we were unable to recover it. 00:37:29.078 [2024-09-29 16:45:29.551535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.078 [2024-09-29 16:45:29.551571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.078 qpair failed and we were unable to recover it. 00:37:29.078 [2024-09-29 16:45:29.551755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.078 [2024-09-29 16:45:29.551790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.078 qpair failed and we were unable to recover it. 00:37:29.078 [2024-09-29 16:45:29.551979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.078 [2024-09-29 16:45:29.552016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.078 qpair failed and we were unable to recover it. 00:37:29.078 [2024-09-29 16:45:29.552152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.079 [2024-09-29 16:45:29.552188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.079 qpair failed and we were unable to recover it. 
00:37:29.079 [2024-09-29 16:45:29.552353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.079 [2024-09-29 16:45:29.552387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.079 qpair failed and we were unable to recover it. 00:37:29.079 [2024-09-29 16:45:29.552515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.079 [2024-09-29 16:45:29.552549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.079 qpair failed and we were unable to recover it. 00:37:29.079 [2024-09-29 16:45:29.552688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.079 [2024-09-29 16:45:29.552720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.079 qpair failed and we were unable to recover it. 00:37:29.079 [2024-09-29 16:45:29.552867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.079 [2024-09-29 16:45:29.552899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.079 qpair failed and we were unable to recover it. 00:37:29.079 [2024-09-29 16:45:29.553044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.079 [2024-09-29 16:45:29.553076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.079 qpair failed and we were unable to recover it. 
00:37:29.079 [2024-09-29 16:45:29.553188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.079 [2024-09-29 16:45:29.553239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.079 qpair failed and we were unable to recover it. 00:37:29.079 [2024-09-29 16:45:29.553397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.079 [2024-09-29 16:45:29.553447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.079 qpair failed and we were unable to recover it. 00:37:29.079 [2024-09-29 16:45:29.553588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.079 [2024-09-29 16:45:29.553622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.079 qpair failed and we were unable to recover it. 00:37:29.079 [2024-09-29 16:45:29.553808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.079 [2024-09-29 16:45:29.553841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.079 qpair failed and we were unable to recover it. 00:37:29.079 [2024-09-29 16:45:29.554014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.079 [2024-09-29 16:45:29.554051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.079 qpair failed and we were unable to recover it. 
00:37:29.079 [2024-09-29 16:45:29.554211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.079 [2024-09-29 16:45:29.554247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.079 qpair failed and we were unable to recover it. 00:37:29.079 [2024-09-29 16:45:29.554404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.079 [2024-09-29 16:45:29.554441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.079 qpair failed and we were unable to recover it. 00:37:29.079 [2024-09-29 16:45:29.554576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.079 [2024-09-29 16:45:29.554609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.079 qpair failed and we were unable to recover it. 00:37:29.079 [2024-09-29 16:45:29.554738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.079 [2024-09-29 16:45:29.554772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.079 qpair failed and we were unable to recover it. 00:37:29.079 [2024-09-29 16:45:29.554945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.079 [2024-09-29 16:45:29.554982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.079 qpair failed and we were unable to recover it. 
00:37:29.079 [2024-09-29 16:45:29.555146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.079 [2024-09-29 16:45:29.555179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.079 qpair failed and we were unable to recover it. 00:37:29.079 [2024-09-29 16:45:29.555324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.079 [2024-09-29 16:45:29.555357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.079 qpair failed and we were unable to recover it. 00:37:29.079 [2024-09-29 16:45:29.555471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.079 [2024-09-29 16:45:29.555505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.079 qpair failed and we were unable to recover it. 00:37:29.079 [2024-09-29 16:45:29.555647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.079 [2024-09-29 16:45:29.555690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.079 qpair failed and we were unable to recover it. 00:37:29.079 [2024-09-29 16:45:29.555817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.079 [2024-09-29 16:45:29.555854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.079 qpair failed and we were unable to recover it. 
00:37:29.079 [2024-09-29 16:45:29.556026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.079 [2024-09-29 16:45:29.556058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.079 qpair failed and we were unable to recover it. 00:37:29.079 [2024-09-29 16:45:29.556181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.079 [2024-09-29 16:45:29.556232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.079 qpair failed and we were unable to recover it. 00:37:29.079 [2024-09-29 16:45:29.556359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.079 [2024-09-29 16:45:29.556395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.079 qpair failed and we were unable to recover it. 00:37:29.079 [2024-09-29 16:45:29.556593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.079 [2024-09-29 16:45:29.556627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.079 qpair failed and we were unable to recover it. 00:37:29.079 [2024-09-29 16:45:29.556749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.079 [2024-09-29 16:45:29.556782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.079 qpair failed and we were unable to recover it. 
00:37:29.079 [2024-09-29 16:45:29.556969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.079 [2024-09-29 16:45:29.557005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.079 qpair failed and we were unable to recover it. 00:37:29.079 [2024-09-29 16:45:29.557185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.079 [2024-09-29 16:45:29.557226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.079 qpair failed and we were unable to recover it. 00:37:29.079 [2024-09-29 16:45:29.557383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.079 [2024-09-29 16:45:29.557420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.079 qpair failed and we were unable to recover it. 00:37:29.079 [2024-09-29 16:45:29.557579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.079 [2024-09-29 16:45:29.557612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.079 qpair failed and we were unable to recover it. 00:37:29.079 [2024-09-29 16:45:29.557807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.079 [2024-09-29 16:45:29.557844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.079 qpair failed and we were unable to recover it. 
00:37:29.079 [2024-09-29 16:45:29.557958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.079 [2024-09-29 16:45:29.557993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.079 qpair failed and we were unable to recover it. 00:37:29.079 [2024-09-29 16:45:29.558126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.079 [2024-09-29 16:45:29.558162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.079 qpair failed and we were unable to recover it. 00:37:29.079 [2024-09-29 16:45:29.558356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.079 [2024-09-29 16:45:29.558389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.079 qpair failed and we were unable to recover it. 00:37:29.079 [2024-09-29 16:45:29.558539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.079 [2024-09-29 16:45:29.558575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.079 qpair failed and we were unable to recover it. 00:37:29.079 [2024-09-29 16:45:29.558712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.079 [2024-09-29 16:45:29.558762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.079 qpair failed and we were unable to recover it. 
00:37:29.079 [2024-09-29 16:45:29.558930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.079 [2024-09-29 16:45:29.558963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.079 qpair failed and we were unable to recover it. 00:37:29.079 [2024-09-29 16:45:29.559150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.079 [2024-09-29 16:45:29.559182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.079 qpair failed and we were unable to recover it. 00:37:29.079 [2024-09-29 16:45:29.559301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.079 [2024-09-29 16:45:29.559335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.079 qpair failed and we were unable to recover it. 00:37:29.080 [2024-09-29 16:45:29.559464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.080 [2024-09-29 16:45:29.559500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.080 qpair failed and we were unable to recover it. 00:37:29.080 [2024-09-29 16:45:29.559690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.080 [2024-09-29 16:45:29.559742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.080 qpair failed and we were unable to recover it. 
00:37:29.080 [2024-09-29 16:45:29.559862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.080 [2024-09-29 16:45:29.559895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.080 qpair failed and we were unable to recover it. 00:37:29.080 [2024-09-29 16:45:29.560015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.080 [2024-09-29 16:45:29.560048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.080 qpair failed and we were unable to recover it. 00:37:29.080 [2024-09-29 16:45:29.560218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.080 [2024-09-29 16:45:29.560255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.080 qpair failed and we were unable to recover it. 00:37:29.080 [2024-09-29 16:45:29.560411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.080 [2024-09-29 16:45:29.560449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.080 qpair failed and we were unable to recover it. 00:37:29.080 [2024-09-29 16:45:29.560584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.080 [2024-09-29 16:45:29.560616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.080 qpair failed and we were unable to recover it. 
00:37:29.080 [2024-09-29 16:45:29.560749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.080 [2024-09-29 16:45:29.560782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.080 qpair failed and we were unable to recover it. 00:37:29.080 [2024-09-29 16:45:29.560947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.080 [2024-09-29 16:45:29.560995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.080 qpair failed and we were unable to recover it. 00:37:29.080 [2024-09-29 16:45:29.561196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.080 [2024-09-29 16:45:29.561230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.080 qpair failed and we were unable to recover it. 00:37:29.080 [2024-09-29 16:45:29.561400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.080 [2024-09-29 16:45:29.561433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.080 qpair failed and we were unable to recover it. 00:37:29.080 [2024-09-29 16:45:29.561590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.080 [2024-09-29 16:45:29.561625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.080 qpair failed and we were unable to recover it. 
00:37:29.080 [2024-09-29 16:45:29.561813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.080 [2024-09-29 16:45:29.561846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.080 qpair failed and we were unable to recover it. 00:37:29.080 [2024-09-29 16:45:29.561962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.080 [2024-09-29 16:45:29.562011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.080 qpair failed and we were unable to recover it. 00:37:29.080 [2024-09-29 16:45:29.562194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.080 [2024-09-29 16:45:29.562227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.080 qpair failed and we were unable to recover it. 00:37:29.080 [2024-09-29 16:45:29.562418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.080 [2024-09-29 16:45:29.562456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.080 qpair failed and we were unable to recover it. 00:37:29.080 [2024-09-29 16:45:29.562611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.080 [2024-09-29 16:45:29.562647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.080 qpair failed and we were unable to recover it. 
00:37:29.080 [2024-09-29 16:45:29.562814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.080 [2024-09-29 16:45:29.562867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.080 qpair failed and we were unable to recover it. 00:37:29.080 [2024-09-29 16:45:29.563048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.080 [2024-09-29 16:45:29.563081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.080 qpair failed and we were unable to recover it. 00:37:29.080 [2024-09-29 16:45:29.563203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.080 [2024-09-29 16:45:29.563255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.080 qpair failed and we were unable to recover it. 00:37:29.080 [2024-09-29 16:45:29.563410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.080 [2024-09-29 16:45:29.563446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.080 qpair failed and we were unable to recover it. 00:37:29.080 [2024-09-29 16:45:29.563614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.080 [2024-09-29 16:45:29.563647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.080 qpair failed and we were unable to recover it. 
00:37:29.080 [2024-09-29 16:45:29.563774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.080 [2024-09-29 16:45:29.563807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.080 qpair failed and we were unable to recover it. 00:37:29.080 [2024-09-29 16:45:29.563941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.080 [2024-09-29 16:45:29.563992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.080 qpair failed and we were unable to recover it. 00:37:29.080 [2024-09-29 16:45:29.564147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.080 [2024-09-29 16:45:29.564182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.080 qpair failed and we were unable to recover it. 00:37:29.080 [2024-09-29 16:45:29.564308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.080 [2024-09-29 16:45:29.564344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.080 qpair failed and we were unable to recover it. 00:37:29.080 [2024-09-29 16:45:29.564528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.080 [2024-09-29 16:45:29.564561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.080 qpair failed and we were unable to recover it. 
00:37:29.080 [2024-09-29 16:45:29.564732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.080 [2024-09-29 16:45:29.564770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.080 qpair failed and we were unable to recover it. 00:37:29.080 [2024-09-29 16:45:29.564929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.080 [2024-09-29 16:45:29.564970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.080 qpair failed and we were unable to recover it. 00:37:29.080 [2024-09-29 16:45:29.565104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.080 [2024-09-29 16:45:29.565142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.080 qpair failed and we were unable to recover it. 00:37:29.080 [2024-09-29 16:45:29.565308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.080 [2024-09-29 16:45:29.565341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.080 qpair failed and we were unable to recover it. 00:37:29.080 [2024-09-29 16:45:29.565452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.080 [2024-09-29 16:45:29.565500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.080 qpair failed and we were unable to recover it. 
00:37:29.080 [2024-09-29 16:45:29.565650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.080 [2024-09-29 16:45:29.565694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.080 qpair failed and we were unable to recover it. 00:37:29.081 [2024-09-29 16:45:29.565879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.081 [2024-09-29 16:45:29.565913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.081 qpair failed and we were unable to recover it. 00:37:29.081 [2024-09-29 16:45:29.566025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.081 [2024-09-29 16:45:29.566057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.081 qpair failed and we were unable to recover it. 00:37:29.081 [2024-09-29 16:45:29.566193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.081 [2024-09-29 16:45:29.566226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.081 qpair failed and we were unable to recover it. 00:37:29.081 [2024-09-29 16:45:29.566366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.081 [2024-09-29 16:45:29.566399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.081 qpair failed and we were unable to recover it. 
00:37:29.081 [2024-09-29 16:45:29.566582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.081 [2024-09-29 16:45:29.566615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.081 qpair failed and we were unable to recover it. 00:37:29.081 [2024-09-29 16:45:29.566767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.081 [2024-09-29 16:45:29.566799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.081 qpair failed and we were unable to recover it. 00:37:29.081 [2024-09-29 16:45:29.566953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.081 [2024-09-29 16:45:29.566991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.081 qpair failed and we were unable to recover it. 00:37:29.081 [2024-09-29 16:45:29.567151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.081 [2024-09-29 16:45:29.567187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.081 qpair failed and we were unable to recover it. 00:37:29.081 [2024-09-29 16:45:29.567343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.081 [2024-09-29 16:45:29.567382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.081 qpair failed and we were unable to recover it. 
00:37:29.081 [2024-09-29 16:45:29.567547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.081 [2024-09-29 16:45:29.567580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.081 qpair failed and we were unable to recover it. 00:37:29.081 [2024-09-29 16:45:29.567724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.081 [2024-09-29 16:45:29.567778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.081 qpair failed and we were unable to recover it. 00:37:29.081 [2024-09-29 16:45:29.568008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.081 [2024-09-29 16:45:29.568047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.081 qpair failed and we were unable to recover it. 00:37:29.081 [2024-09-29 16:45:29.568200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.081 [2024-09-29 16:45:29.568251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.081 qpair failed and we were unable to recover it. 00:37:29.081 [2024-09-29 16:45:29.568366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.081 [2024-09-29 16:45:29.568399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.081 qpair failed and we were unable to recover it. 
00:37:29.081 [2024-09-29 16:45:29.568544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.081 [2024-09-29 16:45:29.568578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.081 qpair failed and we were unable to recover it. 00:37:29.081 [2024-09-29 16:45:29.568750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.081 [2024-09-29 16:45:29.568784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.081 qpair failed and we were unable to recover it. 00:37:29.081 [2024-09-29 16:45:29.568895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.081 [2024-09-29 16:45:29.568928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.081 qpair failed and we were unable to recover it. 00:37:29.081 [2024-09-29 16:45:29.569069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.081 [2024-09-29 16:45:29.569102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.081 qpair failed and we were unable to recover it. 00:37:29.081 [2024-09-29 16:45:29.569266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.081 [2024-09-29 16:45:29.569303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.081 qpair failed and we were unable to recover it. 
00:37:29.081 [2024-09-29 16:45:29.569454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.081 [2024-09-29 16:45:29.569491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.081 qpair failed and we were unable to recover it. 00:37:29.081 [2024-09-29 16:45:29.569646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.081 [2024-09-29 16:45:29.569689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.081 qpair failed and we were unable to recover it. 00:37:29.081 [2024-09-29 16:45:29.569830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.081 [2024-09-29 16:45:29.569863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.081 qpair failed and we were unable to recover it. 00:37:29.081 [2024-09-29 16:45:29.569977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.081 [2024-09-29 16:45:29.570016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.081 qpair failed and we were unable to recover it. 00:37:29.081 [2024-09-29 16:45:29.570214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.081 [2024-09-29 16:45:29.570250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.081 qpair failed and we were unable to recover it. 
00:37:29.081 [2024-09-29 16:45:29.570410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.081 [2024-09-29 16:45:29.570446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.081 qpair failed and we were unable to recover it. 00:37:29.081 [2024-09-29 16:45:29.570617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.081 [2024-09-29 16:45:29.570650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.081 qpair failed and we were unable to recover it. 00:37:29.081 [2024-09-29 16:45:29.570783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.081 [2024-09-29 16:45:29.570837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.081 qpair failed and we were unable to recover it. 00:37:29.081 [2024-09-29 16:45:29.570967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.081 [2024-09-29 16:45:29.571004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.081 qpair failed and we were unable to recover it. 00:37:29.081 [2024-09-29 16:45:29.571161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.081 [2024-09-29 16:45:29.571200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.081 qpair failed and we were unable to recover it. 
00:37:29.081 [2024-09-29 16:45:29.571360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.081 [2024-09-29 16:45:29.571404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.081 qpair failed and we were unable to recover it. 00:37:29.081 [2024-09-29 16:45:29.571529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.081 [2024-09-29 16:45:29.571563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.081 qpair failed and we were unable to recover it. 00:37:29.081 [2024-09-29 16:45:29.571700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.081 [2024-09-29 16:45:29.571744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.081 qpair failed and we were unable to recover it. 00:37:29.081 [2024-09-29 16:45:29.571910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.081 [2024-09-29 16:45:29.571948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.081 qpair failed and we were unable to recover it. 00:37:29.081 [2024-09-29 16:45:29.572135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.081 [2024-09-29 16:45:29.572170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.081 qpair failed and we were unable to recover it. 
00:37:29.081 [2024-09-29 16:45:29.572318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.081 [2024-09-29 16:45:29.572368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.081 qpair failed and we were unable to recover it. 00:37:29.081 [2024-09-29 16:45:29.572531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.082 [2024-09-29 16:45:29.572568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.082 qpair failed and we were unable to recover it. 00:37:29.082 [2024-09-29 16:45:29.572687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.082 [2024-09-29 16:45:29.572721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.082 qpair failed and we were unable to recover it. 00:37:29.082 [2024-09-29 16:45:29.572892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.082 [2024-09-29 16:45:29.572925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.082 qpair failed and we were unable to recover it. 00:37:29.082 [2024-09-29 16:45:29.573095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.082 [2024-09-29 16:45:29.573128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.082 qpair failed and we were unable to recover it. 
00:37:29.082 [2024-09-29 16:45:29.573301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.082 [2024-09-29 16:45:29.573355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.082 qpair failed and we were unable to recover it. 00:37:29.082 [2024-09-29 16:45:29.573519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.082 [2024-09-29 16:45:29.573570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.082 qpair failed and we were unable to recover it. 00:37:29.082 [2024-09-29 16:45:29.573749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.082 [2024-09-29 16:45:29.573782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.082 qpair failed and we were unable to recover it. 00:37:29.082 [2024-09-29 16:45:29.573929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.082 [2024-09-29 16:45:29.573995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.082 qpair failed and we were unable to recover it. 00:37:29.082 [2024-09-29 16:45:29.574201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.082 [2024-09-29 16:45:29.574234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.082 qpair failed and we were unable to recover it. 
00:37:29.082 [2024-09-29 16:45:29.574387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.082 [2024-09-29 16:45:29.574420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.082 qpair failed and we were unable to recover it. 00:37:29.082 [2024-09-29 16:45:29.574570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.082 [2024-09-29 16:45:29.574603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.082 qpair failed and we were unable to recover it. 00:37:29.082 [2024-09-29 16:45:29.574773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.082 [2024-09-29 16:45:29.574806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.082 qpair failed and we were unable to recover it. 00:37:29.082 [2024-09-29 16:45:29.574912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.082 [2024-09-29 16:45:29.574946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.082 qpair failed and we were unable to recover it. 00:37:29.082 [2024-09-29 16:45:29.575115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.082 [2024-09-29 16:45:29.575155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.082 qpair failed and we were unable to recover it. 
00:37:29.082 [2024-09-29 16:45:29.575344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.082 [2024-09-29 16:45:29.575387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.082 qpair failed and we were unable to recover it. 00:37:29.082 [2024-09-29 16:45:29.575554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.082 [2024-09-29 16:45:29.575592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.082 qpair failed and we were unable to recover it. 00:37:29.371 [2024-09-29 16:45:29.575752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.371 [2024-09-29 16:45:29.575789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.371 qpair failed and we were unable to recover it. 00:37:29.371 [2024-09-29 16:45:29.575940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.371 [2024-09-29 16:45:29.575976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.371 qpair failed and we were unable to recover it. 00:37:29.371 [2024-09-29 16:45:29.576165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.371 [2024-09-29 16:45:29.576199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.371 qpair failed and we were unable to recover it. 
00:37:29.371 [2024-09-29 16:45:29.576353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.371 [2024-09-29 16:45:29.576389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.371 qpair failed and we were unable to recover it. 00:37:29.371 [2024-09-29 16:45:29.576522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.371 [2024-09-29 16:45:29.576558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.371 qpair failed and we were unable to recover it. 00:37:29.371 [2024-09-29 16:45:29.576735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.371 [2024-09-29 16:45:29.576787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.371 qpair failed and we were unable to recover it. 00:37:29.371 [2024-09-29 16:45:29.576951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.371 [2024-09-29 16:45:29.576998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.371 qpair failed and we were unable to recover it. 00:37:29.371 [2024-09-29 16:45:29.577160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.371 [2024-09-29 16:45:29.577224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.371 qpair failed and we were unable to recover it. 
00:37:29.371 [2024-09-29 16:45:29.577406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.371 [2024-09-29 16:45:29.577458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.371 qpair failed and we were unable to recover it. 00:37:29.371 [2024-09-29 16:45:29.577608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.371 [2024-09-29 16:45:29.577657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.371 qpair failed and we were unable to recover it. 00:37:29.371 [2024-09-29 16:45:29.577859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.371 [2024-09-29 16:45:29.577902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.371 qpair failed and we were unable to recover it. 00:37:29.371 [2024-09-29 16:45:29.578044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.371 [2024-09-29 16:45:29.578088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.371 qpair failed and we were unable to recover it. 00:37:29.371 [2024-09-29 16:45:29.578254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.371 [2024-09-29 16:45:29.578316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.371 qpair failed and we were unable to recover it. 
00:37:29.371 [2024-09-29 16:45:29.578497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.371 [2024-09-29 16:45:29.578550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.371 qpair failed and we were unable to recover it. 00:37:29.371 [2024-09-29 16:45:29.578737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.371 [2024-09-29 16:45:29.578780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.371 qpair failed and we were unable to recover it. 00:37:29.371 [2024-09-29 16:45:29.578939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.371 [2024-09-29 16:45:29.578987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.371 qpair failed and we were unable to recover it. 00:37:29.371 [2024-09-29 16:45:29.579182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.371 [2024-09-29 16:45:29.579226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.371 qpair failed and we were unable to recover it. 00:37:29.371 [2024-09-29 16:45:29.579389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.371 [2024-09-29 16:45:29.579451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.371 qpair failed and we were unable to recover it. 
00:37:29.371 [2024-09-29 16:45:29.579610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.371 [2024-09-29 16:45:29.579653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.371 qpair failed and we were unable to recover it. 00:37:29.371 [2024-09-29 16:45:29.579748] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor 00:37:29.371 [2024-09-29 16:45:29.579952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.371 [2024-09-29 16:45:29.580006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.371 qpair failed and we were unable to recover it. 00:37:29.371 [2024-09-29 16:45:29.580177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.371 [2024-09-29 16:45:29.580220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.371 qpair failed and we were unable to recover it. 00:37:29.371 [2024-09-29 16:45:29.580386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.371 [2024-09-29 16:45:29.580421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.371 qpair failed and we were unable to recover it. 00:37:29.371 [2024-09-29 16:45:29.580559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.371 [2024-09-29 16:45:29.580593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.371 qpair failed and we were unable to recover it. 
00:37:29.371 [2024-09-29 16:45:29.580732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.371 [2024-09-29 16:45:29.580767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.371 qpair failed and we were unable to recover it. 00:37:29.371 [2024-09-29 16:45:29.580932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.371 [2024-09-29 16:45:29.580967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.371 qpair failed and we were unable to recover it. 00:37:29.371 [2024-09-29 16:45:29.581108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.371 [2024-09-29 16:45:29.581160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.371 qpair failed and we were unable to recover it. 00:37:29.371 [2024-09-29 16:45:29.581315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.371 [2024-09-29 16:45:29.581352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.371 qpair failed and we were unable to recover it. 00:37:29.371 [2024-09-29 16:45:29.581508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.372 [2024-09-29 16:45:29.581542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.372 qpair failed and we were unable to recover it. 
00:37:29.372 [2024-09-29 16:45:29.581648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.372 [2024-09-29 16:45:29.581689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.372 qpair failed and we were unable to recover it. 00:37:29.372 [2024-09-29 16:45:29.581804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.372 [2024-09-29 16:45:29.581837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.372 qpair failed and we were unable to recover it. 00:37:29.372 [2024-09-29 16:45:29.581972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.372 [2024-09-29 16:45:29.582006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.372 qpair failed and we were unable to recover it. 00:37:29.372 [2024-09-29 16:45:29.582147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.372 [2024-09-29 16:45:29.582198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.372 qpair failed and we were unable to recover it. 00:37:29.372 [2024-09-29 16:45:29.582364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.372 [2024-09-29 16:45:29.582402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.372 qpair failed and we were unable to recover it. 
00:37:29.372 [2024-09-29 16:45:29.582580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.372 [2024-09-29 16:45:29.582617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.372 qpair failed and we were unable to recover it. 00:37:29.372 [2024-09-29 16:45:29.582787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.372 [2024-09-29 16:45:29.582821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.372 qpair failed and we were unable to recover it. 00:37:29.372 [2024-09-29 16:45:29.582933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.372 [2024-09-29 16:45:29.582986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.372 qpair failed and we were unable to recover it. 00:37:29.372 [2024-09-29 16:45:29.583146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.372 [2024-09-29 16:45:29.583180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.372 qpair failed and we were unable to recover it. 00:37:29.372 [2024-09-29 16:45:29.583321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.372 [2024-09-29 16:45:29.583358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.372 qpair failed and we were unable to recover it. 
00:37:29.372 [2024-09-29 16:45:29.583485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.372 [2024-09-29 16:45:29.583518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.372 qpair failed and we were unable to recover it. 00:37:29.372 [2024-09-29 16:45:29.583700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.372 [2024-09-29 16:45:29.583733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.372 qpair failed and we were unable to recover it. 00:37:29.372 [2024-09-29 16:45:29.583889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.372 [2024-09-29 16:45:29.583925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.372 qpair failed and we were unable to recover it. 00:37:29.372 [2024-09-29 16:45:29.584075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.372 [2024-09-29 16:45:29.584126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.372 qpair failed and we were unable to recover it. 00:37:29.372 [2024-09-29 16:45:29.584247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.372 [2024-09-29 16:45:29.584280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.372 qpair failed and we were unable to recover it. 
00:37:29.372 [2024-09-29 16:45:29.584391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.372 [2024-09-29 16:45:29.584423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.372 qpair failed and we were unable to recover it. 00:37:29.372 [2024-09-29 16:45:29.584590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.372 [2024-09-29 16:45:29.584627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.372 qpair failed and we were unable to recover it. 00:37:29.372 [2024-09-29 16:45:29.584801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.372 [2024-09-29 16:45:29.584835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.372 qpair failed and we were unable to recover it. 00:37:29.372 [2024-09-29 16:45:29.584988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.372 [2024-09-29 16:45:29.585024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.372 qpair failed and we were unable to recover it. 00:37:29.372 [2024-09-29 16:45:29.585151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.372 [2024-09-29 16:45:29.585187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.372 qpair failed and we were unable to recover it. 
00:37:29.372 [2024-09-29 16:45:29.585321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.372 [2024-09-29 16:45:29.585354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.372 qpair failed and we were unable to recover it. 00:37:29.372 [2024-09-29 16:45:29.585499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.372 [2024-09-29 16:45:29.585531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.372 qpair failed and we were unable to recover it. 00:37:29.372 [2024-09-29 16:45:29.585651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.372 [2024-09-29 16:45:29.585699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.372 qpair failed and we were unable to recover it. 00:37:29.372 [2024-09-29 16:45:29.585867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.372 [2024-09-29 16:45:29.585900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.372 qpair failed and we were unable to recover it. 00:37:29.372 [2024-09-29 16:45:29.586025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.372 [2024-09-29 16:45:29.586061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.372 qpair failed and we were unable to recover it. 
00:37:29.372 [2024-09-29 16:45:29.586230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.372 [2024-09-29 16:45:29.586262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.372 qpair failed and we were unable to recover it. 00:37:29.372 [2024-09-29 16:45:29.586407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.372 [2024-09-29 16:45:29.586441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.372 qpair failed and we were unable to recover it. 00:37:29.372 [2024-09-29 16:45:29.586609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.372 [2024-09-29 16:45:29.586663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.372 qpair failed and we were unable to recover it. 00:37:29.372 [2024-09-29 16:45:29.586846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.372 [2024-09-29 16:45:29.586882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.372 qpair failed and we were unable to recover it. 00:37:29.372 [2024-09-29 16:45:29.587004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.372 [2024-09-29 16:45:29.587040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.372 qpair failed and we were unable to recover it. 
00:37:29.372 [2024-09-29 16:45:29.587183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.372 [2024-09-29 16:45:29.587235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.372 qpair failed and we were unable to recover it. 00:37:29.373 [2024-09-29 16:45:29.587415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.373 [2024-09-29 16:45:29.587453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.373 qpair failed and we were unable to recover it. 00:37:29.373 [2024-09-29 16:45:29.587615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.373 [2024-09-29 16:45:29.587653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.373 qpair failed and we were unable to recover it. 00:37:29.373 [2024-09-29 16:45:29.587835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.373 [2024-09-29 16:45:29.587869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.373 qpair failed and we were unable to recover it. 00:37:29.373 [2024-09-29 16:45:29.588011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.373 [2024-09-29 16:45:29.588044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.373 qpair failed and we were unable to recover it. 
00:37:29.373 [2024-09-29 16:45:29.588180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.373 [2024-09-29 16:45:29.588212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.373 qpair failed and we were unable to recover it. 00:37:29.373 [2024-09-29 16:45:29.588335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.373 [2024-09-29 16:45:29.588369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.373 qpair failed and we were unable to recover it. 00:37:29.373 [2024-09-29 16:45:29.588538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.373 [2024-09-29 16:45:29.588589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.373 qpair failed and we were unable to recover it. 00:37:29.373 [2024-09-29 16:45:29.588755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.373 [2024-09-29 16:45:29.588788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.373 qpair failed and we were unable to recover it. 00:37:29.373 [2024-09-29 16:45:29.588917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.373 [2024-09-29 16:45:29.588970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.373 qpair failed and we were unable to recover it. 
00:37:29.373 [2024-09-29 16:45:29.589122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.373 [2024-09-29 16:45:29.589172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.373 qpair failed and we were unable to recover it. 00:37:29.373 [2024-09-29 16:45:29.589339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.373 [2024-09-29 16:45:29.589372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.373 qpair failed and we were unable to recover it. 00:37:29.373 [2024-09-29 16:45:29.589521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.373 [2024-09-29 16:45:29.589555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.373 qpair failed and we were unable to recover it. 00:37:29.373 [2024-09-29 16:45:29.589704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.373 [2024-09-29 16:45:29.589738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.373 qpair failed and we were unable to recover it. 00:37:29.373 [2024-09-29 16:45:29.589852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.373 [2024-09-29 16:45:29.589885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.373 qpair failed and we were unable to recover it. 
00:37:29.373 [2024-09-29 16:45:29.589998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.373 [2024-09-29 16:45:29.590033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.373 qpair failed and we were unable to recover it. 00:37:29.373 [2024-09-29 16:45:29.590169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.373 [2024-09-29 16:45:29.590205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.373 qpair failed and we were unable to recover it. 00:37:29.373 [2024-09-29 16:45:29.590374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.373 [2024-09-29 16:45:29.590406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.373 qpair failed and we were unable to recover it. 00:37:29.373 [2024-09-29 16:45:29.590525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.373 [2024-09-29 16:45:29.590558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.373 qpair failed and we were unable to recover it. 00:37:29.373 [2024-09-29 16:45:29.590705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.373 [2024-09-29 16:45:29.590738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.373 qpair failed and we were unable to recover it. 
00:37:29.373 [2024-09-29 16:45:29.590875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.373 [2024-09-29 16:45:29.590907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.373 qpair failed and we were unable to recover it. 00:37:29.373 [2024-09-29 16:45:29.591091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.373 [2024-09-29 16:45:29.591127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.373 qpair failed and we were unable to recover it. 00:37:29.373 [2024-09-29 16:45:29.591310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.373 [2024-09-29 16:45:29.591347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.373 qpair failed and we were unable to recover it. 00:37:29.373 [2024-09-29 16:45:29.591502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.373 [2024-09-29 16:45:29.591534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.373 qpair failed and we were unable to recover it. 00:37:29.373 [2024-09-29 16:45:29.591683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.373 [2024-09-29 16:45:29.591737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.373 qpair failed and we were unable to recover it. 
00:37:29.373 [2024-09-29 16:45:29.591931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.373 [2024-09-29 16:45:29.591968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.373 qpair failed and we were unable to recover it. 00:37:29.373 [2024-09-29 16:45:29.592165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.373 [2024-09-29 16:45:29.592199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.373 qpair failed and we were unable to recover it. 00:37:29.373 [2024-09-29 16:45:29.592314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.373 [2024-09-29 16:45:29.592348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.373 qpair failed and we were unable to recover it. 00:37:29.373 [2024-09-29 16:45:29.592486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.373 [2024-09-29 16:45:29.592520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.373 qpair failed and we were unable to recover it. 00:37:29.373 [2024-09-29 16:45:29.592688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.373 [2024-09-29 16:45:29.592722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.373 qpair failed and we were unable to recover it. 
00:37:29.373 [2024-09-29 16:45:29.592859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.373 [2024-09-29 16:45:29.592899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.373 qpair failed and we were unable to recover it. 00:37:29.374 [2024-09-29 16:45:29.593051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.374 [2024-09-29 16:45:29.593087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.374 qpair failed and we were unable to recover it. 00:37:29.374 [2024-09-29 16:45:29.593249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.374 [2024-09-29 16:45:29.593287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.374 qpair failed and we were unable to recover it. 00:37:29.374 [2024-09-29 16:45:29.593404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.374 [2024-09-29 16:45:29.593456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.374 qpair failed and we were unable to recover it. 00:37:29.374 [2024-09-29 16:45:29.593584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.374 [2024-09-29 16:45:29.593621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.374 qpair failed and we were unable to recover it. 
00:37:29.374 [2024-09-29 16:45:29.593794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.374 [2024-09-29 16:45:29.593828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.374 qpair failed and we were unable to recover it. 00:37:29.374 [2024-09-29 16:45:29.593988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.374 [2024-09-29 16:45:29.594025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.374 qpair failed and we were unable to recover it. 00:37:29.374 [2024-09-29 16:45:29.594212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.374 [2024-09-29 16:45:29.594249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.374 qpair failed and we were unable to recover it. 00:37:29.374 [2024-09-29 16:45:29.594418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.374 [2024-09-29 16:45:29.594452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.374 qpair failed and we were unable to recover it. 00:37:29.374 [2024-09-29 16:45:29.594615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.374 [2024-09-29 16:45:29.594652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.374 qpair failed and we were unable to recover it. 
00:37:29.374 [2024-09-29 16:45:29.594827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.374 [2024-09-29 16:45:29.594860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.374 qpair failed and we were unable to recover it. 00:37:29.374 [2024-09-29 16:45:29.594981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.374 [2024-09-29 16:45:29.595014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.374 qpair failed and we were unable to recover it. 00:37:29.374 [2024-09-29 16:45:29.595201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.374 [2024-09-29 16:45:29.595238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.374 qpair failed and we were unable to recover it. 00:37:29.374 [2024-09-29 16:45:29.595369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.374 [2024-09-29 16:45:29.595404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.374 qpair failed and we were unable to recover it. 00:37:29.374 [2024-09-29 16:45:29.595546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.374 [2024-09-29 16:45:29.595579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.374 qpair failed and we were unable to recover it. 
00:37:29.374 [2024-09-29 16:45:29.595721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.374 [2024-09-29 16:45:29.595770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.374 qpair failed and we were unable to recover it. 00:37:29.374 [2024-09-29 16:45:29.595920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.374 [2024-09-29 16:45:29.595956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.374 qpair failed and we were unable to recover it. 00:37:29.374 [2024-09-29 16:45:29.596146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.374 [2024-09-29 16:45:29.596179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.374 qpair failed and we were unable to recover it. 00:37:29.374 [2024-09-29 16:45:29.596290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.374 [2024-09-29 16:45:29.596341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.374 qpair failed and we were unable to recover it. 00:37:29.374 [2024-09-29 16:45:29.596499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.374 [2024-09-29 16:45:29.596548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.374 qpair failed and we were unable to recover it. 
00:37:29.374 [2024-09-29 16:45:29.596687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.374 [2024-09-29 16:45:29.596719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.374 qpair failed and we were unable to recover it. 00:37:29.374 [2024-09-29 16:45:29.596861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.374 [2024-09-29 16:45:29.596912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.374 qpair failed and we were unable to recover it. 00:37:29.374 [2024-09-29 16:45:29.597039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.374 [2024-09-29 16:45:29.597075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.374 qpair failed and we were unable to recover it. 00:37:29.374 [2024-09-29 16:45:29.597241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.374 [2024-09-29 16:45:29.597273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.374 qpair failed and we were unable to recover it. 00:37:29.374 [2024-09-29 16:45:29.597430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.374 [2024-09-29 16:45:29.597466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.374 qpair failed and we were unable to recover it. 
00:37:29.374 [2024-09-29 16:45:29.597615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.374 [2024-09-29 16:45:29.597651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.374 qpair failed and we were unable to recover it. 00:37:29.374 [2024-09-29 16:45:29.597825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.374 [2024-09-29 16:45:29.597859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.374 qpair failed and we were unable to recover it. 00:37:29.374 [2024-09-29 16:45:29.598018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.374 [2024-09-29 16:45:29.598067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.374 qpair failed and we were unable to recover it. 00:37:29.374 [2024-09-29 16:45:29.598193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.374 [2024-09-29 16:45:29.598229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.374 qpair failed and we were unable to recover it. 00:37:29.374 [2024-09-29 16:45:29.598395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.374 [2024-09-29 16:45:29.598443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.374 qpair failed and we were unable to recover it. 
00:37:29.374 [2024-09-29 16:45:29.598625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.374 [2024-09-29 16:45:29.598661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.374 qpair failed and we were unable to recover it. 00:37:29.374 [2024-09-29 16:45:29.598792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.374 [2024-09-29 16:45:29.598827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.374 qpair failed and we were unable to recover it. 00:37:29.374 [2024-09-29 16:45:29.598983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.374 [2024-09-29 16:45:29.599021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.375 qpair failed and we were unable to recover it. 00:37:29.375 [2024-09-29 16:45:29.599265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.375 [2024-09-29 16:45:29.599320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.375 qpair failed and we were unable to recover it. 00:37:29.375 [2024-09-29 16:45:29.599509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.375 [2024-09-29 16:45:29.599562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.375 qpair failed and we were unable to recover it. 
00:37:29.375 [2024-09-29 16:45:29.599701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.375 [2024-09-29 16:45:29.599737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.375 qpair failed and we were unable to recover it. 00:37:29.375 [2024-09-29 16:45:29.599897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.375 [2024-09-29 16:45:29.599949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.375 qpair failed and we were unable to recover it. 00:37:29.375 [2024-09-29 16:45:29.600083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.375 [2024-09-29 16:45:29.600122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.375 qpair failed and we were unable to recover it. 00:37:29.375 [2024-09-29 16:45:29.600253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.375 [2024-09-29 16:45:29.600286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.375 qpair failed and we were unable to recover it. 00:37:29.375 [2024-09-29 16:45:29.600524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.375 [2024-09-29 16:45:29.600581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.375 qpair failed and we were unable to recover it. 
00:37:29.375 [2024-09-29 16:45:29.600746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.375 [2024-09-29 16:45:29.600779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.375 qpair failed and we were unable to recover it. 00:37:29.375 [2024-09-29 16:45:29.600912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.375 [2024-09-29 16:45:29.600978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.375 qpair failed and we were unable to recover it. 00:37:29.375 [2024-09-29 16:45:29.601148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.375 [2024-09-29 16:45:29.601188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.375 qpair failed and we were unable to recover it. 00:37:29.375 [2024-09-29 16:45:29.601383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.375 [2024-09-29 16:45:29.601421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.375 qpair failed and we were unable to recover it. 00:37:29.375 [2024-09-29 16:45:29.601600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.375 [2024-09-29 16:45:29.601637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.375 qpair failed and we were unable to recover it. 
00:37:29.375 [2024-09-29 16:45:29.601827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.375 [2024-09-29 16:45:29.601875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.375 qpair failed and we were unable to recover it. 00:37:29.375 [2024-09-29 16:45:29.602046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.375 [2024-09-29 16:45:29.602100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.375 qpair failed and we were unable to recover it. 00:37:29.375 [2024-09-29 16:45:29.602337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.375 [2024-09-29 16:45:29.602396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.375 qpair failed and we were unable to recover it. 00:37:29.375 [2024-09-29 16:45:29.602522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.375 [2024-09-29 16:45:29.602557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.375 qpair failed and we were unable to recover it. 00:37:29.375 [2024-09-29 16:45:29.602681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.375 [2024-09-29 16:45:29.602716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.375 qpair failed and we were unable to recover it. 
00:37:29.375 [2024-09-29 16:45:29.602902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.375 [2024-09-29 16:45:29.602955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.375 qpair failed and we were unable to recover it.
00:37:29.375 [2024-09-29 16:45:29.603094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.375 [2024-09-29 16:45:29.603134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.375 qpair failed and we were unable to recover it.
00:37:29.375 [2024-09-29 16:45:29.603294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.375 [2024-09-29 16:45:29.603331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.375 qpair failed and we were unable to recover it.
00:37:29.375 [2024-09-29 16:45:29.603522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.375 [2024-09-29 16:45:29.603557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.375 qpair failed and we were unable to recover it.
00:37:29.375 [2024-09-29 16:45:29.603719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.375 [2024-09-29 16:45:29.603762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.375 qpair failed and we were unable to recover it.
00:37:29.375 [2024-09-29 16:45:29.603922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.375 [2024-09-29 16:45:29.603959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.375 qpair failed and we were unable to recover it.
00:37:29.375 [2024-09-29 16:45:29.604129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.375 [2024-09-29 16:45:29.604182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.375 qpair failed and we were unable to recover it.
00:37:29.375 [2024-09-29 16:45:29.604322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.375 [2024-09-29 16:45:29.604376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.375 qpair failed and we were unable to recover it.
00:37:29.375 [2024-09-29 16:45:29.604502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.375 [2024-09-29 16:45:29.604536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.375 qpair failed and we were unable to recover it.
00:37:29.375 [2024-09-29 16:45:29.604654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.375 [2024-09-29 16:45:29.604695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.375 qpair failed and we were unable to recover it.
00:37:29.375 [2024-09-29 16:45:29.604810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.375 [2024-09-29 16:45:29.604844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.375 qpair failed and we were unable to recover it.
00:37:29.375 [2024-09-29 16:45:29.605008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.375 [2024-09-29 16:45:29.605058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.375 qpair failed and we were unable to recover it.
00:37:29.375 [2024-09-29 16:45:29.605195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.375 [2024-09-29 16:45:29.605248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.375 qpair failed and we were unable to recover it.
00:37:29.375 [2024-09-29 16:45:29.605413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.376 [2024-09-29 16:45:29.605460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.376 qpair failed and we were unable to recover it.
00:37:29.376 [2024-09-29 16:45:29.605578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.376 [2024-09-29 16:45:29.605614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.376 qpair failed and we were unable to recover it.
00:37:29.376 [2024-09-29 16:45:29.605792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.376 [2024-09-29 16:45:29.605827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.376 qpair failed and we were unable to recover it.
00:37:29.376 [2024-09-29 16:45:29.606000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.376 [2024-09-29 16:45:29.606037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.376 qpair failed and we were unable to recover it.
00:37:29.376 [2024-09-29 16:45:29.606190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.376 [2024-09-29 16:45:29.606227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.376 qpair failed and we were unable to recover it.
00:37:29.376 [2024-09-29 16:45:29.606373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.376 [2024-09-29 16:45:29.606410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.376 qpair failed and we were unable to recover it.
00:37:29.376 [2024-09-29 16:45:29.606585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.376 [2024-09-29 16:45:29.606625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.376 qpair failed and we were unable to recover it.
00:37:29.376 [2024-09-29 16:45:29.606800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.376 [2024-09-29 16:45:29.606852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.376 qpair failed and we were unable to recover it.
00:37:29.376 [2024-09-29 16:45:29.607067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.376 [2024-09-29 16:45:29.607120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.376 qpair failed and we were unable to recover it.
00:37:29.376 [2024-09-29 16:45:29.607325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.376 [2024-09-29 16:45:29.607383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.376 qpair failed and we were unable to recover it.
00:37:29.376 [2024-09-29 16:45:29.607572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.376 [2024-09-29 16:45:29.607605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.376 qpair failed and we were unable to recover it.
00:37:29.376 [2024-09-29 16:45:29.607749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.376 [2024-09-29 16:45:29.607783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.376 qpair failed and we were unable to recover it.
00:37:29.376 [2024-09-29 16:45:29.607939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.376 [2024-09-29 16:45:29.607996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.376 qpair failed and we were unable to recover it.
00:37:29.376 [2024-09-29 16:45:29.608131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.376 [2024-09-29 16:45:29.608167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.376 qpair failed and we were unable to recover it.
00:37:29.376 [2024-09-29 16:45:29.608305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.376 [2024-09-29 16:45:29.608356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.376 qpair failed and we were unable to recover it.
00:37:29.376 [2024-09-29 16:45:29.608483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.376 [2024-09-29 16:45:29.608520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.376 qpair failed and we were unable to recover it.
00:37:29.376 [2024-09-29 16:45:29.608687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.376 [2024-09-29 16:45:29.608720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.376 qpair failed and we were unable to recover it.
00:37:29.376 [2024-09-29 16:45:29.608857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.376 [2024-09-29 16:45:29.608891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.376 qpair failed and we were unable to recover it.
00:37:29.376 [2024-09-29 16:45:29.609078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.376 [2024-09-29 16:45:29.609114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.376 qpair failed and we were unable to recover it.
00:37:29.376 [2024-09-29 16:45:29.609231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.376 [2024-09-29 16:45:29.609267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.376 qpair failed and we were unable to recover it.
00:37:29.376 [2024-09-29 16:45:29.609460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.376 [2024-09-29 16:45:29.609496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.376 qpair failed and we were unable to recover it.
00:37:29.376 [2024-09-29 16:45:29.609669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.376 [2024-09-29 16:45:29.609744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.376 qpair failed and we were unable to recover it.
00:37:29.376 [2024-09-29 16:45:29.609896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.376 [2024-09-29 16:45:29.609948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.376 qpair failed and we were unable to recover it.
00:37:29.376 [2024-09-29 16:45:29.610230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.376 [2024-09-29 16:45:29.610288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.376 qpair failed and we were unable to recover it.
00:37:29.376 [2024-09-29 16:45:29.610477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.376 [2024-09-29 16:45:29.610514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.376 qpair failed and we were unable to recover it.
00:37:29.376 [2024-09-29 16:45:29.610662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.376 [2024-09-29 16:45:29.610720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.376 qpair failed and we were unable to recover it.
00:37:29.376 [2024-09-29 16:45:29.610863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.376 [2024-09-29 16:45:29.610898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.376 qpair failed and we were unable to recover it.
00:37:29.376 [2024-09-29 16:45:29.611059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.376 [2024-09-29 16:45:29.611096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.376 qpair failed and we were unable to recover it.
00:37:29.376 [2024-09-29 16:45:29.611252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.377 [2024-09-29 16:45:29.611289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.377 qpair failed and we were unable to recover it.
00:37:29.377 [2024-09-29 16:45:29.611507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.377 [2024-09-29 16:45:29.611544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.377 qpair failed and we were unable to recover it.
00:37:29.377 [2024-09-29 16:45:29.611750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.377 [2024-09-29 16:45:29.611785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.377 qpair failed and we were unable to recover it.
00:37:29.377 [2024-09-29 16:45:29.611907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.377 [2024-09-29 16:45:29.611962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.377 qpair failed and we were unable to recover it.
00:37:29.377 [2024-09-29 16:45:29.612148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.377 [2024-09-29 16:45:29.612186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.377 qpair failed and we were unable to recover it.
00:37:29.377 [2024-09-29 16:45:29.612455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.377 [2024-09-29 16:45:29.612514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.377 qpair failed and we were unable to recover it.
00:37:29.377 [2024-09-29 16:45:29.612653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.377 [2024-09-29 16:45:29.612705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.377 qpair failed and we were unable to recover it.
00:37:29.377 [2024-09-29 16:45:29.612849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.377 [2024-09-29 16:45:29.612882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.377 qpair failed and we were unable to recover it.
00:37:29.377 [2024-09-29 16:45:29.612997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.377 [2024-09-29 16:45:29.613048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.377 qpair failed and we were unable to recover it.
00:37:29.377 [2024-09-29 16:45:29.613203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.377 [2024-09-29 16:45:29.613240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.377 qpair failed and we were unable to recover it.
00:37:29.377 [2024-09-29 16:45:29.613400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.377 [2024-09-29 16:45:29.613436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.377 qpair failed and we were unable to recover it.
00:37:29.377 [2024-09-29 16:45:29.613625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.377 [2024-09-29 16:45:29.613664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.377 qpair failed and we were unable to recover it.
00:37:29.377 [2024-09-29 16:45:29.613806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.377 [2024-09-29 16:45:29.613840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.377 qpair failed and we were unable to recover it.
00:37:29.377 [2024-09-29 16:45:29.614004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.377 [2024-09-29 16:45:29.614042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.377 qpair failed and we were unable to recover it.
00:37:29.377 [2024-09-29 16:45:29.614221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.377 [2024-09-29 16:45:29.614276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.377 qpair failed and we were unable to recover it.
00:37:29.377 [2024-09-29 16:45:29.614425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.377 [2024-09-29 16:45:29.614462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.377 qpair failed and we were unable to recover it.
00:37:29.377 [2024-09-29 16:45:29.614626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.377 [2024-09-29 16:45:29.614659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.377 qpair failed and we were unable to recover it.
00:37:29.377 [2024-09-29 16:45:29.614809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.377 [2024-09-29 16:45:29.614843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.377 qpair failed and we were unable to recover it.
00:37:29.377 [2024-09-29 16:45:29.614988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.377 [2024-09-29 16:45:29.615044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.377 qpair failed and we were unable to recover it.
00:37:29.377 [2024-09-29 16:45:29.615211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.377 [2024-09-29 16:45:29.615285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.377 qpair failed and we were unable to recover it.
00:37:29.377 [2024-09-29 16:45:29.615419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.377 [2024-09-29 16:45:29.615456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.377 qpair failed and we were unable to recover it.
00:37:29.377 [2024-09-29 16:45:29.615638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.377 [2024-09-29 16:45:29.615683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.377 qpair failed and we were unable to recover it.
00:37:29.377 [2024-09-29 16:45:29.615868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.377 [2024-09-29 16:45:29.615915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.377 qpair failed and we were unable to recover it.
00:37:29.377 [2024-09-29 16:45:29.616106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.377 [2024-09-29 16:45:29.616141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.377 qpair failed and we were unable to recover it.
00:37:29.377 [2024-09-29 16:45:29.616288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.377 [2024-09-29 16:45:29.616325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.377 qpair failed and we were unable to recover it.
00:37:29.377 [2024-09-29 16:45:29.616505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.377 [2024-09-29 16:45:29.616542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.377 qpair failed and we were unable to recover it.
00:37:29.377 [2024-09-29 16:45:29.616713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.377 [2024-09-29 16:45:29.616747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.377 qpair failed and we were unable to recover it.
00:37:29.377 [2024-09-29 16:45:29.616851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.377 [2024-09-29 16:45:29.616892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.377 qpair failed and we were unable to recover it.
00:37:29.377 [2024-09-29 16:45:29.617007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.377 [2024-09-29 16:45:29.617041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.377 qpair failed and we were unable to recover it.
00:37:29.377 [2024-09-29 16:45:29.617281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.377 [2024-09-29 16:45:29.617338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.377 qpair failed and we were unable to recover it.
00:37:29.377 [2024-09-29 16:45:29.617491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.378 [2024-09-29 16:45:29.617527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.378 qpair failed and we were unable to recover it.
00:37:29.378 [2024-09-29 16:45:29.617695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.378 [2024-09-29 16:45:29.617744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.378 qpair failed and we were unable to recover it.
00:37:29.378 [2024-09-29 16:45:29.617918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.378 [2024-09-29 16:45:29.617976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.378 qpair failed and we were unable to recover it.
00:37:29.378 [2024-09-29 16:45:29.618143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.378 [2024-09-29 16:45:29.618196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.378 qpair failed and we were unable to recover it.
00:37:29.378 [2024-09-29 16:45:29.618309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.378 [2024-09-29 16:45:29.618343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.378 qpair failed and we were unable to recover it.
00:37:29.378 [2024-09-29 16:45:29.618484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.378 [2024-09-29 16:45:29.618517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.378 qpair failed and we were unable to recover it.
00:37:29.378 [2024-09-29 16:45:29.618654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.378 [2024-09-29 16:45:29.618694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.378 qpair failed and we were unable to recover it.
00:37:29.378 [2024-09-29 16:45:29.618820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.378 [2024-09-29 16:45:29.618853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.378 qpair failed and we were unable to recover it.
00:37:29.378 [2024-09-29 16:45:29.618985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.378 [2024-09-29 16:45:29.619021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.378 qpair failed and we were unable to recover it.
00:37:29.378 [2024-09-29 16:45:29.619179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.378 [2024-09-29 16:45:29.619215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.378 qpair failed and we were unable to recover it.
00:37:29.378 [2024-09-29 16:45:29.619375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.378 [2024-09-29 16:45:29.619411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.378 qpair failed and we were unable to recover it.
00:37:29.378 [2024-09-29 16:45:29.619575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.378 [2024-09-29 16:45:29.619608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.378 qpair failed and we were unable to recover it.
00:37:29.378 [2024-09-29 16:45:29.619751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.378 [2024-09-29 16:45:29.619784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.378 qpair failed and we were unable to recover it.
00:37:29.378 [2024-09-29 16:45:29.619928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.378 [2024-09-29 16:45:29.619961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.378 qpair failed and we were unable to recover it.
00:37:29.378 [2024-09-29 16:45:29.620098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.378 [2024-09-29 16:45:29.620152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.378 qpair failed and we were unable to recover it.
00:37:29.378 [2024-09-29 16:45:29.620299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.378 [2024-09-29 16:45:29.620351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.378 qpair failed and we were unable to recover it.
00:37:29.378 [2024-09-29 16:45:29.620497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.378 [2024-09-29 16:45:29.620532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.378 qpair failed and we were unable to recover it.
00:37:29.378 [2024-09-29 16:45:29.620642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.378 [2024-09-29 16:45:29.620684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.378 qpair failed and we were unable to recover it.
00:37:29.378 [2024-09-29 16:45:29.620812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.378 [2024-09-29 16:45:29.620844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.378 qpair failed and we were unable to recover it.
00:37:29.378 [2024-09-29 16:45:29.621020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.378 [2024-09-29 16:45:29.621067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.378 qpair failed and we were unable to recover it.
00:37:29.378 [2024-09-29 16:45:29.621214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.378 [2024-09-29 16:45:29.621250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.378 qpair failed and we were unable to recover it.
00:37:29.378 [2024-09-29 16:45:29.621387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.378 [2024-09-29 16:45:29.621422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.378 qpair failed and we were unable to recover it.
00:37:29.378 [2024-09-29 16:45:29.621564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.378 [2024-09-29 16:45:29.621597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.378 qpair failed and we were unable to recover it.
00:37:29.378 [2024-09-29 16:45:29.621717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.378 [2024-09-29 16:45:29.621751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.378 qpair failed and we were unable to recover it.
00:37:29.378 [2024-09-29 16:45:29.621867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.378 [2024-09-29 16:45:29.621901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.378 qpair failed and we were unable to recover it.
00:37:29.378 [2024-09-29 16:45:29.622065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.378 [2024-09-29 16:45:29.622102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.378 qpair failed and we were unable to recover it.
00:37:29.378 [2024-09-29 16:45:29.622374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.378 [2024-09-29 16:45:29.622413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.378 qpair failed and we were unable to recover it.
00:37:29.378 [2024-09-29 16:45:29.622574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.378 [2024-09-29 16:45:29.622610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.378 qpair failed and we were unable to recover it.
00:37:29.378 [2024-09-29 16:45:29.622789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.378 [2024-09-29 16:45:29.622829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.378 qpair failed and we were unable to recover it.
00:37:29.378 [2024-09-29 16:45:29.622964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.378 [2024-09-29 16:45:29.623012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.378 qpair failed and we were unable to recover it.
00:37:29.378 [2024-09-29 16:45:29.623160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.378 [2024-09-29 16:45:29.623213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.378 qpair failed and we were unable to recover it.
00:37:29.378 [2024-09-29 16:45:29.623409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.378 [2024-09-29 16:45:29.623461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.378 qpair failed and we were unable to recover it.
00:37:29.378 [2024-09-29 16:45:29.623614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.378 [2024-09-29 16:45:29.623648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.378 qpair failed and we were unable to recover it.
00:37:29.378 [2024-09-29 16:45:29.623846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.379 [2024-09-29 16:45:29.623897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.379 qpair failed and we were unable to recover it.
00:37:29.379 [2024-09-29 16:45:29.624032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.379 [2024-09-29 16:45:29.624109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.379 qpair failed and we were unable to recover it.
00:37:29.379 [2024-09-29 16:45:29.624304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.379 [2024-09-29 16:45:29.624356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.379 qpair failed and we were unable to recover it.
00:37:29.379 [2024-09-29 16:45:29.624466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.379 [2024-09-29 16:45:29.624501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.379 qpair failed and we were unable to recover it.
00:37:29.379 [2024-09-29 16:45:29.624645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.379 [2024-09-29 16:45:29.624699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.379 qpair failed and we were unable to recover it.
00:37:29.379 [2024-09-29 16:45:29.624886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.379 [2024-09-29 16:45:29.624920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.379 qpair failed and we were unable to recover it.
00:37:29.379 [2024-09-29 16:45:29.625058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.379 [2024-09-29 16:45:29.625094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.379 qpair failed and we were unable to recover it.
00:37:29.379 [2024-09-29 16:45:29.625304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.379 [2024-09-29 16:45:29.625340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.379 qpair failed and we were unable to recover it.
00:37:29.379 [2024-09-29 16:45:29.625488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.379 [2024-09-29 16:45:29.625525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.379 qpair failed and we were unable to recover it.
00:37:29.379 [2024-09-29 16:45:29.625715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.379 [2024-09-29 16:45:29.625750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.379 qpair failed and we were unable to recover it.
00:37:29.379 [2024-09-29 16:45:29.625894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.379 [2024-09-29 16:45:29.625959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.379 qpair failed and we were unable to recover it.
00:37:29.379 [2024-09-29 16:45:29.626107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.379 [2024-09-29 16:45:29.626158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.379 qpair failed and we were unable to recover it.
00:37:29.379 [2024-09-29 16:45:29.626326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.379 [2024-09-29 16:45:29.626366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.379 qpair failed and we were unable to recover it.
00:37:29.379 [2024-09-29 16:45:29.626534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.379 [2024-09-29 16:45:29.626567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.379 qpair failed and we were unable to recover it.
00:37:29.379 [2024-09-29 16:45:29.626710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.379 [2024-09-29 16:45:29.626744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.379 qpair failed and we were unable to recover it.
00:37:29.379 [2024-09-29 16:45:29.626892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.379 [2024-09-29 16:45:29.626926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.379 qpair failed and we were unable to recover it.
00:37:29.379 [2024-09-29 16:45:29.627073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.379 [2024-09-29 16:45:29.627124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.379 qpair failed and we were unable to recover it.
00:37:29.379 [2024-09-29 16:45:29.627282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.379 [2024-09-29 16:45:29.627321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.379 qpair failed and we were unable to recover it.
00:37:29.379 [2024-09-29 16:45:29.627454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.379 [2024-09-29 16:45:29.627491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.379 qpair failed and we were unable to recover it.
00:37:29.379 [2024-09-29 16:45:29.627649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.379 [2024-09-29 16:45:29.627688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.379 qpair failed and we were unable to recover it.
00:37:29.379 [2024-09-29 16:45:29.627831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.379 [2024-09-29 16:45:29.627864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.379 qpair failed and we were unable to recover it.
00:37:29.379 [2024-09-29 16:45:29.628014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.379 [2024-09-29 16:45:29.628047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.379 qpair failed and we were unable to recover it.
00:37:29.379 [2024-09-29 16:45:29.628256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.379 [2024-09-29 16:45:29.628293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.379 qpair failed and we were unable to recover it.
00:37:29.379 [2024-09-29 16:45:29.628441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.379 [2024-09-29 16:45:29.628478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.379 qpair failed and we were unable to recover it.
00:37:29.379 [2024-09-29 16:45:29.628615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.379 [2024-09-29 16:45:29.628648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.379 qpair failed and we were unable to recover it.
00:37:29.379 [2024-09-29 16:45:29.628775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.379 [2024-09-29 16:45:29.628809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.379 qpair failed and we were unable to recover it.
00:37:29.379 [2024-09-29 16:45:29.628951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.379 [2024-09-29 16:45:29.629003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.379 qpair failed and we were unable to recover it.
00:37:29.379 [2024-09-29 16:45:29.629169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.379 [2024-09-29 16:45:29.629220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.379 qpair failed and we were unable to recover it.
00:37:29.379 [2024-09-29 16:45:29.629434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.379 [2024-09-29 16:45:29.629472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.379 qpair failed and we were unable to recover it.
00:37:29.379 [2024-09-29 16:45:29.629629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.379 [2024-09-29 16:45:29.629668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.379 qpair failed and we were unable to recover it.
00:37:29.379 [2024-09-29 16:45:29.629819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.379 [2024-09-29 16:45:29.629853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.379 qpair failed and we were unable to recover it.
00:37:29.379 [2024-09-29 16:45:29.629963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.379 [2024-09-29 16:45:29.629996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.379 qpair failed and we were unable to recover it.
00:37:29.379 [2024-09-29 16:45:29.630109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.379 [2024-09-29 16:45:29.630142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.379 qpair failed and we were unable to recover it.
00:37:29.379 [2024-09-29 16:45:29.630286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.379 [2024-09-29 16:45:29.630323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.379 qpair failed and we were unable to recover it.
00:37:29.380 [2024-09-29 16:45:29.630576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.380 [2024-09-29 16:45:29.630609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.380 qpair failed and we were unable to recover it.
00:37:29.380 [2024-09-29 16:45:29.630732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.380 [2024-09-29 16:45:29.630772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.380 qpair failed and we were unable to recover it.
00:37:29.380 [2024-09-29 16:45:29.630921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.380 [2024-09-29 16:45:29.630973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.380 qpair failed and we were unable to recover it.
00:37:29.380 [2024-09-29 16:45:29.631150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.380 [2024-09-29 16:45:29.631188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.380 qpair failed and we were unable to recover it.
00:37:29.380 [2024-09-29 16:45:29.631340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.380 [2024-09-29 16:45:29.631377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.380 qpair failed and we were unable to recover it.
00:37:29.380 [2024-09-29 16:45:29.631495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.380 [2024-09-29 16:45:29.631532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.380 qpair failed and we were unable to recover it.
00:37:29.380 [2024-09-29 16:45:29.631687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.380 [2024-09-29 16:45:29.631735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.380 qpair failed and we were unable to recover it.
00:37:29.380 [2024-09-29 16:45:29.631886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.380 [2024-09-29 16:45:29.631922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.380 qpair failed and we were unable to recover it.
00:37:29.380 [2024-09-29 16:45:29.632115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.380 [2024-09-29 16:45:29.632168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.380 qpair failed and we were unable to recover it.
00:37:29.380 [2024-09-29 16:45:29.632330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.380 [2024-09-29 16:45:29.632384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.380 qpair failed and we were unable to recover it.
00:37:29.380 [2024-09-29 16:45:29.632496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.380 [2024-09-29 16:45:29.632530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.380 qpair failed and we were unable to recover it.
00:37:29.380 [2024-09-29 16:45:29.632645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.380 [2024-09-29 16:45:29.632688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.380 qpair failed and we were unable to recover it.
00:37:29.380 [2024-09-29 16:45:29.632872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.380 [2024-09-29 16:45:29.632906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.380 qpair failed and we were unable to recover it.
00:37:29.380 [2024-09-29 16:45:29.633015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.380 [2024-09-29 16:45:29.633049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.380 qpair failed and we were unable to recover it.
00:37:29.380 [2024-09-29 16:45:29.633190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.380 [2024-09-29 16:45:29.633224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.380 qpair failed and we were unable to recover it.
00:37:29.380 [2024-09-29 16:45:29.633386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.380 [2024-09-29 16:45:29.633421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.380 qpair failed and we were unable to recover it.
00:37:29.380 [2024-09-29 16:45:29.633564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.380 [2024-09-29 16:45:29.633612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.380 qpair failed and we were unable to recover it.
00:37:29.380 [2024-09-29 16:45:29.633777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.380 [2024-09-29 16:45:29.633813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.380 qpair failed and we were unable to recover it.
00:37:29.380 [2024-09-29 16:45:29.633958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.380 [2024-09-29 16:45:29.633992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.380 qpair failed and we were unable to recover it.
00:37:29.380 [2024-09-29 16:45:29.634137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.380 [2024-09-29 16:45:29.634170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.380 qpair failed and we were unable to recover it.
00:37:29.380 [2024-09-29 16:45:29.634321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.380 [2024-09-29 16:45:29.634354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.380 qpair failed and we were unable to recover it.
00:37:29.380 [2024-09-29 16:45:29.634506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.380 [2024-09-29 16:45:29.634542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.380 qpair failed and we were unable to recover it.
00:37:29.380 [2024-09-29 16:45:29.634690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.380 [2024-09-29 16:45:29.634724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.380 qpair failed and we were unable to recover it.
00:37:29.380 [2024-09-29 16:45:29.634854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.380 [2024-09-29 16:45:29.634892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.380 qpair failed and we were unable to recover it.
00:37:29.380 [2024-09-29 16:45:29.635058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.380 [2024-09-29 16:45:29.635109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.380 qpair failed and we were unable to recover it.
00:37:29.380 [2024-09-29 16:45:29.635377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.380 [2024-09-29 16:45:29.635446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.380 qpair failed and we were unable to recover it.
00:37:29.380 [2024-09-29 16:45:29.635652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.380 [2024-09-29 16:45:29.635701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.380 qpair failed and we were unable to recover it.
00:37:29.380 [2024-09-29 16:45:29.635872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.380 [2024-09-29 16:45:29.635926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.380 qpair failed and we were unable to recover it.
00:37:29.380 [2024-09-29 16:45:29.636130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.380 [2024-09-29 16:45:29.636180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.380 qpair failed and we were unable to recover it.
00:37:29.380 [2024-09-29 16:45:29.636345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.380 [2024-09-29 16:45:29.636397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.380 qpair failed and we were unable to recover it.
00:37:29.380 [2024-09-29 16:45:29.636538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.380 [2024-09-29 16:45:29.636571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.380 qpair failed and we were unable to recover it.
00:37:29.380 [2024-09-29 16:45:29.636742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.380 [2024-09-29 16:45:29.636794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.381 qpair failed and we were unable to recover it.
00:37:29.381 [2024-09-29 16:45:29.636936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.381 [2024-09-29 16:45:29.636970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.381 qpair failed and we were unable to recover it.
00:37:29.381 [2024-09-29 16:45:29.637139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.381 [2024-09-29 16:45:29.637174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.381 qpair failed and we were unable to recover it.
00:37:29.381 [2024-09-29 16:45:29.637286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.381 [2024-09-29 16:45:29.637321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.381 qpair failed and we were unable to recover it.
00:37:29.381 [2024-09-29 16:45:29.637458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.381 [2024-09-29 16:45:29.637492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.381 qpair failed and we were unable to recover it.
00:37:29.381 [2024-09-29 16:45:29.637623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.381 [2024-09-29 16:45:29.637679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.381 qpair failed and we were unable to recover it.
00:37:29.381 [2024-09-29 16:45:29.637874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.381 [2024-09-29 16:45:29.637921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.381 qpair failed and we were unable to recover it.
00:37:29.381 [2024-09-29 16:45:29.638065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.381 [2024-09-29 16:45:29.638100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.381 qpair failed and we were unable to recover it.
00:37:29.381 [2024-09-29 16:45:29.638256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.381 [2024-09-29 16:45:29.638290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.381 qpair failed and we were unable to recover it.
00:37:29.381 [2024-09-29 16:45:29.638460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.381 [2024-09-29 16:45:29.638492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.381 qpair failed and we were unable to recover it.
00:37:29.381 [2024-09-29 16:45:29.638630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.381 [2024-09-29 16:45:29.638669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.381 qpair failed and we were unable to recover it.
00:37:29.381 [2024-09-29 16:45:29.638841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.381 [2024-09-29 16:45:29.638878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.381 qpair failed and we were unable to recover it.
00:37:29.381 [2024-09-29 16:45:29.639020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.381 [2024-09-29 16:45:29.639073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.381 qpair failed and we were unable to recover it.
00:37:29.381 [2024-09-29 16:45:29.639266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.381 [2024-09-29 16:45:29.639306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.381 qpair failed and we were unable to recover it.
00:37:29.381 [2024-09-29 16:45:29.639465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.381 [2024-09-29 16:45:29.639519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.381 qpair failed and we were unable to recover it.
00:37:29.381 [2024-09-29 16:45:29.639660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.381 [2024-09-29 16:45:29.639700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.381 qpair failed and we were unable to recover it.
00:37:29.381 [2024-09-29 16:45:29.639857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.381 [2024-09-29 16:45:29.639909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.381 qpair failed and we were unable to recover it.
00:37:29.381 [2024-09-29 16:45:29.640053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.381 [2024-09-29 16:45:29.640086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.381 qpair failed and we were unable to recover it.
00:37:29.381 [2024-09-29 16:45:29.640231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.381 [2024-09-29 16:45:29.640265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.381 qpair failed and we were unable to recover it.
00:37:29.381 [2024-09-29 16:45:29.640404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.381 [2024-09-29 16:45:29.640439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.381 qpair failed and we were unable to recover it.
00:37:29.381 [2024-09-29 16:45:29.640549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.381 [2024-09-29 16:45:29.640582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.381 qpair failed and we were unable to recover it.
00:37:29.381 [2024-09-29 16:45:29.640693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.381 [2024-09-29 16:45:29.640727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.381 qpair failed and we were unable to recover it. 00:37:29.381 [2024-09-29 16:45:29.640840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.381 [2024-09-29 16:45:29.640873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.381 qpair failed and we were unable to recover it. 00:37:29.381 [2024-09-29 16:45:29.640982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.381 [2024-09-29 16:45:29.641015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.381 qpair failed and we were unable to recover it. 00:37:29.381 [2024-09-29 16:45:29.641181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.381 [2024-09-29 16:45:29.641235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.381 qpair failed and we were unable to recover it. 00:37:29.381 [2024-09-29 16:45:29.641385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.381 [2024-09-29 16:45:29.641421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.381 qpair failed and we were unable to recover it. 
00:37:29.381 [2024-09-29 16:45:29.641593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.381 [2024-09-29 16:45:29.641638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.381 qpair failed and we were unable to recover it. 00:37:29.381 [2024-09-29 16:45:29.641785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.381 [2024-09-29 16:45:29.641819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.381 qpair failed and we were unable to recover it. 00:37:29.381 [2024-09-29 16:45:29.641961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.381 [2024-09-29 16:45:29.641995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.381 qpair failed and we were unable to recover it. 00:37:29.381 [2024-09-29 16:45:29.642136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.381 [2024-09-29 16:45:29.642171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.381 qpair failed and we were unable to recover it. 00:37:29.381 [2024-09-29 16:45:29.642285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.382 [2024-09-29 16:45:29.642319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.382 qpair failed and we were unable to recover it. 
00:37:29.382 [2024-09-29 16:45:29.642436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.382 [2024-09-29 16:45:29.642468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.382 qpair failed and we were unable to recover it. 00:37:29.382 [2024-09-29 16:45:29.642636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.382 [2024-09-29 16:45:29.642668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.382 qpair failed and we were unable to recover it. 00:37:29.382 [2024-09-29 16:45:29.642794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.382 [2024-09-29 16:45:29.642827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.382 qpair failed and we were unable to recover it. 00:37:29.382 [2024-09-29 16:45:29.642938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.382 [2024-09-29 16:45:29.642969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.382 qpair failed and we were unable to recover it. 00:37:29.382 [2024-09-29 16:45:29.643114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.382 [2024-09-29 16:45:29.643146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.382 qpair failed and we were unable to recover it. 
00:37:29.382 [2024-09-29 16:45:29.643375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.382 [2024-09-29 16:45:29.643412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.382 qpair failed and we were unable to recover it. 00:37:29.382 [2024-09-29 16:45:29.643586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.382 [2024-09-29 16:45:29.643619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.382 qpair failed and we were unable to recover it. 00:37:29.382 [2024-09-29 16:45:29.643755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.382 [2024-09-29 16:45:29.643803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.382 qpair failed and we were unable to recover it. 00:37:29.382 [2024-09-29 16:45:29.643975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.382 [2024-09-29 16:45:29.644015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.382 qpair failed and we were unable to recover it. 00:37:29.382 [2024-09-29 16:45:29.644166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.382 [2024-09-29 16:45:29.644204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.382 qpair failed and we were unable to recover it. 
00:37:29.382 [2024-09-29 16:45:29.644328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.382 [2024-09-29 16:45:29.644365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.382 qpair failed and we were unable to recover it. 00:37:29.382 [2024-09-29 16:45:29.644548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.382 [2024-09-29 16:45:29.644604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.382 qpair failed and we were unable to recover it. 00:37:29.382 [2024-09-29 16:45:29.644755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.382 [2024-09-29 16:45:29.644790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.382 qpair failed and we were unable to recover it. 00:37:29.382 [2024-09-29 16:45:29.644949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.382 [2024-09-29 16:45:29.645000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.382 qpair failed and we were unable to recover it. 00:37:29.382 [2024-09-29 16:45:29.645129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.382 [2024-09-29 16:45:29.645165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.382 qpair failed and we were unable to recover it. 
00:37:29.382 [2024-09-29 16:45:29.645292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.382 [2024-09-29 16:45:29.645328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.382 qpair failed and we were unable to recover it. 00:37:29.382 [2024-09-29 16:45:29.645476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.382 [2024-09-29 16:45:29.645513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.382 qpair failed and we were unable to recover it. 00:37:29.382 [2024-09-29 16:45:29.645712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.382 [2024-09-29 16:45:29.645760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.382 qpair failed and we were unable to recover it. 00:37:29.382 [2024-09-29 16:45:29.645884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.382 [2024-09-29 16:45:29.645920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.382 qpair failed and we were unable to recover it. 00:37:29.382 [2024-09-29 16:45:29.646124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.382 [2024-09-29 16:45:29.646182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.382 qpair failed and we were unable to recover it. 
00:37:29.382 [2024-09-29 16:45:29.646489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.382 [2024-09-29 16:45:29.646545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.382 qpair failed and we were unable to recover it. 00:37:29.382 [2024-09-29 16:45:29.646719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.382 [2024-09-29 16:45:29.646754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.382 qpair failed and we were unable to recover it. 00:37:29.382 [2024-09-29 16:45:29.646919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.382 [2024-09-29 16:45:29.646967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.382 qpair failed and we were unable to recover it. 00:37:29.382 [2024-09-29 16:45:29.647255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.382 [2024-09-29 16:45:29.647315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.382 qpair failed and we were unable to recover it. 00:37:29.382 [2024-09-29 16:45:29.647487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.382 [2024-09-29 16:45:29.647541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.382 qpair failed and we were unable to recover it. 
00:37:29.382 [2024-09-29 16:45:29.647688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.382 [2024-09-29 16:45:29.647723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.382 qpair failed and we were unable to recover it. 00:37:29.382 [2024-09-29 16:45:29.647893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.382 [2024-09-29 16:45:29.647944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.382 qpair failed and we were unable to recover it. 00:37:29.383 [2024-09-29 16:45:29.648109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.383 [2024-09-29 16:45:29.648162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.383 qpair failed and we were unable to recover it. 00:37:29.383 [2024-09-29 16:45:29.648362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.383 [2024-09-29 16:45:29.648418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.383 qpair failed and we were unable to recover it. 00:37:29.383 [2024-09-29 16:45:29.648525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.383 [2024-09-29 16:45:29.648558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.383 qpair failed and we were unable to recover it. 
00:37:29.383 [2024-09-29 16:45:29.648700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.383 [2024-09-29 16:45:29.648734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.383 qpair failed and we were unable to recover it. 00:37:29.383 [2024-09-29 16:45:29.648882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.383 [2024-09-29 16:45:29.648923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.383 qpair failed and we were unable to recover it. 00:37:29.383 [2024-09-29 16:45:29.649175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.383 [2024-09-29 16:45:29.649237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.383 qpair failed and we were unable to recover it. 00:37:29.383 [2024-09-29 16:45:29.649405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.383 [2024-09-29 16:45:29.649441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.383 qpair failed and we were unable to recover it. 00:37:29.383 [2024-09-29 16:45:29.649606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.383 [2024-09-29 16:45:29.649638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.383 qpair failed and we were unable to recover it. 
00:37:29.383 [2024-09-29 16:45:29.649787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.383 [2024-09-29 16:45:29.649823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.383 qpair failed and we were unable to recover it. 00:37:29.383 [2024-09-29 16:45:29.649983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.383 [2024-09-29 16:45:29.650018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.383 qpair failed and we were unable to recover it. 00:37:29.383 [2024-09-29 16:45:29.650145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.383 [2024-09-29 16:45:29.650181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.383 qpair failed and we were unable to recover it. 00:37:29.383 [2024-09-29 16:45:29.650374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.383 [2024-09-29 16:45:29.650411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.383 qpair failed and we were unable to recover it. 00:37:29.383 [2024-09-29 16:45:29.650543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.383 [2024-09-29 16:45:29.650576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.383 qpair failed and we were unable to recover it. 
00:37:29.383 [2024-09-29 16:45:29.650704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.383 [2024-09-29 16:45:29.650740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.383 qpair failed and we were unable to recover it. 00:37:29.383 [2024-09-29 16:45:29.650897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.383 [2024-09-29 16:45:29.650945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.383 qpair failed and we were unable to recover it. 00:37:29.383 [2024-09-29 16:45:29.651093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.383 [2024-09-29 16:45:29.651134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.383 qpair failed and we were unable to recover it. 00:37:29.383 [2024-09-29 16:45:29.651298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.383 [2024-09-29 16:45:29.651336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.383 qpair failed and we were unable to recover it. 00:37:29.383 [2024-09-29 16:45:29.651484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.383 [2024-09-29 16:45:29.651522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.383 qpair failed and we were unable to recover it. 
00:37:29.383 [2024-09-29 16:45:29.651657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.383 [2024-09-29 16:45:29.651701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.383 qpair failed and we were unable to recover it. 00:37:29.383 [2024-09-29 16:45:29.651831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.383 [2024-09-29 16:45:29.651864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.383 qpair failed and we were unable to recover it. 00:37:29.383 [2024-09-29 16:45:29.652005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.383 [2024-09-29 16:45:29.652041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.383 qpair failed and we were unable to recover it. 00:37:29.383 [2024-09-29 16:45:29.652228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.383 [2024-09-29 16:45:29.652264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.383 qpair failed and we were unable to recover it. 00:37:29.383 [2024-09-29 16:45:29.652417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.383 [2024-09-29 16:45:29.652454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.383 qpair failed and we were unable to recover it. 
00:37:29.383 [2024-09-29 16:45:29.652607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.383 [2024-09-29 16:45:29.652655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.383 qpair failed and we were unable to recover it. 00:37:29.383 [2024-09-29 16:45:29.652840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.383 [2024-09-29 16:45:29.652876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.383 qpair failed and we were unable to recover it. 00:37:29.383 [2024-09-29 16:45:29.653104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.383 [2024-09-29 16:45:29.653165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.383 qpair failed and we were unable to recover it. 00:37:29.383 [2024-09-29 16:45:29.653398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.383 [2024-09-29 16:45:29.653455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.383 qpair failed and we were unable to recover it. 00:37:29.383 [2024-09-29 16:45:29.653588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.383 [2024-09-29 16:45:29.653626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.383 qpair failed and we were unable to recover it. 
00:37:29.383 [2024-09-29 16:45:29.653819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.383 [2024-09-29 16:45:29.653853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.383 qpair failed and we were unable to recover it. 00:37:29.383 [2024-09-29 16:45:29.654018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.383 [2024-09-29 16:45:29.654055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.383 qpair failed and we were unable to recover it. 00:37:29.383 [2024-09-29 16:45:29.654185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.383 [2024-09-29 16:45:29.654221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.383 qpair failed and we were unable to recover it. 00:37:29.383 [2024-09-29 16:45:29.654361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.383 [2024-09-29 16:45:29.654413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.383 qpair failed and we were unable to recover it. 00:37:29.383 [2024-09-29 16:45:29.654557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.383 [2024-09-29 16:45:29.654600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.383 qpair failed and we were unable to recover it. 
00:37:29.383 [2024-09-29 16:45:29.654762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.384 [2024-09-29 16:45:29.654796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.384 qpair failed and we were unable to recover it. 00:37:29.384 [2024-09-29 16:45:29.654964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.384 [2024-09-29 16:45:29.655013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.384 qpair failed and we were unable to recover it. 00:37:29.384 [2024-09-29 16:45:29.655159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.384 [2024-09-29 16:45:29.655213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.384 qpair failed and we were unable to recover it. 00:37:29.384 [2024-09-29 16:45:29.655361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.384 [2024-09-29 16:45:29.655399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.384 qpair failed and we were unable to recover it. 00:37:29.384 [2024-09-29 16:45:29.655529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.384 [2024-09-29 16:45:29.655564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.384 qpair failed and we were unable to recover it. 
00:37:29.384 [2024-09-29 16:45:29.655722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.384 [2024-09-29 16:45:29.655770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.384 qpair failed and we were unable to recover it. 00:37:29.384 [2024-09-29 16:45:29.655896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.384 [2024-09-29 16:45:29.655931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.384 qpair failed and we were unable to recover it. 00:37:29.384 [2024-09-29 16:45:29.656078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.384 [2024-09-29 16:45:29.656112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.384 qpair failed and we were unable to recover it. 00:37:29.384 [2024-09-29 16:45:29.656249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.384 [2024-09-29 16:45:29.656281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.384 qpair failed and we were unable to recover it. 00:37:29.384 [2024-09-29 16:45:29.656394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.384 [2024-09-29 16:45:29.656425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.384 qpair failed and we were unable to recover it. 
00:37:29.384 [2024-09-29 16:45:29.656588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.384 [2024-09-29 16:45:29.656636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.384 qpair failed and we were unable to recover it. 00:37:29.384 [2024-09-29 16:45:29.656764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.384 [2024-09-29 16:45:29.656800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.384 qpair failed and we were unable to recover it. 00:37:29.384 [2024-09-29 16:45:29.656942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.384 [2024-09-29 16:45:29.656995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.384 qpair failed and we were unable to recover it. 00:37:29.384 [2024-09-29 16:45:29.657186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.384 [2024-09-29 16:45:29.657239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.384 qpair failed and we were unable to recover it. 00:37:29.384 [2024-09-29 16:45:29.657360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.384 [2024-09-29 16:45:29.657394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.384 qpair failed and we were unable to recover it. 
00:37:29.384 [2024-09-29 16:45:29.657532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.384 [2024-09-29 16:45:29.657566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.384 qpair failed and we were unable to recover it. 00:37:29.384 [2024-09-29 16:45:29.657708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.384 [2024-09-29 16:45:29.657742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.384 qpair failed and we were unable to recover it. 00:37:29.384 [2024-09-29 16:45:29.657882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.384 [2024-09-29 16:45:29.657916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.384 qpair failed and we were unable to recover it. 00:37:29.384 [2024-09-29 16:45:29.658027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.384 [2024-09-29 16:45:29.658060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.384 qpair failed and we were unable to recover it. 00:37:29.384 [2024-09-29 16:45:29.658197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.384 [2024-09-29 16:45:29.658230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.384 qpair failed and we were unable to recover it. 
00:37:29.388 [2024-09-29 16:45:29.681242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.388 [2024-09-29 16:45:29.681325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.388 qpair failed and we were unable to recover it. 00:37:29.388 [2024-09-29 16:45:29.681611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.388 [2024-09-29 16:45:29.681698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.388 qpair failed and we were unable to recover it. 00:37:29.388 [2024-09-29 16:45:29.681859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.388 [2024-09-29 16:45:29.681907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.388 qpair failed and we were unable to recover it. 00:37:29.388 [2024-09-29 16:45:29.682045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.388 [2024-09-29 16:45:29.682082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.388 qpair failed and we were unable to recover it. 00:37:29.388 [2024-09-29 16:45:29.682216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.388 [2024-09-29 16:45:29.682256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.388 qpair failed and we were unable to recover it. 
00:37:29.388 [2024-09-29 16:45:29.682436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.388 [2024-09-29 16:45:29.682474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.388 qpair failed and we were unable to recover it. 00:37:29.388 [2024-09-29 16:45:29.682629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.388 [2024-09-29 16:45:29.682666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.388 qpair failed and we were unable to recover it. 00:37:29.388 [2024-09-29 16:45:29.682861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.388 [2024-09-29 16:45:29.682908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.388 qpair failed and we were unable to recover it. 00:37:29.388 [2024-09-29 16:45:29.683087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.388 [2024-09-29 16:45:29.683143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.388 qpair failed and we were unable to recover it. 00:37:29.388 [2024-09-29 16:45:29.683320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.388 [2024-09-29 16:45:29.683374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.388 qpair failed and we were unable to recover it. 
00:37:29.388 [2024-09-29 16:45:29.683547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.388 [2024-09-29 16:45:29.683592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.388 qpair failed and we were unable to recover it. 00:37:29.388 [2024-09-29 16:45:29.683761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.388 [2024-09-29 16:45:29.683796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.388 qpair failed and we were unable to recover it. 00:37:29.388 [2024-09-29 16:45:29.683948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.388 [2024-09-29 16:45:29.684001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.388 qpair failed and we were unable to recover it. 00:37:29.388 [2024-09-29 16:45:29.684172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.388 [2024-09-29 16:45:29.684211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.388 qpair failed and we were unable to recover it. 00:37:29.388 [2024-09-29 16:45:29.684439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.388 [2024-09-29 16:45:29.684512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.388 qpair failed and we were unable to recover it. 
00:37:29.388 [2024-09-29 16:45:29.684688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.388 [2024-09-29 16:45:29.684724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.388 qpair failed and we were unable to recover it. 00:37:29.388 [2024-09-29 16:45:29.684845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.388 [2024-09-29 16:45:29.684881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.388 qpair failed and we were unable to recover it. 00:37:29.388 [2024-09-29 16:45:29.685002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.388 [2024-09-29 16:45:29.685035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.388 qpair failed and we were unable to recover it. 00:37:29.388 [2024-09-29 16:45:29.685199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.388 [2024-09-29 16:45:29.685275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.388 qpair failed and we were unable to recover it. 00:37:29.388 [2024-09-29 16:45:29.685436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.388 [2024-09-29 16:45:29.685474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.388 qpair failed and we were unable to recover it. 
00:37:29.388 [2024-09-29 16:45:29.685629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.388 [2024-09-29 16:45:29.685666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.388 qpair failed and we were unable to recover it. 00:37:29.388 [2024-09-29 16:45:29.685848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.388 [2024-09-29 16:45:29.685884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.388 qpair failed and we were unable to recover it. 00:37:29.388 [2024-09-29 16:45:29.686051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.389 [2024-09-29 16:45:29.686105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.389 qpair failed and we were unable to recover it. 00:37:29.389 [2024-09-29 16:45:29.686300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.389 [2024-09-29 16:45:29.686353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.389 qpair failed and we were unable to recover it. 00:37:29.389 [2024-09-29 16:45:29.686499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.389 [2024-09-29 16:45:29.686533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.389 qpair failed and we were unable to recover it. 
00:37:29.389 [2024-09-29 16:45:29.686684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.389 [2024-09-29 16:45:29.686718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.389 qpair failed and we were unable to recover it. 00:37:29.389 [2024-09-29 16:45:29.686860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.389 [2024-09-29 16:45:29.686894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.389 qpair failed and we were unable to recover it. 00:37:29.389 [2024-09-29 16:45:29.687066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.389 [2024-09-29 16:45:29.687100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.389 qpair failed and we were unable to recover it. 00:37:29.389 [2024-09-29 16:45:29.687218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.389 [2024-09-29 16:45:29.687252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.389 qpair failed and we were unable to recover it. 00:37:29.389 [2024-09-29 16:45:29.687548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.389 [2024-09-29 16:45:29.687615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.389 qpair failed and we were unable to recover it. 
00:37:29.389 [2024-09-29 16:45:29.687790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.389 [2024-09-29 16:45:29.687825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.389 qpair failed and we were unable to recover it. 00:37:29.389 [2024-09-29 16:45:29.687959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.389 [2024-09-29 16:45:29.688013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.389 qpair failed and we were unable to recover it. 00:37:29.389 [2024-09-29 16:45:29.688276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.389 [2024-09-29 16:45:29.688358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.389 qpair failed and we were unable to recover it. 00:37:29.389 [2024-09-29 16:45:29.688518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.389 [2024-09-29 16:45:29.688586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.389 qpair failed and we were unable to recover it. 00:37:29.389 [2024-09-29 16:45:29.688738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.389 [2024-09-29 16:45:29.688772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.389 qpair failed and we were unable to recover it. 
00:37:29.389 [2024-09-29 16:45:29.688941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.389 [2024-09-29 16:45:29.688974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.389 qpair failed and we were unable to recover it. 00:37:29.389 [2024-09-29 16:45:29.689134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.389 [2024-09-29 16:45:29.689169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.389 qpair failed and we were unable to recover it. 00:37:29.389 [2024-09-29 16:45:29.689399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.389 [2024-09-29 16:45:29.689464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.389 qpair failed and we were unable to recover it. 00:37:29.389 [2024-09-29 16:45:29.689629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.389 [2024-09-29 16:45:29.689661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.389 qpair failed and we were unable to recover it. 00:37:29.389 [2024-09-29 16:45:29.689804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.389 [2024-09-29 16:45:29.689836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.389 qpair failed and we were unable to recover it. 
00:37:29.389 [2024-09-29 16:45:29.689982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.389 [2024-09-29 16:45:29.690031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.389 qpair failed and we were unable to recover it. 00:37:29.389 [2024-09-29 16:45:29.690174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.389 [2024-09-29 16:45:29.690209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.389 qpair failed and we were unable to recover it. 00:37:29.389 [2024-09-29 16:45:29.690368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.389 [2024-09-29 16:45:29.690404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.389 qpair failed and we were unable to recover it. 00:37:29.389 [2024-09-29 16:45:29.690541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.389 [2024-09-29 16:45:29.690578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.389 qpair failed and we were unable to recover it. 00:37:29.389 [2024-09-29 16:45:29.690741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.389 [2024-09-29 16:45:29.690775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.389 qpair failed and we were unable to recover it. 
00:37:29.389 [2024-09-29 16:45:29.690891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.389 [2024-09-29 16:45:29.690923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.389 qpair failed and we were unable to recover it. 00:37:29.389 [2024-09-29 16:45:29.691085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.389 [2024-09-29 16:45:29.691121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.389 qpair failed and we were unable to recover it. 00:37:29.389 [2024-09-29 16:45:29.691335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.389 [2024-09-29 16:45:29.691372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.389 qpair failed and we were unable to recover it. 00:37:29.389 [2024-09-29 16:45:29.691491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.389 [2024-09-29 16:45:29.691526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.389 qpair failed and we were unable to recover it. 00:37:29.389 [2024-09-29 16:45:29.691705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.389 [2024-09-29 16:45:29.691738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.389 qpair failed and we were unable to recover it. 
00:37:29.389 [2024-09-29 16:45:29.691896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.389 [2024-09-29 16:45:29.691929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.389 qpair failed and we were unable to recover it. 00:37:29.389 [2024-09-29 16:45:29.692119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.389 [2024-09-29 16:45:29.692156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.389 qpair failed and we were unable to recover it. 00:37:29.389 [2024-09-29 16:45:29.692337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.389 [2024-09-29 16:45:29.692374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.389 qpair failed and we were unable to recover it. 00:37:29.389 [2024-09-29 16:45:29.692554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.389 [2024-09-29 16:45:29.692591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.389 qpair failed and we were unable to recover it. 00:37:29.390 [2024-09-29 16:45:29.692755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.390 [2024-09-29 16:45:29.692787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.390 qpair failed and we were unable to recover it. 
00:37:29.390 [2024-09-29 16:45:29.692906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.390 [2024-09-29 16:45:29.692938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.390 qpair failed and we were unable to recover it. 00:37:29.390 [2024-09-29 16:45:29.693094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.390 [2024-09-29 16:45:29.693126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.390 qpair failed and we were unable to recover it. 00:37:29.390 [2024-09-29 16:45:29.693269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.390 [2024-09-29 16:45:29.693301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.390 qpair failed and we were unable to recover it. 00:37:29.390 [2024-09-29 16:45:29.693472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.390 [2024-09-29 16:45:29.693508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.390 qpair failed and we were unable to recover it. 00:37:29.390 [2024-09-29 16:45:29.693661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.390 [2024-09-29 16:45:29.693703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.390 qpair failed and we were unable to recover it. 
00:37:29.390 [2024-09-29 16:45:29.693829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.390 [2024-09-29 16:45:29.693861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.390 qpair failed and we were unable to recover it. 00:37:29.390 [2024-09-29 16:45:29.693978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.390 [2024-09-29 16:45:29.694009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.390 qpair failed and we were unable to recover it. 00:37:29.390 [2024-09-29 16:45:29.694172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.390 [2024-09-29 16:45:29.694208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.390 qpair failed and we were unable to recover it. 00:37:29.390 [2024-09-29 16:45:29.694349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.390 [2024-09-29 16:45:29.694384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.390 qpair failed and we were unable to recover it. 00:37:29.390 [2024-09-29 16:45:29.694520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.390 [2024-09-29 16:45:29.694570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.390 qpair failed and we were unable to recover it. 
00:37:29.390 [2024-09-29 16:45:29.694730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.390 [2024-09-29 16:45:29.694763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.390 qpair failed and we were unable to recover it. 00:37:29.390 [2024-09-29 16:45:29.694908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.390 [2024-09-29 16:45:29.694941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.390 qpair failed and we were unable to recover it. 00:37:29.390 [2024-09-29 16:45:29.695120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.390 [2024-09-29 16:45:29.695157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.390 qpair failed and we were unable to recover it. 00:37:29.390 [2024-09-29 16:45:29.695366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.390 [2024-09-29 16:45:29.695401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.390 qpair failed and we were unable to recover it. 00:37:29.390 [2024-09-29 16:45:29.695554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.390 [2024-09-29 16:45:29.695594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.390 qpair failed and we were unable to recover it. 
00:37:29.390 [2024-09-29 16:45:29.695729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.390 [2024-09-29 16:45:29.695762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.390 qpair failed and we were unable to recover it. 00:37:29.390 [2024-09-29 16:45:29.695895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.390 [2024-09-29 16:45:29.695942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.390 qpair failed and we were unable to recover it. 00:37:29.390 [2024-09-29 16:45:29.696128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.390 [2024-09-29 16:45:29.696181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.390 qpair failed and we were unable to recover it. 00:37:29.390 [2024-09-29 16:45:29.696421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.390 [2024-09-29 16:45:29.696479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.390 qpair failed and we were unable to recover it. 00:37:29.390 [2024-09-29 16:45:29.696650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.390 [2024-09-29 16:45:29.696690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.390 qpair failed and we were unable to recover it. 
00:37:29.390 [2024-09-29 16:45:29.696839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.390 [2024-09-29 16:45:29.696873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.390 qpair failed and we were unable to recover it.
00:37:29.390 [2024-09-29 16:45:29.697036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.390 [2024-09-29 16:45:29.697083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.390 qpair failed and we were unable to recover it.
00:37:29.390 [2024-09-29 16:45:29.697212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.390 [2024-09-29 16:45:29.697249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.390 qpair failed and we were unable to recover it.
00:37:29.390 [2024-09-29 16:45:29.697486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.390 [2024-09-29 16:45:29.697543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.390 qpair failed and we were unable to recover it.
00:37:29.390 [2024-09-29 16:45:29.697724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.390 [2024-09-29 16:45:29.697759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.390 qpair failed and we were unable to recover it.
00:37:29.390 [2024-09-29 16:45:29.697901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.390 [2024-09-29 16:45:29.697935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.390 qpair failed and we were unable to recover it.
00:37:29.390 [2024-09-29 16:45:29.698251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.390 [2024-09-29 16:45:29.698305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.390 qpair failed and we were unable to recover it.
00:37:29.390 [2024-09-29 16:45:29.698456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.390 [2024-09-29 16:45:29.698510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.390 qpair failed and we were unable to recover it.
00:37:29.390 [2024-09-29 16:45:29.698669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.390 [2024-09-29 16:45:29.698730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.390 qpair failed and we were unable to recover it.
00:37:29.390 [2024-09-29 16:45:29.698848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.390 [2024-09-29 16:45:29.698882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.390 qpair failed and we were unable to recover it.
00:37:29.390 [2024-09-29 16:45:29.699030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.390 [2024-09-29 16:45:29.699083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.390 qpair failed and we were unable to recover it.
00:37:29.390 [2024-09-29 16:45:29.699245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.390 [2024-09-29 16:45:29.699283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.390 qpair failed and we were unable to recover it.
00:37:29.390 [2024-09-29 16:45:29.699424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.390 [2024-09-29 16:45:29.699477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.390 qpair failed and we were unable to recover it.
00:37:29.390 [2024-09-29 16:45:29.699632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.390 [2024-09-29 16:45:29.699669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.390 qpair failed and we were unable to recover it.
00:37:29.391 [2024-09-29 16:45:29.699859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.391 [2024-09-29 16:45:29.699891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.391 qpair failed and we were unable to recover it.
00:37:29.391 [2024-09-29 16:45:29.700027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.391 [2024-09-29 16:45:29.700060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.391 qpair failed and we were unable to recover it.
00:37:29.391 [2024-09-29 16:45:29.700229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.391 [2024-09-29 16:45:29.700266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.391 qpair failed and we were unable to recover it.
00:37:29.391 [2024-09-29 16:45:29.700439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.391 [2024-09-29 16:45:29.700492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.391 qpair failed and we were unable to recover it.
00:37:29.391 [2024-09-29 16:45:29.700727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.391 [2024-09-29 16:45:29.700764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.391 qpair failed and we were unable to recover it.
00:37:29.391 [2024-09-29 16:45:29.700888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.391 [2024-09-29 16:45:29.700922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.391 qpair failed and we were unable to recover it.
00:37:29.391 [2024-09-29 16:45:29.701118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.391 [2024-09-29 16:45:29.701156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.391 qpair failed and we were unable to recover it.
00:37:29.391 [2024-09-29 16:45:29.701484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.391 [2024-09-29 16:45:29.701556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.391 qpair failed and we were unable to recover it.
00:37:29.391 [2024-09-29 16:45:29.701724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.391 [2024-09-29 16:45:29.701772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.391 qpair failed and we were unable to recover it.
00:37:29.391 [2024-09-29 16:45:29.701897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.391 [2024-09-29 16:45:29.701933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.391 qpair failed and we were unable to recover it.
00:37:29.391 [2024-09-29 16:45:29.702081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.391 [2024-09-29 16:45:29.702117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.391 qpair failed and we were unable to recover it.
00:37:29.391 [2024-09-29 16:45:29.702282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.391 [2024-09-29 16:45:29.702320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.391 qpair failed and we were unable to recover it.
00:37:29.391 [2024-09-29 16:45:29.702509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.391 [2024-09-29 16:45:29.702548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.391 qpair failed and we were unable to recover it.
00:37:29.391 [2024-09-29 16:45:29.702733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.391 [2024-09-29 16:45:29.702769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.391 qpair failed and we were unable to recover it.
00:37:29.391 [2024-09-29 16:45:29.702919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.391 [2024-09-29 16:45:29.702956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.391 qpair failed and we were unable to recover it.
00:37:29.391 [2024-09-29 16:45:29.703215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.391 [2024-09-29 16:45:29.703274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.391 qpair failed and we were unable to recover it.
00:37:29.391 [2024-09-29 16:45:29.703559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.391 [2024-09-29 16:45:29.703615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.391 qpair failed and we were unable to recover it.
00:37:29.391 [2024-09-29 16:45:29.703814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.391 [2024-09-29 16:45:29.703849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.391 qpair failed and we were unable to recover it.
00:37:29.391 [2024-09-29 16:45:29.704000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.391 [2024-09-29 16:45:29.704035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.391 qpair failed and we were unable to recover it.
00:37:29.391 [2024-09-29 16:45:29.704176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.391 [2024-09-29 16:45:29.704210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.391 qpair failed and we were unable to recover it.
00:37:29.391 [2024-09-29 16:45:29.704329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.391 [2024-09-29 16:45:29.704363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.391 qpair failed and we were unable to recover it.
00:37:29.391 [2024-09-29 16:45:29.704541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.391 [2024-09-29 16:45:29.704594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.391 qpair failed and we were unable to recover it.
00:37:29.391 [2024-09-29 16:45:29.704754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.391 [2024-09-29 16:45:29.704801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.391 qpair failed and we were unable to recover it.
00:37:29.391 [2024-09-29 16:45:29.704928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.391 [2024-09-29 16:45:29.704982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.391 qpair failed and we were unable to recover it.
00:37:29.391 [2024-09-29 16:45:29.705205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.391 [2024-09-29 16:45:29.705268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.391 qpair failed and we were unable to recover it.
00:37:29.391 [2024-09-29 16:45:29.705519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.391 [2024-09-29 16:45:29.705577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.391 qpair failed and we were unable to recover it.
00:37:29.391 [2024-09-29 16:45:29.705700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.391 [2024-09-29 16:45:29.705751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.391 qpair failed and we were unable to recover it.
00:37:29.391 [2024-09-29 16:45:29.705858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.391 [2024-09-29 16:45:29.705891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.391 qpair failed and we were unable to recover it.
00:37:29.391 [2024-09-29 16:45:29.706028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.391 [2024-09-29 16:45:29.706061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.391 qpair failed and we were unable to recover it.
00:37:29.391 [2024-09-29 16:45:29.706259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.391 [2024-09-29 16:45:29.706317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.391 qpair failed and we were unable to recover it.
00:37:29.391 [2024-09-29 16:45:29.706552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.391 [2024-09-29 16:45:29.706613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.391 qpair failed and we were unable to recover it.
00:37:29.391 [2024-09-29 16:45:29.706756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.391 [2024-09-29 16:45:29.706790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.391 qpair failed and we were unable to recover it.
00:37:29.391 [2024-09-29 16:45:29.706935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.391 [2024-09-29 16:45:29.706969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.391 qpair failed and we were unable to recover it.
00:37:29.392 [2024-09-29 16:45:29.707080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.392 [2024-09-29 16:45:29.707116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.392 qpair failed and we were unable to recover it.
00:37:29.392 [2024-09-29 16:45:29.707416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.392 [2024-09-29 16:45:29.707485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.392 qpair failed and we were unable to recover it.
00:37:29.392 [2024-09-29 16:45:29.707634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.392 [2024-09-29 16:45:29.707698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.392 qpair failed and we were unable to recover it.
00:37:29.392 [2024-09-29 16:45:29.707838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.392 [2024-09-29 16:45:29.707875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.392 qpair failed and we were unable to recover it.
00:37:29.392 [2024-09-29 16:45:29.708042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.392 [2024-09-29 16:45:29.708092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.392 qpair failed and we were unable to recover it.
00:37:29.392 [2024-09-29 16:45:29.708219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.392 [2024-09-29 16:45:29.708272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.392 qpair failed and we were unable to recover it.
00:37:29.392 [2024-09-29 16:45:29.708463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.392 [2024-09-29 16:45:29.708522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.392 qpair failed and we were unable to recover it.
00:37:29.392 [2024-09-29 16:45:29.708667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.392 [2024-09-29 16:45:29.708709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.392 qpair failed and we were unable to recover it.
00:37:29.392 [2024-09-29 16:45:29.708844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.392 [2024-09-29 16:45:29.708891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.392 qpair failed and we were unable to recover it.
00:37:29.392 [2024-09-29 16:45:29.709083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.392 [2024-09-29 16:45:29.709136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.392 qpair failed and we were unable to recover it.
00:37:29.392 [2024-09-29 16:45:29.709296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.392 [2024-09-29 16:45:29.709376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.392 qpair failed and we were unable to recover it.
00:37:29.392 [2024-09-29 16:45:29.709599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.392 [2024-09-29 16:45:29.709638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.392 qpair failed and we were unable to recover it.
00:37:29.392 [2024-09-29 16:45:29.709808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.392 [2024-09-29 16:45:29.709843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.392 qpair failed and we were unable to recover it.
00:37:29.392 [2024-09-29 16:45:29.710033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.392 [2024-09-29 16:45:29.710098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.392 qpair failed and we were unable to recover it.
00:37:29.392 [2024-09-29 16:45:29.710218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.392 [2024-09-29 16:45:29.710259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.392 qpair failed and we were unable to recover it.
00:37:29.392 [2024-09-29 16:45:29.710476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.392 [2024-09-29 16:45:29.710512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.392 qpair failed and we were unable to recover it.
00:37:29.392 [2024-09-29 16:45:29.710699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.392 [2024-09-29 16:45:29.710748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.392 qpair failed and we were unable to recover it.
00:37:29.392 [2024-09-29 16:45:29.710942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.392 [2024-09-29 16:45:29.710995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.392 qpair failed and we were unable to recover it.
00:37:29.392 [2024-09-29 16:45:29.711131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.392 [2024-09-29 16:45:29.711172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.392 qpair failed and we were unable to recover it.
00:37:29.392 [2024-09-29 16:45:29.711313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.392 [2024-09-29 16:45:29.711347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.392 qpair failed and we were unable to recover it.
00:37:29.392 [2024-09-29 16:45:29.711513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.392 [2024-09-29 16:45:29.711563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.392 qpair failed and we were unable to recover it.
00:37:29.392 [2024-09-29 16:45:29.711677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.392 [2024-09-29 16:45:29.711713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.392 qpair failed and we were unable to recover it.
00:37:29.392 [2024-09-29 16:45:29.711850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.392 [2024-09-29 16:45:29.711898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.392 qpair failed and we were unable to recover it.
00:37:29.392 [2024-09-29 16:45:29.712049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.392 [2024-09-29 16:45:29.712101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.392 qpair failed and we were unable to recover it.
00:37:29.392 [2024-09-29 16:45:29.712238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.392 [2024-09-29 16:45:29.712276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.392 qpair failed and we were unable to recover it.
00:37:29.393 [2024-09-29 16:45:29.712449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.393 [2024-09-29 16:45:29.712486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.393 qpair failed and we were unable to recover it.
00:37:29.393 [2024-09-29 16:45:29.712668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.393 [2024-09-29 16:45:29.712730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.393 qpair failed and we were unable to recover it.
00:37:29.393 [2024-09-29 16:45:29.712899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.393 [2024-09-29 16:45:29.712946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.393 qpair failed and we were unable to recover it.
00:37:29.393 [2024-09-29 16:45:29.713143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.393 [2024-09-29 16:45:29.713181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.393 qpair failed and we were unable to recover it.
00:37:29.393 [2024-09-29 16:45:29.713434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.393 [2024-09-29 16:45:29.713493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.393 qpair failed and we were unable to recover it.
00:37:29.393 [2024-09-29 16:45:29.713659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.393 [2024-09-29 16:45:29.713704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.393 qpair failed and we were unable to recover it.
00:37:29.393 [2024-09-29 16:45:29.713821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.393 [2024-09-29 16:45:29.713855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.393 qpair failed and we were unable to recover it.
00:37:29.393 [2024-09-29 16:45:29.714029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.393 [2024-09-29 16:45:29.714063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.393 qpair failed and we were unable to recover it.
00:37:29.393 [2024-09-29 16:45:29.714287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.393 [2024-09-29 16:45:29.714325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.393 qpair failed and we were unable to recover it.
00:37:29.393 [2024-09-29 16:45:29.714515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.393 [2024-09-29 16:45:29.714552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.393 qpair failed and we were unable to recover it.
00:37:29.393 [2024-09-29 16:45:29.714722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.393 [2024-09-29 16:45:29.714769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.393 qpair failed and we were unable to recover it.
00:37:29.393 [2024-09-29 16:45:29.714902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.393 [2024-09-29 16:45:29.714949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.393 qpair failed and we were unable to recover it.
00:37:29.393 [2024-09-29 16:45:29.715120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.393 [2024-09-29 16:45:29.715173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.393 qpair failed and we were unable to recover it.
00:37:29.393 [2024-09-29 16:45:29.715369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.393 [2024-09-29 16:45:29.715409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.393 qpair failed and we were unable to recover it.
00:37:29.393 [2024-09-29 16:45:29.715532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.393 [2024-09-29 16:45:29.715570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.393 qpair failed and we were unable to recover it.
00:37:29.393 [2024-09-29 16:45:29.715709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.393 [2024-09-29 16:45:29.715743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.393 qpair failed and we were unable to recover it.
00:37:29.393 [2024-09-29 16:45:29.715861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.393 [2024-09-29 16:45:29.715901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.393 qpair failed and we were unable to recover it.
00:37:29.393 [2024-09-29 16:45:29.716093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.393 [2024-09-29 16:45:29.716131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.393 qpair failed and we were unable to recover it.
00:37:29.393 [2024-09-29 16:45:29.716309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.393 [2024-09-29 16:45:29.716347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.393 qpair failed and we were unable to recover it.
00:37:29.393 [2024-09-29 16:45:29.716472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.393 [2024-09-29 16:45:29.716511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.393 qpair failed and we were unable to recover it.
00:37:29.393 [2024-09-29 16:45:29.716645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.393 [2024-09-29 16:45:29.716689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.393 qpair failed and we were unable to recover it.
00:37:29.393 [2024-09-29 16:45:29.716837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.393 [2024-09-29 16:45:29.716875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.393 qpair failed and we were unable to recover it.
00:37:29.393 [2024-09-29 16:45:29.717110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.393 [2024-09-29 16:45:29.717169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.393 qpair failed and we were unable to recover it.
00:37:29.393 [2024-09-29 16:45:29.717296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.393 [2024-09-29 16:45:29.717349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.393 qpair failed and we were unable to recover it.
00:37:29.393 [2024-09-29 16:45:29.717468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.393 [2024-09-29 16:45:29.717502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.393 qpair failed and we were unable to recover it.
00:37:29.393 [2024-09-29 16:45:29.717648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.393 [2024-09-29 16:45:29.717690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.393 qpair failed and we were unable to recover it.
00:37:29.393 [2024-09-29 16:45:29.717850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.393 [2024-09-29 16:45:29.717901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.393 qpair failed and we were unable to recover it.
00:37:29.393 [2024-09-29 16:45:29.718041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.393 [2024-09-29 16:45:29.718075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.393 qpair failed and we were unable to recover it.
00:37:29.393 [2024-09-29 16:45:29.718223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.393 [2024-09-29 16:45:29.718258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.393 qpair failed and we were unable to recover it.
00:37:29.393 [2024-09-29 16:45:29.718368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.393 [2024-09-29 16:45:29.718406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.393 qpair failed and we were unable to recover it.
00:37:29.393 [2024-09-29 16:45:29.718551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.393 [2024-09-29 16:45:29.718585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.393 qpair failed and we were unable to recover it.
00:37:29.393 [2024-09-29 16:45:29.718725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.393 [2024-09-29 16:45:29.718759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.393 qpair failed and we were unable to recover it.
00:37:29.393 [2024-09-29 16:45:29.718948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.393 [2024-09-29 16:45:29.718995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.393 qpair failed and we were unable to recover it.
00:37:29.393 [2024-09-29 16:45:29.719175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.393 [2024-09-29 16:45:29.719243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.393 qpair failed and we were unable to recover it.
00:37:29.393 [2024-09-29 16:45:29.719468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.393 [2024-09-29 16:45:29.719503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.393 qpair failed and we were unable to recover it.
00:37:29.393 [2024-09-29 16:45:29.719715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.394 [2024-09-29 16:45:29.719749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.394 qpair failed and we were unable to recover it.
00:37:29.394 [2024-09-29 16:45:29.719907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.394 [2024-09-29 16:45:29.719959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.394 qpair failed and we were unable to recover it.
00:37:29.394 [2024-09-29 16:45:29.720115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.394 [2024-09-29 16:45:29.720167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.394 qpair failed and we were unable to recover it.
00:37:29.394 [2024-09-29 16:45:29.720308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.394 [2024-09-29 16:45:29.720342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.394 qpair failed and we were unable to recover it.
00:37:29.394 [2024-09-29 16:45:29.720485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.394 [2024-09-29 16:45:29.720519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.394 qpair failed and we were unable to recover it.
00:37:29.394 [2024-09-29 16:45:29.720727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.394 [2024-09-29 16:45:29.720780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.394 qpair failed and we were unable to recover it.
00:37:29.394 [2024-09-29 16:45:29.720943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.394 [2024-09-29 16:45:29.720982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.394 qpair failed and we were unable to recover it.
00:37:29.394 [2024-09-29 16:45:29.721124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.394 [2024-09-29 16:45:29.721158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.394 qpair failed and we were unable to recover it.
00:37:29.394 [2024-09-29 16:45:29.721336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.394 [2024-09-29 16:45:29.721372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.394 qpair failed and we were unable to recover it. 00:37:29.394 [2024-09-29 16:45:29.721528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.394 [2024-09-29 16:45:29.721565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.394 qpair failed and we were unable to recover it. 00:37:29.394 [2024-09-29 16:45:29.721754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.394 [2024-09-29 16:45:29.721802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.394 qpair failed and we were unable to recover it. 00:37:29.394 [2024-09-29 16:45:29.721945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.394 [2024-09-29 16:45:29.721986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.394 qpair failed and we were unable to recover it. 00:37:29.394 [2024-09-29 16:45:29.722115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.394 [2024-09-29 16:45:29.722154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.394 qpair failed and we were unable to recover it. 
00:37:29.394 [2024-09-29 16:45:29.722282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.394 [2024-09-29 16:45:29.722320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.394 qpair failed and we were unable to recover it. 00:37:29.394 [2024-09-29 16:45:29.722506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.394 [2024-09-29 16:45:29.722561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.394 qpair failed and we were unable to recover it. 00:37:29.394 [2024-09-29 16:45:29.722704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.394 [2024-09-29 16:45:29.722739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.394 qpair failed and we were unable to recover it. 00:37:29.394 [2024-09-29 16:45:29.722869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.394 [2024-09-29 16:45:29.722907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.394 qpair failed and we were unable to recover it. 00:37:29.394 [2024-09-29 16:45:29.723050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.394 [2024-09-29 16:45:29.723102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.394 qpair failed and we were unable to recover it. 
00:37:29.394 [2024-09-29 16:45:29.723265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.394 [2024-09-29 16:45:29.723317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.394 qpair failed and we were unable to recover it. 00:37:29.394 [2024-09-29 16:45:29.723442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.394 [2024-09-29 16:45:29.723477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.394 qpair failed and we were unable to recover it. 00:37:29.394 [2024-09-29 16:45:29.723619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.394 [2024-09-29 16:45:29.723655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.394 qpair failed and we were unable to recover it. 00:37:29.394 [2024-09-29 16:45:29.723805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.394 [2024-09-29 16:45:29.723852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.394 qpair failed and we were unable to recover it. 00:37:29.394 [2024-09-29 16:45:29.724030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.394 [2024-09-29 16:45:29.724069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.394 qpair failed and we were unable to recover it. 
00:37:29.394 [2024-09-29 16:45:29.724278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.394 [2024-09-29 16:45:29.724315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.394 qpair failed and we were unable to recover it. 00:37:29.394 [2024-09-29 16:45:29.724463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.394 [2024-09-29 16:45:29.724500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.394 qpair failed and we were unable to recover it. 00:37:29.394 [2024-09-29 16:45:29.724700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.394 [2024-09-29 16:45:29.724751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.394 qpair failed and we were unable to recover it. 00:37:29.394 [2024-09-29 16:45:29.724964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.394 [2024-09-29 16:45:29.725028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.394 qpair failed and we were unable to recover it. 00:37:29.394 [2024-09-29 16:45:29.725204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.394 [2024-09-29 16:45:29.725263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.394 qpair failed and we were unable to recover it. 
00:37:29.394 [2024-09-29 16:45:29.725496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.394 [2024-09-29 16:45:29.725552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.394 qpair failed and we were unable to recover it. 00:37:29.394 [2024-09-29 16:45:29.725679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.394 [2024-09-29 16:45:29.725715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.394 qpair failed and we were unable to recover it. 00:37:29.394 [2024-09-29 16:45:29.725837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.394 [2024-09-29 16:45:29.725872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.394 qpair failed and we were unable to recover it. 00:37:29.394 [2024-09-29 16:45:29.726071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.394 [2024-09-29 16:45:29.726133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.394 qpair failed and we were unable to recover it. 00:37:29.394 [2024-09-29 16:45:29.726370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.394 [2024-09-29 16:45:29.726426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.394 qpair failed and we were unable to recover it. 
00:37:29.394 [2024-09-29 16:45:29.726599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.395 [2024-09-29 16:45:29.726652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.395 qpair failed and we were unable to recover it. 00:37:29.395 [2024-09-29 16:45:29.726826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.395 [2024-09-29 16:45:29.726889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.395 qpair failed and we were unable to recover it. 00:37:29.395 [2024-09-29 16:45:29.727050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.395 [2024-09-29 16:45:29.727104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.395 qpair failed and we were unable to recover it. 00:37:29.395 [2024-09-29 16:45:29.727237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.395 [2024-09-29 16:45:29.727275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.395 qpair failed and we were unable to recover it. 00:37:29.395 [2024-09-29 16:45:29.727433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.395 [2024-09-29 16:45:29.727466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.395 qpair failed and we were unable to recover it. 
00:37:29.395 [2024-09-29 16:45:29.727632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.395 [2024-09-29 16:45:29.727688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.395 qpair failed and we were unable to recover it. 00:37:29.395 [2024-09-29 16:45:29.727860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.395 [2024-09-29 16:45:29.727913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.395 qpair failed and we were unable to recover it. 00:37:29.395 [2024-09-29 16:45:29.728069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.395 [2024-09-29 16:45:29.728109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.395 qpair failed and we were unable to recover it. 00:37:29.395 [2024-09-29 16:45:29.728233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.395 [2024-09-29 16:45:29.728269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.395 qpair failed and we were unable to recover it. 00:37:29.395 [2024-09-29 16:45:29.728388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.395 [2024-09-29 16:45:29.728425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.395 qpair failed and we were unable to recover it. 
00:37:29.395 [2024-09-29 16:45:29.728620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.395 [2024-09-29 16:45:29.728663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.395 qpair failed and we were unable to recover it. 00:37:29.395 [2024-09-29 16:45:29.728838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.395 [2024-09-29 16:45:29.728891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.395 qpair failed and we were unable to recover it. 00:37:29.395 [2024-09-29 16:45:29.729070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.395 [2024-09-29 16:45:29.729126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.395 qpair failed and we were unable to recover it. 00:37:29.395 [2024-09-29 16:45:29.729272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.395 [2024-09-29 16:45:29.729307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.395 qpair failed and we were unable to recover it. 00:37:29.395 [2024-09-29 16:45:29.729443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.395 [2024-09-29 16:45:29.729480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.395 qpair failed and we were unable to recover it. 
00:37:29.395 [2024-09-29 16:45:29.729654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.395 [2024-09-29 16:45:29.729694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.395 qpair failed and we were unable to recover it. 00:37:29.395 [2024-09-29 16:45:29.729846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.395 [2024-09-29 16:45:29.729881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.395 qpair failed and we were unable to recover it. 00:37:29.395 [2024-09-29 16:45:29.730052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.395 [2024-09-29 16:45:29.730122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.395 qpair failed and we were unable to recover it. 00:37:29.395 [2024-09-29 16:45:29.730301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.395 [2024-09-29 16:45:29.730338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.395 qpair failed and we were unable to recover it. 00:37:29.395 [2024-09-29 16:45:29.730490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.395 [2024-09-29 16:45:29.730526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.395 qpair failed and we were unable to recover it. 
00:37:29.395 [2024-09-29 16:45:29.730698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.395 [2024-09-29 16:45:29.730731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.395 qpair failed and we were unable to recover it. 00:37:29.395 [2024-09-29 16:45:29.730890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.395 [2024-09-29 16:45:29.730937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.395 qpair failed and we were unable to recover it. 00:37:29.395 [2024-09-29 16:45:29.731132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.395 [2024-09-29 16:45:29.731185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.395 qpair failed and we were unable to recover it. 00:37:29.395 [2024-09-29 16:45:29.731368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.395 [2024-09-29 16:45:29.731435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.395 qpair failed and we were unable to recover it. 00:37:29.395 [2024-09-29 16:45:29.731594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.395 [2024-09-29 16:45:29.731632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.395 qpair failed and we were unable to recover it. 
00:37:29.395 [2024-09-29 16:45:29.731824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.395 [2024-09-29 16:45:29.731859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.395 qpair failed and we were unable to recover it. 00:37:29.395 [2024-09-29 16:45:29.731997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.395 [2024-09-29 16:45:29.732035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.395 qpair failed and we were unable to recover it. 00:37:29.395 [2024-09-29 16:45:29.732258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.395 [2024-09-29 16:45:29.732314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.395 qpair failed and we were unable to recover it. 00:37:29.395 [2024-09-29 16:45:29.732539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.395 [2024-09-29 16:45:29.732576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.395 qpair failed and we were unable to recover it. 00:37:29.395 [2024-09-29 16:45:29.732730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.395 [2024-09-29 16:45:29.732781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.395 qpair failed and we were unable to recover it. 
00:37:29.395 [2024-09-29 16:45:29.732933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.395 [2024-09-29 16:45:29.732990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.395 qpair failed and we were unable to recover it. 00:37:29.395 [2024-09-29 16:45:29.733200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.396 [2024-09-29 16:45:29.733236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.396 qpair failed and we were unable to recover it. 00:37:29.396 [2024-09-29 16:45:29.733371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.396 [2024-09-29 16:45:29.733408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.396 qpair failed and we were unable to recover it. 00:37:29.396 [2024-09-29 16:45:29.733531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.396 [2024-09-29 16:45:29.733567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.396 qpair failed and we were unable to recover it. 00:37:29.396 [2024-09-29 16:45:29.733732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.396 [2024-09-29 16:45:29.733765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.396 qpair failed and we were unable to recover it. 
00:37:29.396 [2024-09-29 16:45:29.733902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.396 [2024-09-29 16:45:29.733951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.396 qpair failed and we were unable to recover it. 00:37:29.396 [2024-09-29 16:45:29.734120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.396 [2024-09-29 16:45:29.734159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.396 qpair failed and we were unable to recover it. 00:37:29.396 [2024-09-29 16:45:29.734320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.396 [2024-09-29 16:45:29.734359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.396 qpair failed and we were unable to recover it. 00:37:29.396 [2024-09-29 16:45:29.734517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.396 [2024-09-29 16:45:29.734556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.396 qpair failed and we were unable to recover it. 00:37:29.396 [2024-09-29 16:45:29.734754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.396 [2024-09-29 16:45:29.734789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.396 qpair failed and we were unable to recover it. 
00:37:29.396 [2024-09-29 16:45:29.734951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.396 [2024-09-29 16:45:29.734998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.396 qpair failed and we were unable to recover it. 00:37:29.396 [2024-09-29 16:45:29.735145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.396 [2024-09-29 16:45:29.735206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.396 qpair failed and we were unable to recover it. 00:37:29.396 [2024-09-29 16:45:29.735373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.396 [2024-09-29 16:45:29.735425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.396 qpair failed and we were unable to recover it. 00:37:29.396 [2024-09-29 16:45:29.735561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.396 [2024-09-29 16:45:29.735594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.396 qpair failed and we were unable to recover it. 00:37:29.396 [2024-09-29 16:45:29.735768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.396 [2024-09-29 16:45:29.735816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.396 qpair failed and we were unable to recover it. 
00:37:29.396 [2024-09-29 16:45:29.735967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.396 [2024-09-29 16:45:29.736003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.396 qpair failed and we were unable to recover it. 00:37:29.396 [2024-09-29 16:45:29.736178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.396 [2024-09-29 16:45:29.736214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.396 qpair failed and we were unable to recover it. 00:37:29.396 [2024-09-29 16:45:29.736337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.396 [2024-09-29 16:45:29.736372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.396 qpair failed and we were unable to recover it. 00:37:29.396 [2024-09-29 16:45:29.736525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.396 [2024-09-29 16:45:29.736573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.396 qpair failed and we were unable to recover it. 00:37:29.396 [2024-09-29 16:45:29.736727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.396 [2024-09-29 16:45:29.736762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.396 qpair failed and we were unable to recover it. 
00:37:29.396 [2024-09-29 16:45:29.736905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.396 [2024-09-29 16:45:29.736939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.396 qpair failed and we were unable to recover it. 00:37:29.396 [2024-09-29 16:45:29.737085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.396 [2024-09-29 16:45:29.737135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.396 qpair failed and we were unable to recover it. 00:37:29.396 [2024-09-29 16:45:29.737280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.396 [2024-09-29 16:45:29.737313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.396 qpair failed and we were unable to recover it. 00:37:29.396 [2024-09-29 16:45:29.737464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.396 [2024-09-29 16:45:29.737501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.396 qpair failed and we were unable to recover it. 00:37:29.396 [2024-09-29 16:45:29.737668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.396 [2024-09-29 16:45:29.737706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.396 qpair failed and we were unable to recover it. 
00:37:29.396 [2024-09-29 16:45:29.737835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.396 [2024-09-29 16:45:29.737867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.396 qpair failed and we were unable to recover it. 00:37:29.396 [2024-09-29 16:45:29.738003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.396 [2024-09-29 16:45:29.738041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.396 qpair failed and we were unable to recover it. 00:37:29.396 [2024-09-29 16:45:29.738198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.396 [2024-09-29 16:45:29.738235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.396 qpair failed and we were unable to recover it. 00:37:29.396 [2024-09-29 16:45:29.738392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.396 [2024-09-29 16:45:29.738429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.396 qpair failed and we were unable to recover it. 00:37:29.396 [2024-09-29 16:45:29.738548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.396 [2024-09-29 16:45:29.738584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.396 qpair failed and we were unable to recover it. 
00:37:29.396 [2024-09-29 16:45:29.738717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.396 [2024-09-29 16:45:29.738750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.396 qpair failed and we were unable to recover it. 00:37:29.396 [2024-09-29 16:45:29.738915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.396 [2024-09-29 16:45:29.738965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.396 qpair failed and we were unable to recover it. 00:37:29.396 [2024-09-29 16:45:29.739122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.397 [2024-09-29 16:45:29.739172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.397 qpair failed and we were unable to recover it. 00:37:29.397 [2024-09-29 16:45:29.739414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.397 [2024-09-29 16:45:29.739451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.397 qpair failed and we were unable to recover it. 00:37:29.397 [2024-09-29 16:45:29.739605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.397 [2024-09-29 16:45:29.739640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.397 qpair failed and we were unable to recover it. 
00:37:29.397 [2024-09-29 16:45:29.739849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.397 [2024-09-29 16:45:29.739896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.397 qpair failed and we were unable to recover it. 00:37:29.397 [2024-09-29 16:45:29.740086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.397 [2024-09-29 16:45:29.740165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.397 qpair failed and we were unable to recover it. 00:37:29.397 [2024-09-29 16:45:29.740385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.397 [2024-09-29 16:45:29.740440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.397 qpair failed and we were unable to recover it. 00:37:29.397 [2024-09-29 16:45:29.740591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.397 [2024-09-29 16:45:29.740627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.397 qpair failed and we were unable to recover it. 00:37:29.397 [2024-09-29 16:45:29.740789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.397 [2024-09-29 16:45:29.740827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.397 qpair failed and we were unable to recover it. 
00:37:29.397 [2024-09-29 16:45:29.741005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.397 [2024-09-29 16:45:29.741059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.397 qpair failed and we were unable to recover it. 00:37:29.397 [2024-09-29 16:45:29.741294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.397 [2024-09-29 16:45:29.741354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.397 qpair failed and we were unable to recover it. 00:37:29.397 [2024-09-29 16:45:29.741600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.397 [2024-09-29 16:45:29.741659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.397 qpair failed and we were unable to recover it. 00:37:29.397 [2024-09-29 16:45:29.741854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.397 [2024-09-29 16:45:29.741894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.397 qpair failed and we were unable to recover it. 00:37:29.397 [2024-09-29 16:45:29.742024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.397 [2024-09-29 16:45:29.742063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.397 qpair failed and we were unable to recover it. 
00:37:29.397 [2024-09-29 16:45:29.742229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.397 [2024-09-29 16:45:29.742266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.397 qpair failed and we were unable to recover it. 00:37:29.397 [2024-09-29 16:45:29.742437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.397 [2024-09-29 16:45:29.742474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.397 qpair failed and we were unable to recover it. 00:37:29.397 [2024-09-29 16:45:29.742650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.397 [2024-09-29 16:45:29.742731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.397 qpair failed and we were unable to recover it. 00:37:29.397 [2024-09-29 16:45:29.742873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.397 [2024-09-29 16:45:29.742909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.397 qpair failed and we were unable to recover it. 00:37:29.397 [2024-09-29 16:45:29.743208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.397 [2024-09-29 16:45:29.743269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.397 qpair failed and we were unable to recover it. 
00:37:29.397 [2024-09-29 16:45:29.743544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.397 [2024-09-29 16:45:29.743602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.397 qpair failed and we were unable to recover it. 00:37:29.397 [2024-09-29 16:45:29.743744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.397 [2024-09-29 16:45:29.743783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.397 qpair failed and we were unable to recover it. 00:37:29.397 [2024-09-29 16:45:29.743918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.397 [2024-09-29 16:45:29.743954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.397 qpair failed and we were unable to recover it. 00:37:29.397 [2024-09-29 16:45:29.744200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.397 [2024-09-29 16:45:29.744257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.397 qpair failed and we were unable to recover it. 00:37:29.397 [2024-09-29 16:45:29.744489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.397 [2024-09-29 16:45:29.744525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.397 qpair failed and we were unable to recover it. 
00:37:29.397 [2024-09-29 16:45:29.744687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.397 [2024-09-29 16:45:29.744719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.397 qpair failed and we were unable to recover it. 00:37:29.397 [2024-09-29 16:45:29.744877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.397 [2024-09-29 16:45:29.744912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.397 qpair failed and we were unable to recover it. 00:37:29.397 [2024-09-29 16:45:29.745086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.397 [2024-09-29 16:45:29.745137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.397 qpair failed and we were unable to recover it. 00:37:29.397 [2024-09-29 16:45:29.745405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.397 [2024-09-29 16:45:29.745443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.397 qpair failed and we were unable to recover it. 00:37:29.397 [2024-09-29 16:45:29.745610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.397 [2024-09-29 16:45:29.745646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.397 qpair failed and we were unable to recover it. 
00:37:29.397 [2024-09-29 16:45:29.745792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.397 [2024-09-29 16:45:29.745825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.397 qpair failed and we were unable to recover it. 00:37:29.397 [2024-09-29 16:45:29.745957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.397 [2024-09-29 16:45:29.746005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.397 qpair failed and we were unable to recover it. 00:37:29.397 [2024-09-29 16:45:29.746221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.398 [2024-09-29 16:45:29.746294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.398 qpair failed and we were unable to recover it. 00:37:29.398 [2024-09-29 16:45:29.746580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.398 [2024-09-29 16:45:29.746647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.398 qpair failed and we were unable to recover it. 00:37:29.398 [2024-09-29 16:45:29.746826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.398 [2024-09-29 16:45:29.746860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.398 qpair failed and we were unable to recover it. 
00:37:29.398 [2024-09-29 16:45:29.747037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.398 [2024-09-29 16:45:29.747090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.398 qpair failed and we were unable to recover it. 00:37:29.398 [2024-09-29 16:45:29.747260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.398 [2024-09-29 16:45:29.747314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.398 qpair failed and we were unable to recover it. 00:37:29.398 [2024-09-29 16:45:29.747560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.398 [2024-09-29 16:45:29.747598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.398 qpair failed and we were unable to recover it. 00:37:29.398 [2024-09-29 16:45:29.747784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.398 [2024-09-29 16:45:29.747822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.398 qpair failed and we were unable to recover it. 00:37:29.398 [2024-09-29 16:45:29.747976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.398 [2024-09-29 16:45:29.748014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.398 qpair failed and we were unable to recover it. 
00:37:29.398 [2024-09-29 16:45:29.748181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.398 [2024-09-29 16:45:29.748218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.398 qpair failed and we were unable to recover it. 00:37:29.398 [2024-09-29 16:45:29.748360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.398 [2024-09-29 16:45:29.748394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.398 qpair failed and we were unable to recover it. 00:37:29.398 [2024-09-29 16:45:29.748574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.398 [2024-09-29 16:45:29.748607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.398 qpair failed and we were unable to recover it. 00:37:29.398 [2024-09-29 16:45:29.748724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.398 [2024-09-29 16:45:29.748757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.398 qpair failed and we were unable to recover it. 00:37:29.398 [2024-09-29 16:45:29.748931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.398 [2024-09-29 16:45:29.748980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.398 qpair failed and we were unable to recover it. 
00:37:29.398 [2024-09-29 16:45:29.749179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.398 [2024-09-29 16:45:29.749217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.398 qpair failed and we were unable to recover it. 00:37:29.398 [2024-09-29 16:45:29.749369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.398 [2024-09-29 16:45:29.749406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.398 qpair failed and we were unable to recover it. 00:37:29.398 [2024-09-29 16:45:29.749580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.398 [2024-09-29 16:45:29.749633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.398 qpair failed and we were unable to recover it. 00:37:29.398 [2024-09-29 16:45:29.749836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.398 [2024-09-29 16:45:29.749884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.398 qpair failed and we were unable to recover it. 00:37:29.398 [2024-09-29 16:45:29.750123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.398 [2024-09-29 16:45:29.750175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.398 qpair failed and we were unable to recover it. 
00:37:29.398 [2024-09-29 16:45:29.750424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.398 [2024-09-29 16:45:29.750483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.398 qpair failed and we were unable to recover it. 00:37:29.398 [2024-09-29 16:45:29.750637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.398 [2024-09-29 16:45:29.750685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.398 qpair failed and we were unable to recover it. 00:37:29.398 [2024-09-29 16:45:29.750846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.398 [2024-09-29 16:45:29.750880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.398 qpair failed and we were unable to recover it. 00:37:29.398 [2024-09-29 16:45:29.751019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.398 [2024-09-29 16:45:29.751084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.398 qpair failed and we were unable to recover it. 00:37:29.398 [2024-09-29 16:45:29.751247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.398 [2024-09-29 16:45:29.751287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.398 qpair failed and we were unable to recover it. 
00:37:29.398 [2024-09-29 16:45:29.751495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.398 [2024-09-29 16:45:29.751533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.398 qpair failed and we were unable to recover it. 00:37:29.398 [2024-09-29 16:45:29.751690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.398 [2024-09-29 16:45:29.751742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.398 qpair failed and we were unable to recover it. 00:37:29.398 [2024-09-29 16:45:29.751886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.398 [2024-09-29 16:45:29.751920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.398 qpair failed and we were unable to recover it. 00:37:29.398 [2024-09-29 16:45:29.752110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.398 [2024-09-29 16:45:29.752148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.398 qpair failed and we were unable to recover it. 00:37:29.398 [2024-09-29 16:45:29.752288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.398 [2024-09-29 16:45:29.752338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.398 qpair failed and we were unable to recover it. 
00:37:29.399 [2024-09-29 16:45:29.752514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.399 [2024-09-29 16:45:29.752566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.399 qpair failed and we were unable to recover it. 00:37:29.399 [2024-09-29 16:45:29.752758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.399 [2024-09-29 16:45:29.752812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.399 qpair failed and we were unable to recover it. 00:37:29.399 [2024-09-29 16:45:29.752968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.399 [2024-09-29 16:45:29.753003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.399 qpair failed and we were unable to recover it. 00:37:29.399 [2024-09-29 16:45:29.753171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.399 [2024-09-29 16:45:29.753205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.399 qpair failed and we were unable to recover it. 00:37:29.399 [2024-09-29 16:45:29.753341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.399 [2024-09-29 16:45:29.753375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.399 qpair failed and we were unable to recover it. 
00:37:29.399 [2024-09-29 16:45:29.753503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.399 [2024-09-29 16:45:29.753540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.399 qpair failed and we were unable to recover it. 00:37:29.399 [2024-09-29 16:45:29.753692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.399 [2024-09-29 16:45:29.753747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.399 qpair failed and we were unable to recover it. 00:37:29.399 [2024-09-29 16:45:29.753870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.399 [2024-09-29 16:45:29.753903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.399 qpair failed and we were unable to recover it. 00:37:29.399 [2024-09-29 16:45:29.754066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.399 [2024-09-29 16:45:29.754103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.399 qpair failed and we were unable to recover it. 00:37:29.399 [2024-09-29 16:45:29.754256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.399 [2024-09-29 16:45:29.754294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.399 qpair failed and we were unable to recover it. 
00:37:29.399 [2024-09-29 16:45:29.754420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.399 [2024-09-29 16:45:29.754458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.399 qpair failed and we were unable to recover it. 00:37:29.399 [2024-09-29 16:45:29.754654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.399 [2024-09-29 16:45:29.754703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.399 qpair failed and we were unable to recover it. 00:37:29.399 [2024-09-29 16:45:29.754846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.399 [2024-09-29 16:45:29.754880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.399 qpair failed and we were unable to recover it. 00:37:29.399 [2024-09-29 16:45:29.755044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.399 [2024-09-29 16:45:29.755091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.399 qpair failed and we were unable to recover it. 00:37:29.399 [2024-09-29 16:45:29.755302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.399 [2024-09-29 16:45:29.755341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.399 qpair failed and we were unable to recover it. 
00:37:29.399 [2024-09-29 16:45:29.755497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.399 [2024-09-29 16:45:29.755569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.399 qpair failed and we were unable to recover it. 00:37:29.399 [2024-09-29 16:45:29.755702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.399 [2024-09-29 16:45:29.755735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.399 qpair failed and we were unable to recover it. 00:37:29.399 [2024-09-29 16:45:29.755856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.399 [2024-09-29 16:45:29.755888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.399 qpair failed and we were unable to recover it. 00:37:29.399 [2024-09-29 16:45:29.756109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.399 [2024-09-29 16:45:29.756146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.399 qpair failed and we were unable to recover it. 00:37:29.399 [2024-09-29 16:45:29.756388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.399 [2024-09-29 16:45:29.756448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.399 qpair failed and we were unable to recover it. 
00:37:29.399 [2024-09-29 16:45:29.756600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.399 [2024-09-29 16:45:29.756636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.399 qpair failed and we were unable to recover it. 00:37:29.399 [2024-09-29 16:45:29.756794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.399 [2024-09-29 16:45:29.756828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.399 qpair failed and we were unable to recover it. 00:37:29.399 [2024-09-29 16:45:29.756986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.399 [2024-09-29 16:45:29.757052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.399 qpair failed and we were unable to recover it. 00:37:29.399 [2024-09-29 16:45:29.757218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.399 [2024-09-29 16:45:29.757271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.399 qpair failed and we were unable to recover it. 00:37:29.399 [2024-09-29 16:45:29.757404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.399 [2024-09-29 16:45:29.757457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.399 qpair failed and we were unable to recover it. 
00:37:29.399 [2024-09-29 16:45:29.757626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.399 [2024-09-29 16:45:29.757660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.399 qpair failed and we were unable to recover it.
00:37:29.399 [2024-09-29 16:45:29.757807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.399 [2024-09-29 16:45:29.757841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.399 qpair failed and we were unable to recover it.
00:37:29.399 [2024-09-29 16:45:29.757955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.399 [2024-09-29 16:45:29.757990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.399 qpair failed and we were unable to recover it.
00:37:29.399 [2024-09-29 16:45:29.758165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.399 [2024-09-29 16:45:29.758217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.399 qpair failed and we were unable to recover it.
00:37:29.399 [2024-09-29 16:45:29.758380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.399 [2024-09-29 16:45:29.758434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.399 qpair failed and we were unable to recover it.
00:37:29.399 [2024-09-29 16:45:29.758616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.399 [2024-09-29 16:45:29.758664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.399 qpair failed and we were unable to recover it.
00:37:29.399 [2024-09-29 16:45:29.758807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.399 [2024-09-29 16:45:29.758854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.399 qpair failed and we were unable to recover it.
00:37:29.399 [2024-09-29 16:45:29.759001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.399 [2024-09-29 16:45:29.759054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.399 qpair failed and we were unable to recover it.
00:37:29.399 [2024-09-29 16:45:29.759311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.400 [2024-09-29 16:45:29.759371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.400 qpair failed and we were unable to recover it.
00:37:29.400 [2024-09-29 16:45:29.759569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.400 [2024-09-29 16:45:29.759603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.400 qpair failed and we were unable to recover it.
00:37:29.400 [2024-09-29 16:45:29.759829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.400 [2024-09-29 16:45:29.759864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.400 qpair failed and we were unable to recover it.
00:37:29.400 [2024-09-29 16:45:29.760097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.400 [2024-09-29 16:45:29.760149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.400 qpair failed and we were unable to recover it.
00:37:29.400 [2024-09-29 16:45:29.760375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.400 [2024-09-29 16:45:29.760434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.400 qpair failed and we were unable to recover it.
00:37:29.400 [2024-09-29 16:45:29.760624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.400 [2024-09-29 16:45:29.760662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.400 qpair failed and we were unable to recover it.
00:37:29.400 [2024-09-29 16:45:29.760844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.400 [2024-09-29 16:45:29.760882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.400 qpair failed and we were unable to recover it.
00:37:29.400 [2024-09-29 16:45:29.761037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.400 [2024-09-29 16:45:29.761090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.400 qpair failed and we were unable to recover it.
00:37:29.400 [2024-09-29 16:45:29.761335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.400 [2024-09-29 16:45:29.761401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.400 qpair failed and we were unable to recover it.
00:37:29.400 [2024-09-29 16:45:29.761589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.400 [2024-09-29 16:45:29.761637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.400 qpair failed and we were unable to recover it.
00:37:29.400 [2024-09-29 16:45:29.761796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.400 [2024-09-29 16:45:29.761831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.400 qpair failed and we were unable to recover it.
00:37:29.400 [2024-09-29 16:45:29.761965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.400 [2024-09-29 16:45:29.762011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.400 qpair failed and we were unable to recover it.
00:37:29.400 [2024-09-29 16:45:29.762228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.400 [2024-09-29 16:45:29.762288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.400 qpair failed and we were unable to recover it.
00:37:29.400 [2024-09-29 16:45:29.762423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.400 [2024-09-29 16:45:29.762462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.400 qpair failed and we were unable to recover it.
00:37:29.400 [2024-09-29 16:45:29.762608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.400 [2024-09-29 16:45:29.762641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.400 qpair failed and we were unable to recover it.
00:37:29.400 [2024-09-29 16:45:29.762787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.400 [2024-09-29 16:45:29.762834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.400 qpair failed and we were unable to recover it.
00:37:29.400 [2024-09-29 16:45:29.762959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.400 [2024-09-29 16:45:29.762997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.400 qpair failed and we were unable to recover it.
00:37:29.400 [2024-09-29 16:45:29.763192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.400 [2024-09-29 16:45:29.763245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.400 qpair failed and we were unable to recover it.
00:37:29.400 [2024-09-29 16:45:29.763422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.400 [2024-09-29 16:45:29.763492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.400 qpair failed and we were unable to recover it.
00:37:29.400 [2024-09-29 16:45:29.763638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.400 [2024-09-29 16:45:29.763681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.400 qpair failed and we were unable to recover it.
00:37:29.400 [2024-09-29 16:45:29.763867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.400 [2024-09-29 16:45:29.763901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.400 qpair failed and we were unable to recover it.
00:37:29.400 [2024-09-29 16:45:29.764087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.400 [2024-09-29 16:45:29.764121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.400 qpair failed and we were unable to recover it.
00:37:29.400 [2024-09-29 16:45:29.764244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.400 [2024-09-29 16:45:29.764279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.400 qpair failed and we were unable to recover it.
00:37:29.400 [2024-09-29 16:45:29.764428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.400 [2024-09-29 16:45:29.764461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.400 qpair failed and we were unable to recover it.
00:37:29.400 [2024-09-29 16:45:29.764606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.400 [2024-09-29 16:45:29.764640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.400 qpair failed and we were unable to recover it.
00:37:29.400 [2024-09-29 16:45:29.764793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.400 [2024-09-29 16:45:29.764831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.400 qpair failed and we were unable to recover it.
00:37:29.400 [2024-09-29 16:45:29.764953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.400 [2024-09-29 16:45:29.764987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.400 qpair failed and we were unable to recover it.
00:37:29.400 [2024-09-29 16:45:29.765155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.400 [2024-09-29 16:45:29.765188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.400 qpair failed and we were unable to recover it.
00:37:29.400 [2024-09-29 16:45:29.765355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.400 [2024-09-29 16:45:29.765389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.400 qpair failed and we were unable to recover it.
00:37:29.400 [2024-09-29 16:45:29.765537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.400 [2024-09-29 16:45:29.765573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.400 qpair failed and we were unable to recover it.
00:37:29.401 [2024-09-29 16:45:29.765704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.401 [2024-09-29 16:45:29.765752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.401 qpair failed and we were unable to recover it.
00:37:29.401 [2024-09-29 16:45:29.765930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.401 [2024-09-29 16:45:29.765985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.401 qpair failed and we were unable to recover it.
00:37:29.401 [2024-09-29 16:45:29.766150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.401 [2024-09-29 16:45:29.766201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.401 qpair failed and we were unable to recover it.
00:37:29.401 [2024-09-29 16:45:29.766410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.401 [2024-09-29 16:45:29.766467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.401 qpair failed and we were unable to recover it.
00:37:29.401 [2024-09-29 16:45:29.766614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.401 [2024-09-29 16:45:29.766648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.401 qpair failed and we were unable to recover it.
00:37:29.401 [2024-09-29 16:45:29.766809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.401 [2024-09-29 16:45:29.766861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.401 qpair failed and we were unable to recover it.
00:37:29.401 [2024-09-29 16:45:29.767061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.401 [2024-09-29 16:45:29.767101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.401 qpair failed and we were unable to recover it.
00:37:29.401 [2024-09-29 16:45:29.767239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.401 [2024-09-29 16:45:29.767279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.401 qpair failed and we were unable to recover it.
00:37:29.401 [2024-09-29 16:45:29.767510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.401 [2024-09-29 16:45:29.767570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.401 qpair failed and we were unable to recover it.
00:37:29.401 [2024-09-29 16:45:29.767709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.401 [2024-09-29 16:45:29.767744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.401 qpair failed and we were unable to recover it.
00:37:29.401 [2024-09-29 16:45:29.767942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.401 [2024-09-29 16:45:29.767994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.401 qpair failed and we were unable to recover it.
00:37:29.401 [2024-09-29 16:45:29.768220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.401 [2024-09-29 16:45:29.768280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.401 qpair failed and we were unable to recover it.
00:37:29.401 [2024-09-29 16:45:29.768511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.401 [2024-09-29 16:45:29.768567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.401 qpair failed and we were unable to recover it.
00:37:29.401 [2024-09-29 16:45:29.768759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.401 [2024-09-29 16:45:29.768793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.401 qpair failed and we were unable to recover it.
00:37:29.401 [2024-09-29 16:45:29.768933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.401 [2024-09-29 16:45:29.768985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.401 qpair failed and we were unable to recover it.
00:37:29.401 [2024-09-29 16:45:29.769215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.401 [2024-09-29 16:45:29.769275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.401 qpair failed and we were unable to recover it.
00:37:29.401 [2024-09-29 16:45:29.769556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.401 [2024-09-29 16:45:29.769624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.401 qpair failed and we were unable to recover it.
00:37:29.401 [2024-09-29 16:45:29.769775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.401 [2024-09-29 16:45:29.769808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.401 qpair failed and we were unable to recover it.
00:37:29.401 [2024-09-29 16:45:29.769973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.401 [2024-09-29 16:45:29.770032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.401 qpair failed and we were unable to recover it.
00:37:29.401 [2024-09-29 16:45:29.770196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.401 [2024-09-29 16:45:29.770235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.401 qpair failed and we were unable to recover it.
00:37:29.401 [2024-09-29 16:45:29.770404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.401 [2024-09-29 16:45:29.770468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.401 qpair failed and we were unable to recover it.
00:37:29.401 [2024-09-29 16:45:29.770653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.401 [2024-09-29 16:45:29.770698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.401 qpair failed and we were unable to recover it.
00:37:29.401 [2024-09-29 16:45:29.770851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.401 [2024-09-29 16:45:29.770899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.401 qpair failed and we were unable to recover it.
00:37:29.401 [2024-09-29 16:45:29.771068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.401 [2024-09-29 16:45:29.771126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.401 qpair failed and we were unable to recover it.
00:37:29.401 [2024-09-29 16:45:29.771284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.401 [2024-09-29 16:45:29.771336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.401 qpair failed and we were unable to recover it.
00:37:29.401 [2024-09-29 16:45:29.771476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.401 [2024-09-29 16:45:29.771509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.401 qpair failed and we were unable to recover it.
00:37:29.401 [2024-09-29 16:45:29.771645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.401 [2024-09-29 16:45:29.771706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.401 qpair failed and we were unable to recover it.
00:37:29.401 [2024-09-29 16:45:29.771897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.401 [2024-09-29 16:45:29.771949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.401 qpair failed and we were unable to recover it.
00:37:29.401 [2024-09-29 16:45:29.772143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.401 [2024-09-29 16:45:29.772182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.401 qpair failed and we were unable to recover it.
00:37:29.401 [2024-09-29 16:45:29.772416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.402 [2024-09-29 16:45:29.772473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.402 qpair failed and we were unable to recover it.
00:37:29.402 [2024-09-29 16:45:29.772629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.402 [2024-09-29 16:45:29.772664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.402 qpair failed and we were unable to recover it.
00:37:29.402 [2024-09-29 16:45:29.772816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.402 [2024-09-29 16:45:29.772849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.402 qpair failed and we were unable to recover it.
00:37:29.402 [2024-09-29 16:45:29.773045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.402 [2024-09-29 16:45:29.773092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.402 qpair failed and we were unable to recover it.
00:37:29.402 [2024-09-29 16:45:29.773314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.402 [2024-09-29 16:45:29.773374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.402 qpair failed and we were unable to recover it.
00:37:29.402 [2024-09-29 16:45:29.773516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.402 [2024-09-29 16:45:29.773550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.402 qpair failed and we were unable to recover it.
00:37:29.402 [2024-09-29 16:45:29.773662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.402 [2024-09-29 16:45:29.773702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.402 qpair failed and we were unable to recover it.
00:37:29.402 [2024-09-29 16:45:29.773876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.402 [2024-09-29 16:45:29.773910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.402 qpair failed and we were unable to recover it.
00:37:29.402 [2024-09-29 16:45:29.774068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.402 [2024-09-29 16:45:29.774105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.402 qpair failed and we were unable to recover it.
00:37:29.402 [2024-09-29 16:45:29.774340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.402 [2024-09-29 16:45:29.774400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.402 qpair failed and we were unable to recover it.
00:37:29.402 [2024-09-29 16:45:29.774567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.402 [2024-09-29 16:45:29.774604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.402 qpair failed and we were unable to recover it.
00:37:29.402 [2024-09-29 16:45:29.774798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.402 [2024-09-29 16:45:29.774833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.402 qpair failed and we were unable to recover it.
00:37:29.402 [2024-09-29 16:45:29.775006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.402 [2024-09-29 16:45:29.775039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.402 qpair failed and we were unable to recover it.
00:37:29.402 [2024-09-29 16:45:29.775221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.402 [2024-09-29 16:45:29.775285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.402 qpair failed and we were unable to recover it.
00:37:29.402 [2024-09-29 16:45:29.775453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.402 [2024-09-29 16:45:29.775509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.402 qpair failed and we were unable to recover it.
00:37:29.402 [2024-09-29 16:45:29.775722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.402 [2024-09-29 16:45:29.775758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.402 qpair failed and we were unable to recover it.
00:37:29.402 [2024-09-29 16:45:29.775932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.402 [2024-09-29 16:45:29.775970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.402 qpair failed and we were unable to recover it.
00:37:29.402 [2024-09-29 16:45:29.776128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.402 [2024-09-29 16:45:29.776165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.402 qpair failed and we were unable to recover it.
00:37:29.402 [2024-09-29 16:45:29.776306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.402 [2024-09-29 16:45:29.776345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.402 qpair failed and we were unable to recover it.
00:37:29.402 [2024-09-29 16:45:29.776563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.402 [2024-09-29 16:45:29.776596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.402 qpair failed and we were unable to recover it.
00:37:29.402 [2024-09-29 16:45:29.776718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.402 [2024-09-29 16:45:29.776752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.402 qpair failed and we were unable to recover it.
00:37:29.402 [2024-09-29 16:45:29.776892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.402 [2024-09-29 16:45:29.776926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.402 qpair failed and we were unable to recover it.
00:37:29.402 [2024-09-29 16:45:29.777065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.402 [2024-09-29 16:45:29.777102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.402 qpair failed and we were unable to recover it.
00:37:29.402 [2024-09-29 16:45:29.777236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.402 [2024-09-29 16:45:29.777287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.402 qpair failed and we were unable to recover it.
00:37:29.402 [2024-09-29 16:45:29.777435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.402 [2024-09-29 16:45:29.777472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.402 qpair failed and we were unable to recover it.
00:37:29.402 [2024-09-29 16:45:29.777601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.402 [2024-09-29 16:45:29.777634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.402 qpair failed and we were unable to recover it.
00:37:29.402 [2024-09-29 16:45:29.777833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.402 [2024-09-29 16:45:29.777882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.402 qpair failed and we were unable to recover it.
00:37:29.402 [2024-09-29 16:45:29.778071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.402 [2024-09-29 16:45:29.778124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.402 qpair failed and we were unable to recover it.
00:37:29.402 [2024-09-29 16:45:29.778360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.402 [2024-09-29 16:45:29.778399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.402 qpair failed and we were unable to recover it.
00:37:29.402 [2024-09-29 16:45:29.778550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.402 [2024-09-29 16:45:29.778594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.402 qpair failed and we were unable to recover it.
00:37:29.402 [2024-09-29 16:45:29.778773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.402 [2024-09-29 16:45:29.778807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.402 qpair failed and we were unable to recover it.
00:37:29.402 [2024-09-29 16:45:29.778947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.402 [2024-09-29 16:45:29.778980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.402 qpair failed and we were unable to recover it.
00:37:29.402 [2024-09-29 16:45:29.779217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.403 [2024-09-29 16:45:29.779273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.403 qpair failed and we were unable to recover it.
00:37:29.403 [2024-09-29 16:45:29.779440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.403 [2024-09-29 16:45:29.779505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.403 qpair failed and we were unable to recover it.
00:37:29.403 [2024-09-29 16:45:29.779641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.403 [2024-09-29 16:45:29.779704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.403 qpair failed and we were unable to recover it.
00:37:29.403 [2024-09-29 16:45:29.779842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.403 [2024-09-29 16:45:29.779875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.403 qpair failed and we were unable to recover it.
00:37:29.403 [2024-09-29 16:45:29.780057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.403 [2024-09-29 16:45:29.780110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.403 qpair failed and we were unable to recover it.
00:37:29.403 [2024-09-29 16:45:29.780291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.403 [2024-09-29 16:45:29.780331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.403 qpair failed and we were unable to recover it.
00:37:29.403 [2024-09-29 16:45:29.780469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.403 [2024-09-29 16:45:29.780508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.403 qpair failed and we were unable to recover it.
00:37:29.403 [2024-09-29 16:45:29.780692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.403 [2024-09-29 16:45:29.780748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.403 qpair failed and we were unable to recover it.
00:37:29.403 [2024-09-29 16:45:29.780914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.403 [2024-09-29 16:45:29.780947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.403 qpair failed and we were unable to recover it.
00:37:29.403 [2024-09-29 16:45:29.781077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.403 [2024-09-29 16:45:29.781113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.403 qpair failed and we were unable to recover it.
00:37:29.403 [2024-09-29 16:45:29.781283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.403 [2024-09-29 16:45:29.781317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.403 qpair failed and we were unable to recover it.
00:37:29.403 [2024-09-29 16:45:29.781499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.403 [2024-09-29 16:45:29.781536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.403 qpair failed and we were unable to recover it.
00:37:29.403 [2024-09-29 16:45:29.781696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.403 [2024-09-29 16:45:29.781749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.403 qpair failed and we were unable to recover it.
00:37:29.403 [2024-09-29 16:45:29.781860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.403 [2024-09-29 16:45:29.781894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.403 qpair failed and we were unable to recover it.
00:37:29.403 [2024-09-29 16:45:29.782024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.403 [2024-09-29 16:45:29.782071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.403 qpair failed and we were unable to recover it.
00:37:29.403 [2024-09-29 16:45:29.782220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.403 [2024-09-29 16:45:29.782279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.403 qpair failed and we were unable to recover it.
00:37:29.403 [2024-09-29 16:45:29.782490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.403 [2024-09-29 16:45:29.782546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.403 qpair failed and we were unable to recover it. 00:37:29.403 [2024-09-29 16:45:29.782688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.403 [2024-09-29 16:45:29.782723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.403 qpair failed and we were unable to recover it. 00:37:29.403 [2024-09-29 16:45:29.782883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.403 [2024-09-29 16:45:29.782936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.403 qpair failed and we were unable to recover it. 00:37:29.403 [2024-09-29 16:45:29.783222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.403 [2024-09-29 16:45:29.783297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.403 qpair failed and we were unable to recover it. 00:37:29.403 [2024-09-29 16:45:29.783569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.403 [2024-09-29 16:45:29.783603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.403 qpair failed and we were unable to recover it. 
00:37:29.403 [2024-09-29 16:45:29.783750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.403 [2024-09-29 16:45:29.783814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.403 qpair failed and we were unable to recover it. 00:37:29.403 [2024-09-29 16:45:29.783950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.403 [2024-09-29 16:45:29.783984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.403 qpair failed and we were unable to recover it. 00:37:29.403 [2024-09-29 16:45:29.784143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.403 [2024-09-29 16:45:29.784180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.403 qpair failed and we were unable to recover it. 00:37:29.403 [2024-09-29 16:45:29.784432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.403 [2024-09-29 16:45:29.784484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.403 qpair failed and we were unable to recover it. 00:37:29.403 [2024-09-29 16:45:29.784649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.403 [2024-09-29 16:45:29.784688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.403 qpair failed and we were unable to recover it. 
00:37:29.403 [2024-09-29 16:45:29.784834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.403 [2024-09-29 16:45:29.784867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.403 qpair failed and we were unable to recover it. 00:37:29.403 [2024-09-29 16:45:29.785032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.403 [2024-09-29 16:45:29.785075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.403 qpair failed and we were unable to recover it. 00:37:29.403 [2024-09-29 16:45:29.785278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.403 [2024-09-29 16:45:29.785315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.403 qpair failed and we were unable to recover it. 00:37:29.403 [2024-09-29 16:45:29.785585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.403 [2024-09-29 16:45:29.785658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.403 qpair failed and we were unable to recover it. 00:37:29.403 [2024-09-29 16:45:29.785841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.404 [2024-09-29 16:45:29.785876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.404 qpair failed and we were unable to recover it. 
00:37:29.404 [2024-09-29 16:45:29.786194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.404 [2024-09-29 16:45:29.786258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.404 qpair failed and we were unable to recover it. 00:37:29.404 [2024-09-29 16:45:29.786487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.404 [2024-09-29 16:45:29.786546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.404 qpair failed and we were unable to recover it. 00:37:29.404 [2024-09-29 16:45:29.786690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.404 [2024-09-29 16:45:29.786747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.404 qpair failed and we were unable to recover it. 00:37:29.404 [2024-09-29 16:45:29.786885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.404 [2024-09-29 16:45:29.786933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.404 qpair failed and we were unable to recover it. 00:37:29.404 [2024-09-29 16:45:29.787120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.404 [2024-09-29 16:45:29.787202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.404 qpair failed and we were unable to recover it. 
00:37:29.404 [2024-09-29 16:45:29.787378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.404 [2024-09-29 16:45:29.787443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.404 qpair failed and we were unable to recover it. 00:37:29.404 [2024-09-29 16:45:29.787590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.404 [2024-09-29 16:45:29.787634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.404 qpair failed and we were unable to recover it. 00:37:29.404 [2024-09-29 16:45:29.787779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.404 [2024-09-29 16:45:29.787812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.404 qpair failed and we were unable to recover it. 00:37:29.404 [2024-09-29 16:45:29.787982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.404 [2024-09-29 16:45:29.788034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.404 qpair failed and we were unable to recover it. 00:37:29.404 [2024-09-29 16:45:29.788235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.404 [2024-09-29 16:45:29.788301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.404 qpair failed and we were unable to recover it. 
00:37:29.404 [2024-09-29 16:45:29.788587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.404 [2024-09-29 16:45:29.788644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.404 qpair failed and we were unable to recover it. 00:37:29.404 [2024-09-29 16:45:29.788794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.404 [2024-09-29 16:45:29.788828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.404 qpair failed and we were unable to recover it. 00:37:29.404 [2024-09-29 16:45:29.789017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.404 [2024-09-29 16:45:29.789069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.404 qpair failed and we were unable to recover it. 00:37:29.404 [2024-09-29 16:45:29.789290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.404 [2024-09-29 16:45:29.789350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.404 qpair failed and we were unable to recover it. 00:37:29.404 [2024-09-29 16:45:29.789573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.404 [2024-09-29 16:45:29.789611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.404 qpair failed and we were unable to recover it. 
00:37:29.404 [2024-09-29 16:45:29.789788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.404 [2024-09-29 16:45:29.789822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.404 qpair failed and we were unable to recover it. 00:37:29.404 [2024-09-29 16:45:29.789983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.404 [2024-09-29 16:45:29.790031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.404 qpair failed and we were unable to recover it. 00:37:29.404 [2024-09-29 16:45:29.790205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.404 [2024-09-29 16:45:29.790263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.404 qpair failed and we were unable to recover it. 00:37:29.404 [2024-09-29 16:45:29.790543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.404 [2024-09-29 16:45:29.790607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.404 qpair failed and we were unable to recover it. 00:37:29.404 [2024-09-29 16:45:29.790728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.404 [2024-09-29 16:45:29.790764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.404 qpair failed and we were unable to recover it. 
00:37:29.404 [2024-09-29 16:45:29.790906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.404 [2024-09-29 16:45:29.790960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.404 qpair failed and we were unable to recover it. 00:37:29.404 [2024-09-29 16:45:29.791159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.404 [2024-09-29 16:45:29.791212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.404 qpair failed and we were unable to recover it. 00:37:29.404 [2024-09-29 16:45:29.791373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.404 [2024-09-29 16:45:29.791428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.404 qpair failed and we were unable to recover it. 00:37:29.404 [2024-09-29 16:45:29.791565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.404 [2024-09-29 16:45:29.791600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.404 qpair failed and we were unable to recover it. 00:37:29.404 [2024-09-29 16:45:29.791770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.404 [2024-09-29 16:45:29.791824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.404 qpair failed and we were unable to recover it. 
00:37:29.404 [2024-09-29 16:45:29.791985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.404 [2024-09-29 16:45:29.792038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.404 qpair failed and we were unable to recover it. 00:37:29.404 [2024-09-29 16:45:29.792165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.404 [2024-09-29 16:45:29.792218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.404 qpair failed and we were unable to recover it. 00:37:29.404 [2024-09-29 16:45:29.792336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.404 [2024-09-29 16:45:29.792370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.404 qpair failed and we were unable to recover it. 00:37:29.404 [2024-09-29 16:45:29.792513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.404 [2024-09-29 16:45:29.792547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.404 qpair failed and we were unable to recover it. 00:37:29.404 [2024-09-29 16:45:29.792695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.404 [2024-09-29 16:45:29.792732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.404 qpair failed and we were unable to recover it. 
00:37:29.404 [2024-09-29 16:45:29.792901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.404 [2024-09-29 16:45:29.792940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.404 qpair failed and we were unable to recover it. 00:37:29.404 [2024-09-29 16:45:29.793091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.405 [2024-09-29 16:45:29.793129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.405 qpair failed and we were unable to recover it. 00:37:29.405 [2024-09-29 16:45:29.793263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.405 [2024-09-29 16:45:29.793296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.405 qpair failed and we were unable to recover it. 00:37:29.405 [2024-09-29 16:45:29.793512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.405 [2024-09-29 16:45:29.793546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.405 qpair failed and we were unable to recover it. 00:37:29.405 [2024-09-29 16:45:29.793678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.405 [2024-09-29 16:45:29.793718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.405 qpair failed and we were unable to recover it. 
00:37:29.405 [2024-09-29 16:45:29.793884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.405 [2024-09-29 16:45:29.793947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.405 qpair failed and we were unable to recover it. 00:37:29.405 [2024-09-29 16:45:29.794169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.405 [2024-09-29 16:45:29.794224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.405 qpair failed and we were unable to recover it. 00:37:29.405 [2024-09-29 16:45:29.794384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.405 [2024-09-29 16:45:29.794419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.405 qpair failed and we were unable to recover it. 00:37:29.405 [2024-09-29 16:45:29.794588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.405 [2024-09-29 16:45:29.794634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.405 qpair failed and we were unable to recover it. 00:37:29.405 [2024-09-29 16:45:29.794818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.405 [2024-09-29 16:45:29.794870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.405 qpair failed and we were unable to recover it. 
00:37:29.405 [2024-09-29 16:45:29.795013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.405 [2024-09-29 16:45:29.795066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.405 qpair failed and we were unable to recover it. 00:37:29.405 [2024-09-29 16:45:29.795233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.405 [2024-09-29 16:45:29.795267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.405 qpair failed and we were unable to recover it. 00:37:29.405 [2024-09-29 16:45:29.795437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.405 [2024-09-29 16:45:29.795471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.405 qpair failed and we were unable to recover it. 00:37:29.405 [2024-09-29 16:45:29.795610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.405 [2024-09-29 16:45:29.795645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.405 qpair failed and we were unable to recover it. 00:37:29.405 [2024-09-29 16:45:29.795823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.405 [2024-09-29 16:45:29.795857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.405 qpair failed and we were unable to recover it. 
00:37:29.405 [2024-09-29 16:45:29.796034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.405 [2024-09-29 16:45:29.796073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.405 qpair failed and we were unable to recover it. 00:37:29.405 [2024-09-29 16:45:29.796244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.405 [2024-09-29 16:45:29.796288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.405 qpair failed and we were unable to recover it. 00:37:29.405 [2024-09-29 16:45:29.796445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.405 [2024-09-29 16:45:29.796482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.405 qpair failed and we were unable to recover it. 00:37:29.405 [2024-09-29 16:45:29.796621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.405 [2024-09-29 16:45:29.796656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.405 qpair failed and we were unable to recover it. 00:37:29.405 [2024-09-29 16:45:29.796830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.405 [2024-09-29 16:45:29.796863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.405 qpair failed and we were unable to recover it. 
00:37:29.405 [2024-09-29 16:45:29.797001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.405 [2024-09-29 16:45:29.797038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.405 qpair failed and we were unable to recover it. 00:37:29.405 [2024-09-29 16:45:29.797186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.405 [2024-09-29 16:45:29.797223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.405 qpair failed and we were unable to recover it. 00:37:29.405 [2024-09-29 16:45:29.797401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.405 [2024-09-29 16:45:29.797437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.405 qpair failed and we were unable to recover it. 00:37:29.405 [2024-09-29 16:45:29.797592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.405 [2024-09-29 16:45:29.797628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.405 qpair failed and we were unable to recover it. 00:37:29.405 [2024-09-29 16:45:29.797805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.405 [2024-09-29 16:45:29.797838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.405 qpair failed and we were unable to recover it. 
00:37:29.405 [2024-09-29 16:45:29.797995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.405 [2024-09-29 16:45:29.798042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.405 qpair failed and we were unable to recover it. 00:37:29.405 [2024-09-29 16:45:29.798196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.405 [2024-09-29 16:45:29.798235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.405 qpair failed and we were unable to recover it. 00:37:29.405 [2024-09-29 16:45:29.798442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.405 [2024-09-29 16:45:29.798480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.405 qpair failed and we were unable to recover it. 00:37:29.405 [2024-09-29 16:45:29.798631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.405 [2024-09-29 16:45:29.798668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.405 qpair failed and we were unable to recover it. 00:37:29.405 [2024-09-29 16:45:29.798806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.405 [2024-09-29 16:45:29.798840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.405 qpair failed and we were unable to recover it. 
00:37:29.405 [2024-09-29 16:45:29.798990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.405 [2024-09-29 16:45:29.799023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.405 qpair failed and we were unable to recover it.
00:37:29.405 [2024-09-29 16:45:29.799205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.405 [2024-09-29 16:45:29.799264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.405 qpair failed and we were unable to recover it.
00:37:29.405 [2024-09-29 16:45:29.799470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.405 [2024-09-29 16:45:29.799519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.405 qpair failed and we were unable to recover it.
00:37:29.405 [2024-09-29 16:45:29.799708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.405 [2024-09-29 16:45:29.799742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.405 qpair failed and we were unable to recover it.
00:37:29.405 [2024-09-29 16:45:29.799861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.405 [2024-09-29 16:45:29.799894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.405 qpair failed and we were unable to recover it.
00:37:29.405 [2024-09-29 16:45:29.800073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.405 [2024-09-29 16:45:29.800127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.406 qpair failed and we were unable to recover it.
00:37:29.406 [2024-09-29 16:45:29.800453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.406 [2024-09-29 16:45:29.800513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.406 qpair failed and we were unable to recover it.
00:37:29.406 [2024-09-29 16:45:29.800711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.406 [2024-09-29 16:45:29.800744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.406 qpair failed and we were unable to recover it.
00:37:29.406 [2024-09-29 16:45:29.800890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.406 [2024-09-29 16:45:29.800926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.406 qpair failed and we were unable to recover it.
00:37:29.406 [2024-09-29 16:45:29.801074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.406 [2024-09-29 16:45:29.801125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.406 qpair failed and we were unable to recover it.
00:37:29.406 [2024-09-29 16:45:29.801405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.406 [2024-09-29 16:45:29.801462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.406 qpair failed and we were unable to recover it.
00:37:29.406 [2024-09-29 16:45:29.801623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.406 [2024-09-29 16:45:29.801660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.406 qpair failed and we were unable to recover it.
00:37:29.406 [2024-09-29 16:45:29.801813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.406 [2024-09-29 16:45:29.801845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.406 qpair failed and we were unable to recover it.
00:37:29.406 [2024-09-29 16:45:29.802018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.406 [2024-09-29 16:45:29.802071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.406 qpair failed and we were unable to recover it.
00:37:29.406 [2024-09-29 16:45:29.802270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.406 [2024-09-29 16:45:29.802309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.406 qpair failed and we were unable to recover it.
00:37:29.406 [2024-09-29 16:45:29.802471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.406 [2024-09-29 16:45:29.802509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.406 qpair failed and we were unable to recover it.
00:37:29.406 [2024-09-29 16:45:29.802658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.406 [2024-09-29 16:45:29.802722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.406 qpair failed and we were unable to recover it.
00:37:29.406 [2024-09-29 16:45:29.802870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.406 [2024-09-29 16:45:29.802905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.406 qpair failed and we were unable to recover it.
00:37:29.406 [2024-09-29 16:45:29.803124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.406 [2024-09-29 16:45:29.803162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.406 qpair failed and we were unable to recover it.
00:37:29.406 [2024-09-29 16:45:29.803351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.406 [2024-09-29 16:45:29.803388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.406 qpair failed and we were unable to recover it.
00:37:29.406 [2024-09-29 16:45:29.803546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.406 [2024-09-29 16:45:29.803584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.406 qpair failed and we were unable to recover it.
00:37:29.406 [2024-09-29 16:45:29.803748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.406 [2024-09-29 16:45:29.803782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.406 qpair failed and we were unable to recover it.
00:37:29.406 [2024-09-29 16:45:29.803895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.406 [2024-09-29 16:45:29.803930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.406 qpair failed and we were unable to recover it.
00:37:29.406 [2024-09-29 16:45:29.804135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.406 [2024-09-29 16:45:29.804189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.406 qpair failed and we were unable to recover it.
00:37:29.406 [2024-09-29 16:45:29.804456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.406 [2024-09-29 16:45:29.804493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.406 qpair failed and we were unable to recover it.
00:37:29.406 [2024-09-29 16:45:29.804653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.406 [2024-09-29 16:45:29.804700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.406 qpair failed and we were unable to recover it.
00:37:29.406 [2024-09-29 16:45:29.804856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.406 [2024-09-29 16:45:29.804893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.406 qpair failed and we were unable to recover it.
00:37:29.406 [2024-09-29 16:45:29.805049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.406 [2024-09-29 16:45:29.805081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.406 qpair failed and we were unable to recover it.
00:37:29.406 [2024-09-29 16:45:29.805312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.406 [2024-09-29 16:45:29.805349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.406 qpair failed and we were unable to recover it.
00:37:29.406 [2024-09-29 16:45:29.805507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.406 [2024-09-29 16:45:29.805544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.406 qpair failed and we were unable to recover it.
00:37:29.406 [2024-09-29 16:45:29.805718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.406 [2024-09-29 16:45:29.805753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.406 qpair failed and we were unable to recover it.
00:37:29.406 [2024-09-29 16:45:29.805895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.406 [2024-09-29 16:45:29.805932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.406 qpair failed and we were unable to recover it.
00:37:29.406 [2024-09-29 16:45:29.806109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.406 [2024-09-29 16:45:29.806144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.406 qpair failed and we were unable to recover it.
00:37:29.406 [2024-09-29 16:45:29.806364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.406 [2024-09-29 16:45:29.806400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.406 qpair failed and we were unable to recover it.
00:37:29.406 [2024-09-29 16:45:29.806565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.406 [2024-09-29 16:45:29.806597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.406 qpair failed and we were unable to recover it.
00:37:29.406 [2024-09-29 16:45:29.806739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.406 [2024-09-29 16:45:29.806773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.406 qpair failed and we were unable to recover it.
00:37:29.406 [2024-09-29 16:45:29.806903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.407 [2024-09-29 16:45:29.806951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.407 qpair failed and we were unable to recover it.
00:37:29.407 [2024-09-29 16:45:29.807100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.407 [2024-09-29 16:45:29.807155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.407 qpair failed and we were unable to recover it.
00:37:29.407 [2024-09-29 16:45:29.807347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.407 [2024-09-29 16:45:29.807400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.407 qpair failed and we were unable to recover it.
00:37:29.407 [2024-09-29 16:45:29.807541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.407 [2024-09-29 16:45:29.807575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.407 qpair failed and we were unable to recover it.
00:37:29.407 [2024-09-29 16:45:29.807779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.407 [2024-09-29 16:45:29.807827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.407 qpair failed and we were unable to recover it.
00:37:29.407 [2024-09-29 16:45:29.808002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.407 [2024-09-29 16:45:29.808042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.407 qpair failed and we were unable to recover it.
00:37:29.407 [2024-09-29 16:45:29.808203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.407 [2024-09-29 16:45:29.808253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.407 qpair failed and we were unable to recover it.
00:37:29.407 [2024-09-29 16:45:29.808458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.407 [2024-09-29 16:45:29.808523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.407 qpair failed and we were unable to recover it.
00:37:29.407 [2024-09-29 16:45:29.808686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.407 [2024-09-29 16:45:29.808738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.407 qpair failed and we were unable to recover it.
00:37:29.407 [2024-09-29 16:45:29.808853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.407 [2024-09-29 16:45:29.808887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.407 qpair failed and we were unable to recover it.
00:37:29.407 [2024-09-29 16:45:29.809047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.407 [2024-09-29 16:45:29.809096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.407 qpair failed and we were unable to recover it.
00:37:29.407 [2024-09-29 16:45:29.809211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.407 [2024-09-29 16:45:29.809243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.407 qpair failed and we were unable to recover it.
00:37:29.407 [2024-09-29 16:45:29.809415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.407 [2024-09-29 16:45:29.809448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.407 qpair failed and we were unable to recover it.
00:37:29.407 [2024-09-29 16:45:29.809583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.407 [2024-09-29 16:45:29.809635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.407 qpair failed and we were unable to recover it.
00:37:29.407 [2024-09-29 16:45:29.809815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.407 [2024-09-29 16:45:29.809848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.407 qpair failed and we were unable to recover it.
00:37:29.407 [2024-09-29 16:45:29.809960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.407 [2024-09-29 16:45:29.809992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.407 qpair failed and we were unable to recover it.
00:37:29.407 [2024-09-29 16:45:29.810121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.407 [2024-09-29 16:45:29.810168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.407 qpair failed and we were unable to recover it.
00:37:29.407 [2024-09-29 16:45:29.810328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.407 [2024-09-29 16:45:29.810365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.407 qpair failed and we were unable to recover it.
00:37:29.407 [2024-09-29 16:45:29.810487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.407 [2024-09-29 16:45:29.810521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.407 qpair failed and we were unable to recover it.
00:37:29.407 [2024-09-29 16:45:29.810638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.407 [2024-09-29 16:45:29.810678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.407 qpair failed and we were unable to recover it.
00:37:29.407 [2024-09-29 16:45:29.810785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.407 [2024-09-29 16:45:29.810819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.407 qpair failed and we were unable to recover it.
00:37:29.407 [2024-09-29 16:45:29.811007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.407 [2024-09-29 16:45:29.811055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.407 qpair failed and we were unable to recover it.
00:37:29.407 [2024-09-29 16:45:29.811199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.407 [2024-09-29 16:45:29.811234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.407 qpair failed and we were unable to recover it.
00:37:29.407 [2024-09-29 16:45:29.811388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.407 [2024-09-29 16:45:29.811422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.407 qpair failed and we were unable to recover it.
00:37:29.407 [2024-09-29 16:45:29.811564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.407 [2024-09-29 16:45:29.811596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.407 qpair failed and we were unable to recover it.
00:37:29.407 [2024-09-29 16:45:29.811754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.407 [2024-09-29 16:45:29.811787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.407 qpair failed and we were unable to recover it.
00:37:29.407 [2024-09-29 16:45:29.811901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.407 [2024-09-29 16:45:29.811953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.407 qpair failed and we were unable to recover it.
00:37:29.407 [2024-09-29 16:45:29.812171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.407 [2024-09-29 16:45:29.812232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.407 qpair failed and we were unable to recover it.
00:37:29.407 [2024-09-29 16:45:29.812371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.407 [2024-09-29 16:45:29.812422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.407 qpair failed and we were unable to recover it.
00:37:29.407 [2024-09-29 16:45:29.812570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.407 [2024-09-29 16:45:29.812604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.407 qpair failed and we were unable to recover it.
00:37:29.407 [2024-09-29 16:45:29.812762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.407 [2024-09-29 16:45:29.812805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.407 qpair failed and we were unable to recover it.
00:37:29.407 [2024-09-29 16:45:29.812936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.407 [2024-09-29 16:45:29.812974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.407 qpair failed and we were unable to recover it.
00:37:29.407 [2024-09-29 16:45:29.813154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.407 [2024-09-29 16:45:29.813189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.408 qpair failed and we were unable to recover it.
00:37:29.408 [2024-09-29 16:45:29.813320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.408 [2024-09-29 16:45:29.813356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.408 qpair failed and we were unable to recover it.
00:37:29.408 [2024-09-29 16:45:29.813494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.408 [2024-09-29 16:45:29.813527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.408 qpair failed and we were unable to recover it.
00:37:29.408 [2024-09-29 16:45:29.813685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.408 [2024-09-29 16:45:29.813718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.408 qpair failed and we were unable to recover it.
00:37:29.408 [2024-09-29 16:45:29.813851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.408 [2024-09-29 16:45:29.813887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.408 qpair failed and we were unable to recover it.
00:37:29.408 [2024-09-29 16:45:29.814056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.408 [2024-09-29 16:45:29.814109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.408 qpair failed and we were unable to recover it.
00:37:29.408 [2024-09-29 16:45:29.814249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.408 [2024-09-29 16:45:29.814290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.408 qpair failed and we were unable to recover it.
00:37:29.408 [2024-09-29 16:45:29.814421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.408 [2024-09-29 16:45:29.814456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.408 qpair failed and we were unable to recover it.
00:37:29.408 [2024-09-29 16:45:29.814609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.408 [2024-09-29 16:45:29.814643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.408 qpair failed and we were unable to recover it.
00:37:29.408 [2024-09-29 16:45:29.814803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.408 [2024-09-29 16:45:29.814857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.408 qpair failed and we were unable to recover it.
00:37:29.408 [2024-09-29 16:45:29.815051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.408 [2024-09-29 16:45:29.815103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.408 qpair failed and we were unable to recover it.
00:37:29.408 [2024-09-29 16:45:29.815232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.408 [2024-09-29 16:45:29.815269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.408 qpair failed and we were unable to recover it.
00:37:29.408 [2024-09-29 16:45:29.815432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.408 [2024-09-29 16:45:29.815466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.408 qpair failed and we were unable to recover it.
00:37:29.408 [2024-09-29 16:45:29.815577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.408 [2024-09-29 16:45:29.815611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.408 qpair failed and we were unable to recover it.
00:37:29.408 [2024-09-29 16:45:29.815804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.408 [2024-09-29 16:45:29.815852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.408 qpair failed and we were unable to recover it.
00:37:29.408 [2024-09-29 16:45:29.815993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.408 [2024-09-29 16:45:29.816028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.408 qpair failed and we were unable to recover it.
00:37:29.408 [2024-09-29 16:45:29.816197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.408 [2024-09-29 16:45:29.816230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.408 qpair failed and we were unable to recover it.
00:37:29.408 [2024-09-29 16:45:29.816450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.408 [2024-09-29 16:45:29.816506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.408 qpair failed and we were unable to recover it.
00:37:29.408 [2024-09-29 16:45:29.816656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.408 [2024-09-29 16:45:29.816719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.408 qpair failed and we were unable to recover it.
00:37:29.408 [2024-09-29 16:45:29.816877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.408 [2024-09-29 16:45:29.816913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.408 qpair failed and we were unable to recover it.
00:37:29.408 [2024-09-29 16:45:29.817168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.408 [2024-09-29 16:45:29.817204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.408 qpair failed and we were unable to recover it.
00:37:29.408 [2024-09-29 16:45:29.817357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.408 [2024-09-29 16:45:29.817393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.408 qpair failed and we were unable to recover it.
00:37:29.408 [2024-09-29 16:45:29.817509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.408 [2024-09-29 16:45:29.817546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.408 qpair failed and we were unable to recover it.
00:37:29.408 [2024-09-29 16:45:29.817716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.408 [2024-09-29 16:45:29.817752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.408 qpair failed and we were unable to recover it.
00:37:29.408 [2024-09-29 16:45:29.817936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.408 [2024-09-29 16:45:29.817989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.408 qpair failed and we were unable to recover it.
00:37:29.408 [2024-09-29 16:45:29.818158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.408 [2024-09-29 16:45:29.818204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.408 qpair failed and we were unable to recover it.
00:37:29.408 [2024-09-29 16:45:29.818390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.408 [2024-09-29 16:45:29.818461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.408 qpair failed and we were unable to recover it.
00:37:29.408 [2024-09-29 16:45:29.818621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.408 [2024-09-29 16:45:29.818656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.408 qpair failed and we were unable to recover it.
00:37:29.408 [2024-09-29 16:45:29.818786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.408 [2024-09-29 16:45:29.818820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.408 qpair failed and we were unable to recover it.
00:37:29.408 [2024-09-29 16:45:29.818939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.408 [2024-09-29 16:45:29.818974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.408 qpair failed and we were unable to recover it.
00:37:29.409 [2024-09-29 16:45:29.819114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.409 [2024-09-29 16:45:29.819150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.409 qpair failed and we were unable to recover it.
00:37:29.409 [2024-09-29 16:45:29.819392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.409 [2024-09-29 16:45:29.819464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.409 qpair failed and we were unable to recover it.
00:37:29.409 [2024-09-29 16:45:29.819589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.409 [2024-09-29 16:45:29.819625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.409 qpair failed and we were unable to recover it.
00:37:29.409 [2024-09-29 16:45:29.819824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.409 [2024-09-29 16:45:29.819871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.409 qpair failed and we were unable to recover it.
00:37:29.409 [2024-09-29 16:45:29.820006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.409 [2024-09-29 16:45:29.820053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.409 qpair failed and we were unable to recover it.
00:37:29.409 [2024-09-29 16:45:29.820281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.409 [2024-09-29 16:45:29.820346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.409 qpair failed and we were unable to recover it.
00:37:29.409 [2024-09-29 16:45:29.820572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.409 [2024-09-29 16:45:29.820606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.409 qpair failed and we were unable to recover it.
00:37:29.409 [2024-09-29 16:45:29.820750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.409 [2024-09-29 16:45:29.820784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.409 qpair failed and we were unable to recover it.
00:37:29.409 [2024-09-29 16:45:29.820929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.409 [2024-09-29 16:45:29.820963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.409 qpair failed and we were unable to recover it.
00:37:29.409 [2024-09-29 16:45:29.821143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.409 [2024-09-29 16:45:29.821208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.409 qpair failed and we were unable to recover it.
00:37:29.409 [2024-09-29 16:45:29.821406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.409 [2024-09-29 16:45:29.821469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.409 qpair failed and we were unable to recover it.
00:37:29.409 [2024-09-29 16:45:29.821604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.409 [2024-09-29 16:45:29.821637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.409 qpair failed and we were unable to recover it.
00:37:29.409 [2024-09-29 16:45:29.821790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.409 [2024-09-29 16:45:29.821825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.409 qpair failed and we were unable to recover it.
00:37:29.409 [2024-09-29 16:45:29.821983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.409 [2024-09-29 16:45:29.822031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.409 qpair failed and we were unable to recover it.
00:37:29.409 [2024-09-29 16:45:29.822222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.409 [2024-09-29 16:45:29.822296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.409 qpair failed and we were unable to recover it.
00:37:29.409 [2024-09-29 16:45:29.822526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.409 [2024-09-29 16:45:29.822585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.409 qpair failed and we were unable to recover it.
00:37:29.409 [2024-09-29 16:45:29.822696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.409 [2024-09-29 16:45:29.822730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.409 qpair failed and we were unable to recover it.
00:37:29.409 [2024-09-29 16:45:29.822888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.409 [2024-09-29 16:45:29.822940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.409 qpair failed and we were unable to recover it. 00:37:29.409 [2024-09-29 16:45:29.823214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.409 [2024-09-29 16:45:29.823266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.409 qpair failed and we were unable to recover it. 00:37:29.409 [2024-09-29 16:45:29.823453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.409 [2024-09-29 16:45:29.823504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.409 qpair failed and we were unable to recover it. 00:37:29.409 [2024-09-29 16:45:29.823683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.409 [2024-09-29 16:45:29.823717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.409 qpair failed and we were unable to recover it. 00:37:29.409 [2024-09-29 16:45:29.823852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.409 [2024-09-29 16:45:29.823886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.409 qpair failed and we were unable to recover it. 
00:37:29.409 [2024-09-29 16:45:29.824053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.409 [2024-09-29 16:45:29.824108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.409 qpair failed and we were unable to recover it. 00:37:29.409 [2024-09-29 16:45:29.824238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.409 [2024-09-29 16:45:29.824290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.409 qpair failed and we were unable to recover it. 00:37:29.409 [2024-09-29 16:45:29.824549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.409 [2024-09-29 16:45:29.824611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.409 qpair failed and we were unable to recover it. 00:37:29.409 [2024-09-29 16:45:29.824791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.409 [2024-09-29 16:45:29.824826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.409 qpair failed and we were unable to recover it. 00:37:29.409 [2024-09-29 16:45:29.824977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.409 [2024-09-29 16:45:29.825014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.409 qpair failed and we were unable to recover it. 
00:37:29.409 [2024-09-29 16:45:29.825229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.409 [2024-09-29 16:45:29.825266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.409 qpair failed and we were unable to recover it. 00:37:29.409 [2024-09-29 16:45:29.825492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.409 [2024-09-29 16:45:29.825548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.409 qpair failed and we were unable to recover it. 00:37:29.409 [2024-09-29 16:45:29.825691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.409 [2024-09-29 16:45:29.825726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.409 qpair failed and we were unable to recover it. 00:37:29.410 [2024-09-29 16:45:29.825894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.410 [2024-09-29 16:45:29.825928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.410 qpair failed and we were unable to recover it. 00:37:29.410 [2024-09-29 16:45:29.826101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.410 [2024-09-29 16:45:29.826153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.410 qpair failed and we were unable to recover it. 
00:37:29.410 [2024-09-29 16:45:29.826317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.410 [2024-09-29 16:45:29.826370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.410 qpair failed and we were unable to recover it. 00:37:29.410 [2024-09-29 16:45:29.826554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.410 [2024-09-29 16:45:29.826626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.410 qpair failed and we were unable to recover it. 00:37:29.410 [2024-09-29 16:45:29.826775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.410 [2024-09-29 16:45:29.826810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.410 qpair failed and we were unable to recover it. 00:37:29.410 [2024-09-29 16:45:29.826994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.410 [2024-09-29 16:45:29.827036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.410 qpair failed and we were unable to recover it. 00:37:29.410 [2024-09-29 16:45:29.827279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.410 [2024-09-29 16:45:29.827340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.410 qpair failed and we were unable to recover it. 
00:37:29.410 [2024-09-29 16:45:29.827516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.410 [2024-09-29 16:45:29.827573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.410 qpair failed and we were unable to recover it. 00:37:29.410 [2024-09-29 16:45:29.827712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.410 [2024-09-29 16:45:29.827746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.410 qpair failed and we were unable to recover it. 00:37:29.410 [2024-09-29 16:45:29.827863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.410 [2024-09-29 16:45:29.827895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.410 qpair failed and we were unable to recover it. 00:37:29.410 [2024-09-29 16:45:29.828022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.410 [2024-09-29 16:45:29.828058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.410 qpair failed and we were unable to recover it. 00:37:29.410 [2024-09-29 16:45:29.828212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.410 [2024-09-29 16:45:29.828249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.410 qpair failed and we were unable to recover it. 
00:37:29.410 [2024-09-29 16:45:29.828408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.410 [2024-09-29 16:45:29.828444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.410 qpair failed and we were unable to recover it. 00:37:29.410 [2024-09-29 16:45:29.828563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.410 [2024-09-29 16:45:29.828611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.410 qpair failed and we were unable to recover it. 00:37:29.410 [2024-09-29 16:45:29.828711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.410 [2024-09-29 16:45:29.828745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.410 qpair failed and we were unable to recover it. 00:37:29.410 [2024-09-29 16:45:29.828860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.410 [2024-09-29 16:45:29.828911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.410 qpair failed and we were unable to recover it. 00:37:29.410 [2024-09-29 16:45:29.829068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.410 [2024-09-29 16:45:29.829104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.410 qpair failed and we were unable to recover it. 
00:37:29.410 [2024-09-29 16:45:29.829259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.410 [2024-09-29 16:45:29.829296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.410 qpair failed and we were unable to recover it. 00:37:29.410 [2024-09-29 16:45:29.829451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.410 [2024-09-29 16:45:29.829487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.410 qpair failed and we were unable to recover it. 00:37:29.410 [2024-09-29 16:45:29.829664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.410 [2024-09-29 16:45:29.829728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.410 qpair failed and we were unable to recover it. 00:37:29.410 [2024-09-29 16:45:29.829868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.410 [2024-09-29 16:45:29.829900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.410 qpair failed and we were unable to recover it. 00:37:29.410 [2024-09-29 16:45:29.830035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.410 [2024-09-29 16:45:29.830072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.410 qpair failed and we were unable to recover it. 
00:37:29.410 [2024-09-29 16:45:29.830251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.410 [2024-09-29 16:45:29.830288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.410 qpair failed and we were unable to recover it. 00:37:29.410 [2024-09-29 16:45:29.830412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.410 [2024-09-29 16:45:29.830447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.410 qpair failed and we were unable to recover it. 00:37:29.410 [2024-09-29 16:45:29.830648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.410 [2024-09-29 16:45:29.830710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.410 qpair failed and we were unable to recover it. 00:37:29.410 [2024-09-29 16:45:29.830917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.410 [2024-09-29 16:45:29.830964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.410 qpair failed and we were unable to recover it. 00:37:29.410 [2024-09-29 16:45:29.831140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.410 [2024-09-29 16:45:29.831180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.410 qpair failed and we were unable to recover it. 
00:37:29.410 [2024-09-29 16:45:29.831378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.410 [2024-09-29 16:45:29.831416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.410 qpair failed and we were unable to recover it. 00:37:29.410 [2024-09-29 16:45:29.831598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.410 [2024-09-29 16:45:29.831661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.410 qpair failed and we were unable to recover it. 00:37:29.410 [2024-09-29 16:45:29.831821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.410 [2024-09-29 16:45:29.831855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.410 qpair failed and we were unable to recover it. 00:37:29.410 [2024-09-29 16:45:29.831996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.410 [2024-09-29 16:45:29.832031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.410 qpair failed and we were unable to recover it. 00:37:29.410 [2024-09-29 16:45:29.832163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.410 [2024-09-29 16:45:29.832201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.410 qpair failed and we were unable to recover it. 
00:37:29.410 [2024-09-29 16:45:29.832358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.410 [2024-09-29 16:45:29.832394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.411 qpair failed and we were unable to recover it. 00:37:29.411 [2024-09-29 16:45:29.832557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.411 [2024-09-29 16:45:29.832594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.411 qpair failed and we were unable to recover it. 00:37:29.411 [2024-09-29 16:45:29.832734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.411 [2024-09-29 16:45:29.832768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.411 qpair failed and we were unable to recover it. 00:37:29.411 [2024-09-29 16:45:29.832964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.411 [2024-09-29 16:45:29.832997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.411 qpair failed and we were unable to recover it. 00:37:29.411 [2024-09-29 16:45:29.833104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.411 [2024-09-29 16:45:29.833154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.411 qpair failed and we were unable to recover it. 
00:37:29.411 [2024-09-29 16:45:29.833285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.411 [2024-09-29 16:45:29.833321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.411 qpair failed and we were unable to recover it. 00:37:29.411 [2024-09-29 16:45:29.833437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.411 [2024-09-29 16:45:29.833473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.411 qpair failed and we were unable to recover it. 00:37:29.411 [2024-09-29 16:45:29.833664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.411 [2024-09-29 16:45:29.833709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.411 qpair failed and we were unable to recover it. 00:37:29.411 [2024-09-29 16:45:29.833855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.411 [2024-09-29 16:45:29.833903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.411 qpair failed and we were unable to recover it. 00:37:29.411 [2024-09-29 16:45:29.834105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.411 [2024-09-29 16:45:29.834159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.411 qpair failed and we were unable to recover it. 
00:37:29.411 [2024-09-29 16:45:29.834455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.411 [2024-09-29 16:45:29.834508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.411 qpair failed and we were unable to recover it. 00:37:29.411 [2024-09-29 16:45:29.834681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.411 [2024-09-29 16:45:29.834733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.411 qpair failed and we were unable to recover it. 00:37:29.411 [2024-09-29 16:45:29.834883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.411 [2024-09-29 16:45:29.834916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.411 qpair failed and we were unable to recover it. 00:37:29.411 [2024-09-29 16:45:29.835116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.411 [2024-09-29 16:45:29.835171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.411 qpair failed and we were unable to recover it. 00:37:29.411 [2024-09-29 16:45:29.835408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.411 [2024-09-29 16:45:29.835466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.411 qpair failed and we were unable to recover it. 
00:37:29.411 [2024-09-29 16:45:29.835647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.411 [2024-09-29 16:45:29.835693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.411 qpair failed and we were unable to recover it. 00:37:29.411 [2024-09-29 16:45:29.835876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.411 [2024-09-29 16:45:29.835908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.411 qpair failed and we were unable to recover it. 00:37:29.411 [2024-09-29 16:45:29.836104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.411 [2024-09-29 16:45:29.836141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.411 qpair failed and we were unable to recover it. 00:37:29.411 [2024-09-29 16:45:29.836322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.411 [2024-09-29 16:45:29.836358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.411 qpair failed and we were unable to recover it. 00:37:29.411 [2024-09-29 16:45:29.836520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.411 [2024-09-29 16:45:29.836553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.411 qpair failed and we were unable to recover it. 
00:37:29.411 [2024-09-29 16:45:29.836678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.411 [2024-09-29 16:45:29.836713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.411 qpair failed and we were unable to recover it. 00:37:29.411 [2024-09-29 16:45:29.836833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.411 [2024-09-29 16:45:29.836865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.411 qpair failed and we were unable to recover it. 00:37:29.411 [2024-09-29 16:45:29.837010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.411 [2024-09-29 16:45:29.837044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.411 qpair failed and we were unable to recover it. 00:37:29.411 [2024-09-29 16:45:29.837204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.411 [2024-09-29 16:45:29.837241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.411 qpair failed and we were unable to recover it. 00:37:29.411 [2024-09-29 16:45:29.837405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.411 [2024-09-29 16:45:29.837458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.411 qpair failed and we were unable to recover it. 
00:37:29.411 [2024-09-29 16:45:29.837619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.411 [2024-09-29 16:45:29.837652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.411 qpair failed and we were unable to recover it. 00:37:29.411 [2024-09-29 16:45:29.837798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.411 [2024-09-29 16:45:29.837832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.411 qpair failed and we were unable to recover it. 00:37:29.411 [2024-09-29 16:45:29.838030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.411 [2024-09-29 16:45:29.838063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.411 qpair failed and we were unable to recover it. 00:37:29.411 [2024-09-29 16:45:29.838325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.411 [2024-09-29 16:45:29.838383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.411 qpair failed and we were unable to recover it. 00:37:29.411 [2024-09-29 16:45:29.838567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.411 [2024-09-29 16:45:29.838604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.411 qpair failed and we were unable to recover it. 
00:37:29.411 [2024-09-29 16:45:29.838748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.411 [2024-09-29 16:45:29.838780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.411 qpair failed and we were unable to recover it. 00:37:29.411 [2024-09-29 16:45:29.838938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.411 [2024-09-29 16:45:29.838974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.411 qpair failed and we were unable to recover it. 00:37:29.411 [2024-09-29 16:45:29.839158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.411 [2024-09-29 16:45:29.839195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.411 qpair failed and we were unable to recover it. 00:37:29.411 [2024-09-29 16:45:29.839360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.411 [2024-09-29 16:45:29.839396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.411 qpair failed and we were unable to recover it. 00:37:29.411 [2024-09-29 16:45:29.839558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.411 [2024-09-29 16:45:29.839605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.411 qpair failed and we were unable to recover it. 
00:37:29.411 [2024-09-29 16:45:29.839765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.411 [2024-09-29 16:45:29.839802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.411 qpair failed and we were unable to recover it. 00:37:29.411 [2024-09-29 16:45:29.839995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.411 [2024-09-29 16:45:29.840048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.411 qpair failed and we were unable to recover it. 00:37:29.411 [2024-09-29 16:45:29.840284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.411 [2024-09-29 16:45:29.840351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.412 qpair failed and we were unable to recover it. 00:37:29.412 [2024-09-29 16:45:29.840498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.412 [2024-09-29 16:45:29.840532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.412 qpair failed and we were unable to recover it. 00:37:29.412 [2024-09-29 16:45:29.840681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.412 [2024-09-29 16:45:29.840716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.412 qpair failed and we were unable to recover it. 
00:37:29.412 [2024-09-29 16:45:29.840892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.412 [2024-09-29 16:45:29.840930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.412 qpair failed and we were unable to recover it. 00:37:29.412 [2024-09-29 16:45:29.841082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.412 [2024-09-29 16:45:29.841119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.412 qpair failed and we were unable to recover it. 00:37:29.412 [2024-09-29 16:45:29.841272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.412 [2024-09-29 16:45:29.841308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.412 qpair failed and we were unable to recover it. 00:37:29.412 [2024-09-29 16:45:29.841463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.412 [2024-09-29 16:45:29.841498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.412 qpair failed and we were unable to recover it. 00:37:29.412 [2024-09-29 16:45:29.841663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.412 [2024-09-29 16:45:29.841705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.412 qpair failed and we were unable to recover it. 
00:37:29.412 [2024-09-29 16:45:29.841826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.412 [2024-09-29 16:45:29.841859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.412 qpair failed and we were unable to recover it. 00:37:29.412 [2024-09-29 16:45:29.841993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.412 [2024-09-29 16:45:29.842047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.412 qpair failed and we were unable to recover it. 00:37:29.412 [2024-09-29 16:45:29.842184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.412 [2024-09-29 16:45:29.842235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.412 qpair failed and we were unable to recover it. 00:37:29.412 [2024-09-29 16:45:29.842355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.412 [2024-09-29 16:45:29.842408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.412 qpair failed and we were unable to recover it. 00:37:29.412 [2024-09-29 16:45:29.842521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.412 [2024-09-29 16:45:29.842555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.412 qpair failed and we were unable to recover it. 
00:37:29.412 [2024-09-29 16:45:29.842724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.412 [2024-09-29 16:45:29.842759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.412 qpair failed and we were unable to recover it. 00:37:29.412 [2024-09-29 16:45:29.842901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.412 [2024-09-29 16:45:29.842934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.412 qpair failed and we were unable to recover it. 00:37:29.412 [2024-09-29 16:45:29.843046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.412 [2024-09-29 16:45:29.843080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.412 qpair failed and we were unable to recover it. 00:37:29.412 [2024-09-29 16:45:29.843197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.412 [2024-09-29 16:45:29.843238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.412 qpair failed and we were unable to recover it. 00:37:29.412 [2024-09-29 16:45:29.843352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.412 [2024-09-29 16:45:29.843387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.412 qpair failed and we were unable to recover it. 
00:37:29.412 [2024-09-29 16:45:29.843529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.412 [2024-09-29 16:45:29.843563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.412 qpair failed and we were unable to recover it. 00:37:29.412 [2024-09-29 16:45:29.843750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.412 [2024-09-29 16:45:29.843798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.412 qpair failed and we were unable to recover it. 00:37:29.412 [2024-09-29 16:45:29.843952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.412 [2024-09-29 16:45:29.843988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.412 qpair failed and we were unable to recover it. 00:37:29.412 [2024-09-29 16:45:29.844134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.412 [2024-09-29 16:45:29.844169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.412 qpair failed and we were unable to recover it. 00:37:29.412 [2024-09-29 16:45:29.844285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.412 [2024-09-29 16:45:29.844318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.412 qpair failed and we were unable to recover it. 
00:37:29.412 [2024-09-29 16:45:29.844464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.412 [2024-09-29 16:45:29.844498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.412 qpair failed and we were unable to recover it. 00:37:29.412 [2024-09-29 16:45:29.844616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.412 [2024-09-29 16:45:29.844651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.412 qpair failed and we were unable to recover it. 00:37:29.412 [2024-09-29 16:45:29.844793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.412 [2024-09-29 16:45:29.844845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.412 qpair failed and we were unable to recover it. 00:37:29.412 [2024-09-29 16:45:29.845009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.412 [2024-09-29 16:45:29.845061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.412 qpair failed and we were unable to recover it. 00:37:29.412 [2024-09-29 16:45:29.845308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.412 [2024-09-29 16:45:29.845358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.412 qpair failed and we were unable to recover it. 
00:37:29.412 [2024-09-29 16:45:29.845524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.412 [2024-09-29 16:45:29.845558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.412 qpair failed and we were unable to recover it. 00:37:29.412 [2024-09-29 16:45:29.845722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.412 [2024-09-29 16:45:29.845761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.412 qpair failed and we were unable to recover it. 00:37:29.412 [2024-09-29 16:45:29.845922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.412 [2024-09-29 16:45:29.845962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.412 qpair failed and we were unable to recover it. 00:37:29.412 [2024-09-29 16:45:29.846163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.412 [2024-09-29 16:45:29.846216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.412 qpair failed and we were unable to recover it. 00:37:29.412 [2024-09-29 16:45:29.846384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.412 [2024-09-29 16:45:29.846423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.412 qpair failed and we were unable to recover it. 
00:37:29.412 [2024-09-29 16:45:29.846585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.412 [2024-09-29 16:45:29.846619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.412 qpair failed and we were unable to recover it. 00:37:29.412 [2024-09-29 16:45:29.846747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.412 [2024-09-29 16:45:29.846781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.412 qpair failed and we were unable to recover it. 00:37:29.412 [2024-09-29 16:45:29.846919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.412 [2024-09-29 16:45:29.846951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.412 qpair failed and we were unable to recover it. 00:37:29.412 [2024-09-29 16:45:29.847181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.412 [2024-09-29 16:45:29.847242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.412 qpair failed and we were unable to recover it. 00:37:29.412 [2024-09-29 16:45:29.847440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.413 [2024-09-29 16:45:29.847502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.413 qpair failed and we were unable to recover it. 
00:37:29.413 [2024-09-29 16:45:29.847627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.413 [2024-09-29 16:45:29.847688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.413 qpair failed and we were unable to recover it. 00:37:29.413 [2024-09-29 16:45:29.847871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.413 [2024-09-29 16:45:29.847925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.413 qpair failed and we were unable to recover it. 00:37:29.413 [2024-09-29 16:45:29.848083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.413 [2024-09-29 16:45:29.848136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.413 qpair failed and we were unable to recover it. 00:37:29.413 [2024-09-29 16:45:29.848299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.413 [2024-09-29 16:45:29.848338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.413 qpair failed and we were unable to recover it. 00:37:29.413 [2024-09-29 16:45:29.848501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.413 [2024-09-29 16:45:29.848540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.413 qpair failed and we were unable to recover it. 
00:37:29.413 [2024-09-29 16:45:29.848694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.413 [2024-09-29 16:45:29.848740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.413 qpair failed and we were unable to recover it. 00:37:29.413 [2024-09-29 16:45:29.848934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.413 [2024-09-29 16:45:29.848987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.413 qpair failed and we were unable to recover it. 00:37:29.413 [2024-09-29 16:45:29.849276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.413 [2024-09-29 16:45:29.849334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.413 qpair failed and we were unable to recover it. 00:37:29.413 [2024-09-29 16:45:29.849551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.413 [2024-09-29 16:45:29.849601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.413 qpair failed and we were unable to recover it. 00:37:29.413 [2024-09-29 16:45:29.849712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.413 [2024-09-29 16:45:29.849747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.413 qpair failed and we were unable to recover it. 
00:37:29.413 [2024-09-29 16:45:29.849939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.413 [2024-09-29 16:45:29.849991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.413 qpair failed and we were unable to recover it. 00:37:29.413 [2024-09-29 16:45:29.850135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.413 [2024-09-29 16:45:29.850186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.413 qpair failed and we were unable to recover it. 00:37:29.413 [2024-09-29 16:45:29.850336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.413 [2024-09-29 16:45:29.850373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.413 qpair failed and we were unable to recover it. 00:37:29.413 [2024-09-29 16:45:29.850558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.413 [2024-09-29 16:45:29.850593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.413 qpair failed and we were unable to recover it. 00:37:29.413 [2024-09-29 16:45:29.850733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.413 [2024-09-29 16:45:29.850766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.413 qpair failed and we were unable to recover it. 
00:37:29.413 [2024-09-29 16:45:29.850880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.413 [2024-09-29 16:45:29.850912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.413 qpair failed and we were unable to recover it. 00:37:29.413 [2024-09-29 16:45:29.851079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.413 [2024-09-29 16:45:29.851115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.413 qpair failed and we were unable to recover it. 00:37:29.413 [2024-09-29 16:45:29.851238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.413 [2024-09-29 16:45:29.851274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.413 qpair failed and we were unable to recover it. 00:37:29.413 [2024-09-29 16:45:29.851431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.413 [2024-09-29 16:45:29.851474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.413 qpair failed and we were unable to recover it. 00:37:29.413 [2024-09-29 16:45:29.851624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.413 [2024-09-29 16:45:29.851659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.413 qpair failed and we were unable to recover it. 
00:37:29.413 [2024-09-29 16:45:29.851824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.413 [2024-09-29 16:45:29.851872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.413 qpair failed and we were unable to recover it. 00:37:29.413 [2024-09-29 16:45:29.852073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.413 [2024-09-29 16:45:29.852112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.413 qpair failed and we were unable to recover it. 00:37:29.413 [2024-09-29 16:45:29.852296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.413 [2024-09-29 16:45:29.852334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.413 qpair failed and we were unable to recover it. 00:37:29.413 [2024-09-29 16:45:29.852525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.413 [2024-09-29 16:45:29.852564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.413 qpair failed and we were unable to recover it. 00:37:29.413 [2024-09-29 16:45:29.852762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.413 [2024-09-29 16:45:29.852796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.413 qpair failed and we were unable to recover it. 
00:37:29.413 [2024-09-29 16:45:29.852915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.413 [2024-09-29 16:45:29.852950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.413 qpair failed and we were unable to recover it. 00:37:29.413 [2024-09-29 16:45:29.853099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.413 [2024-09-29 16:45:29.853151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.413 qpair failed and we were unable to recover it. 00:37:29.413 [2024-09-29 16:45:29.853371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.413 [2024-09-29 16:45:29.853408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.413 qpair failed and we were unable to recover it. 00:37:29.413 [2024-09-29 16:45:29.853571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.413 [2024-09-29 16:45:29.853608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.413 qpair failed and we were unable to recover it. 00:37:29.413 [2024-09-29 16:45:29.853788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.413 [2024-09-29 16:45:29.853822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.413 qpair failed and we were unable to recover it. 
00:37:29.413 [2024-09-29 16:45:29.853935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.413 [2024-09-29 16:45:29.853987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.413 qpair failed and we were unable to recover it. 00:37:29.413 [2024-09-29 16:45:29.854169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.413 [2024-09-29 16:45:29.854206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.413 qpair failed and we were unable to recover it. 00:37:29.413 [2024-09-29 16:45:29.854409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.413 [2024-09-29 16:45:29.854483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.413 qpair failed and we were unable to recover it. 00:37:29.413 [2024-09-29 16:45:29.854614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.413 [2024-09-29 16:45:29.854649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.413 qpair failed and we were unable to recover it. 00:37:29.413 [2024-09-29 16:45:29.854865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.413 [2024-09-29 16:45:29.854913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.413 qpair failed and we were unable to recover it. 
00:37:29.413 [2024-09-29 16:45:29.855096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.413 [2024-09-29 16:45:29.855131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.413 qpair failed and we were unable to recover it. 00:37:29.413 [2024-09-29 16:45:29.855253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.414 [2024-09-29 16:45:29.855305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.414 qpair failed and we were unable to recover it. 00:37:29.414 [2024-09-29 16:45:29.855444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.414 [2024-09-29 16:45:29.855483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.414 qpair failed and we were unable to recover it. 00:37:29.414 [2024-09-29 16:45:29.855612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.414 [2024-09-29 16:45:29.855646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.414 qpair failed and we were unable to recover it. 00:37:29.414 [2024-09-29 16:45:29.855820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.414 [2024-09-29 16:45:29.855853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.414 qpair failed and we were unable to recover it. 
00:37:29.414 [2024-09-29 16:45:29.856022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.414 [2024-09-29 16:45:29.856060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.414 qpair failed and we were unable to recover it. 00:37:29.414 [2024-09-29 16:45:29.856208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.414 [2024-09-29 16:45:29.856246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.414 qpair failed and we were unable to recover it. 00:37:29.414 [2024-09-29 16:45:29.856396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.414 [2024-09-29 16:45:29.856433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.414 qpair failed and we were unable to recover it. 00:37:29.414 [2024-09-29 16:45:29.856604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.414 [2024-09-29 16:45:29.856638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.414 qpair failed and we were unable to recover it. 00:37:29.414 [2024-09-29 16:45:29.856801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.414 [2024-09-29 16:45:29.856848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.414 qpair failed and we were unable to recover it. 
00:37:29.414 [2024-09-29 16:45:29.857011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.414 [2024-09-29 16:45:29.857048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.414 qpair failed and we were unable to recover it. 00:37:29.414 [2024-09-29 16:45:29.857211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.414 [2024-09-29 16:45:29.857264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.414 qpair failed and we were unable to recover it. 00:37:29.414 [2024-09-29 16:45:29.857421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.414 [2024-09-29 16:45:29.857474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.414 qpair failed and we were unable to recover it. 00:37:29.414 [2024-09-29 16:45:29.857586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.414 [2024-09-29 16:45:29.857619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.414 qpair failed and we were unable to recover it. 00:37:29.414 [2024-09-29 16:45:29.857796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.414 [2024-09-29 16:45:29.857831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.414 qpair failed and we were unable to recover it. 
00:37:29.414 [2024-09-29 16:45:29.858001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.414 [2024-09-29 16:45:29.858048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.414 qpair failed and we were unable to recover it. 00:37:29.414 [2024-09-29 16:45:29.858202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.414 [2024-09-29 16:45:29.858236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.414 qpair failed and we were unable to recover it. 00:37:29.414 [2024-09-29 16:45:29.858383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.414 [2024-09-29 16:45:29.858415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.414 qpair failed and we were unable to recover it. 00:37:29.414 [2024-09-29 16:45:29.858535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.414 [2024-09-29 16:45:29.858568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.414 qpair failed and we were unable to recover it. 00:37:29.414 [2024-09-29 16:45:29.858736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.414 [2024-09-29 16:45:29.858770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.414 qpair failed and we were unable to recover it. 
00:37:29.414 [2024-09-29 16:45:29.858882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.414 [2024-09-29 16:45:29.858914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.414 qpair failed and we were unable to recover it. 00:37:29.414 [2024-09-29 16:45:29.859095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.414 [2024-09-29 16:45:29.859131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.414 qpair failed and we were unable to recover it. 00:37:29.414 [2024-09-29 16:45:29.859246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.414 [2024-09-29 16:45:29.859283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.414 qpair failed and we were unable to recover it. 00:37:29.414 [2024-09-29 16:45:29.859455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.414 [2024-09-29 16:45:29.859498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.414 qpair failed and we were unable to recover it. 00:37:29.414 [2024-09-29 16:45:29.859657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.414 [2024-09-29 16:45:29.859724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.414 qpair failed and we were unable to recover it. 
00:37:29.414 [2024-09-29 16:45:29.859867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.414 [2024-09-29 16:45:29.859900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.414 qpair failed and we were unable to recover it. 00:37:29.414 [2024-09-29 16:45:29.860093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.414 [2024-09-29 16:45:29.860130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.414 qpair failed and we were unable to recover it. 00:37:29.414 [2024-09-29 16:45:29.860257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.414 [2024-09-29 16:45:29.860295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.414 qpair failed and we were unable to recover it. 00:37:29.414 [2024-09-29 16:45:29.860485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.414 [2024-09-29 16:45:29.860522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.414 qpair failed and we were unable to recover it. 00:37:29.414 [2024-09-29 16:45:29.860677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.414 [2024-09-29 16:45:29.860725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.414 qpair failed and we were unable to recover it. 
00:37:29.414 [2024-09-29 16:45:29.860878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.414 [2024-09-29 16:45:29.860914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.414 qpair failed and we were unable to recover it. 00:37:29.414 [2024-09-29 16:45:29.861072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.414 [2024-09-29 16:45:29.861124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.414 qpair failed and we were unable to recover it. 00:37:29.414 [2024-09-29 16:45:29.861386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.415 [2024-09-29 16:45:29.861444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.415 qpair failed and we were unable to recover it. 00:37:29.415 [2024-09-29 16:45:29.861612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.415 [2024-09-29 16:45:29.861646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.415 qpair failed and we were unable to recover it. 00:37:29.415 [2024-09-29 16:45:29.861802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.415 [2024-09-29 16:45:29.861836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.415 qpair failed and we were unable to recover it. 
00:37:29.415 [2024-09-29 16:45:29.862008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.415 [2024-09-29 16:45:29.862047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.415 qpair failed and we were unable to recover it. 00:37:29.415 [2024-09-29 16:45:29.862177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.415 [2024-09-29 16:45:29.862215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.415 qpair failed and we were unable to recover it. 00:37:29.415 [2024-09-29 16:45:29.862343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.415 [2024-09-29 16:45:29.862382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.415 qpair failed and we were unable to recover it. 00:37:29.415 [2024-09-29 16:45:29.862560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.415 [2024-09-29 16:45:29.862598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.415 qpair failed and we were unable to recover it. 00:37:29.415 [2024-09-29 16:45:29.862801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.415 [2024-09-29 16:45:29.862849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.415 qpair failed and we were unable to recover it. 
00:37:29.415 [2024-09-29 16:45:29.862988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.415 [2024-09-29 16:45:29.863027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.415 qpair failed and we were unable to recover it. 00:37:29.415 [2024-09-29 16:45:29.863164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.415 [2024-09-29 16:45:29.863216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.415 qpair failed and we were unable to recover it. 00:37:29.415 [2024-09-29 16:45:29.863387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.415 [2024-09-29 16:45:29.863448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.415 qpair failed and we were unable to recover it. 00:37:29.415 [2024-09-29 16:45:29.863590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.415 [2024-09-29 16:45:29.863626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.415 qpair failed and we were unable to recover it. 00:37:29.415 [2024-09-29 16:45:29.863828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.415 [2024-09-29 16:45:29.863862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.415 qpair failed and we were unable to recover it. 
00:37:29.415 [2024-09-29 16:45:29.864021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.415 [2024-09-29 16:45:29.864057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.415 qpair failed and we were unable to recover it. 00:37:29.415 [2024-09-29 16:45:29.864215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.415 [2024-09-29 16:45:29.864250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.415 qpair failed and we were unable to recover it. 00:37:29.415 [2024-09-29 16:45:29.864378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.415 [2024-09-29 16:45:29.864414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.415 qpair failed and we were unable to recover it. 00:37:29.415 [2024-09-29 16:45:29.864563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.415 [2024-09-29 16:45:29.864600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.415 qpair failed and we were unable to recover it. 00:37:29.415 [2024-09-29 16:45:29.864797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.415 [2024-09-29 16:45:29.864831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.415 qpair failed and we were unable to recover it. 
00:37:29.415 [2024-09-29 16:45:29.864996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.415 [2024-09-29 16:45:29.865045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.415 qpair failed and we were unable to recover it. 00:37:29.415 [2024-09-29 16:45:29.865210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.415 [2024-09-29 16:45:29.865250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.415 qpair failed and we were unable to recover it. 00:37:29.415 [2024-09-29 16:45:29.865381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.415 [2024-09-29 16:45:29.865418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.415 qpair failed and we were unable to recover it. 00:37:29.415 [2024-09-29 16:45:29.865587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.415 [2024-09-29 16:45:29.865624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.415 qpair failed and we were unable to recover it. 00:37:29.415 [2024-09-29 16:45:29.865790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.415 [2024-09-29 16:45:29.865824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.415 qpair failed and we were unable to recover it. 
00:37:29.415 [2024-09-29 16:45:29.865980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.415 [2024-09-29 16:45:29.866017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.415 qpair failed and we were unable to recover it. 00:37:29.415 [2024-09-29 16:45:29.866143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.415 [2024-09-29 16:45:29.866180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.415 qpair failed and we were unable to recover it. 00:37:29.415 [2024-09-29 16:45:29.866388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.415 [2024-09-29 16:45:29.866425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.415 qpair failed and we were unable to recover it. 00:37:29.415 [2024-09-29 16:45:29.866582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.415 [2024-09-29 16:45:29.866619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.415 qpair failed and we were unable to recover it. 00:37:29.415 [2024-09-29 16:45:29.866768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.415 [2024-09-29 16:45:29.866804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.415 qpair failed and we were unable to recover it. 
00:37:29.415 [2024-09-29 16:45:29.866992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.415 [2024-09-29 16:45:29.867040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.415 qpair failed and we were unable to recover it. 00:37:29.415 [2024-09-29 16:45:29.867214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.415 [2024-09-29 16:45:29.867267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.415 qpair failed and we were unable to recover it. 00:37:29.415 [2024-09-29 16:45:29.867541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.415 [2024-09-29 16:45:29.867599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.415 qpair failed and we were unable to recover it. 00:37:29.415 [2024-09-29 16:45:29.867723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.415 [2024-09-29 16:45:29.867764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.415 qpair failed and we were unable to recover it. 00:37:29.415 [2024-09-29 16:45:29.867955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.415 [2024-09-29 16:45:29.868007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.415 qpair failed and we were unable to recover it. 
00:37:29.415 [2024-09-29 16:45:29.868226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.415 [2024-09-29 16:45:29.868285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.415 qpair failed and we were unable to recover it. 00:37:29.415 [2024-09-29 16:45:29.868463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.415 [2024-09-29 16:45:29.868516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.415 qpair failed and we were unable to recover it. 00:37:29.415 [2024-09-29 16:45:29.868723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.415 [2024-09-29 16:45:29.868757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.415 qpair failed and we were unable to recover it. 00:37:29.415 [2024-09-29 16:45:29.868940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.415 [2024-09-29 16:45:29.868974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.415 qpair failed and we were unable to recover it. 00:37:29.415 [2024-09-29 16:45:29.869104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.415 [2024-09-29 16:45:29.869141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.416 qpair failed and we were unable to recover it. 
00:37:29.416 [2024-09-29 16:45:29.869348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.416 [2024-09-29 16:45:29.869401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.416 qpair failed and we were unable to recover it. 00:37:29.416 [2024-09-29 16:45:29.869516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.416 [2024-09-29 16:45:29.869551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.416 qpair failed and we were unable to recover it. 00:37:29.416 [2024-09-29 16:45:29.869669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.416 [2024-09-29 16:45:29.869708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.416 qpair failed and we were unable to recover it. 00:37:29.416 [2024-09-29 16:45:29.869851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.416 [2024-09-29 16:45:29.869884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.416 qpair failed and we were unable to recover it. 00:37:29.416 [2024-09-29 16:45:29.870018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.416 [2024-09-29 16:45:29.870055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.416 qpair failed and we were unable to recover it. 
00:37:29.416 [2024-09-29 16:45:29.870229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.416 [2024-09-29 16:45:29.870281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.416 qpair failed and we were unable to recover it. 00:37:29.416 [2024-09-29 16:45:29.870415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.416 [2024-09-29 16:45:29.870453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.416 qpair failed and we were unable to recover it. 00:37:29.416 [2024-09-29 16:45:29.870603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.416 [2024-09-29 16:45:29.870638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.416 qpair failed and we were unable to recover it. 00:37:29.416 [2024-09-29 16:45:29.870792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.416 [2024-09-29 16:45:29.870845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.416 qpair failed and we were unable to recover it. 00:37:29.416 [2024-09-29 16:45:29.871016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.416 [2024-09-29 16:45:29.871069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.416 qpair failed and we were unable to recover it. 
00:37:29.416 [2024-09-29 16:45:29.871281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.416 [2024-09-29 16:45:29.871335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.416 qpair failed and we were unable to recover it. 00:37:29.416 [2024-09-29 16:45:29.871447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.416 [2024-09-29 16:45:29.871481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.416 qpair failed and we were unable to recover it. 00:37:29.416 [2024-09-29 16:45:29.871629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.416 [2024-09-29 16:45:29.871664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.416 qpair failed and we were unable to recover it. 00:37:29.416 [2024-09-29 16:45:29.871844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.416 [2024-09-29 16:45:29.871877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.416 qpair failed and we were unable to recover it. 00:37:29.416 [2024-09-29 16:45:29.872051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.416 [2024-09-29 16:45:29.872085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.416 qpair failed and we were unable to recover it. 
00:37:29.416 [2024-09-29 16:45:29.872274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.416 [2024-09-29 16:45:29.872327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.416 qpair failed and we were unable to recover it. 00:37:29.416 [2024-09-29 16:45:29.872471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.416 [2024-09-29 16:45:29.872504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.416 qpair failed and we were unable to recover it. 00:37:29.416 [2024-09-29 16:45:29.872653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.416 [2024-09-29 16:45:29.872697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.416 qpair failed and we were unable to recover it. 00:37:29.416 [2024-09-29 16:45:29.872839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.416 [2024-09-29 16:45:29.872873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.416 qpair failed and we were unable to recover it. 00:37:29.416 [2024-09-29 16:45:29.873014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.416 [2024-09-29 16:45:29.873047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.416 qpair failed and we were unable to recover it. 
00:37:29.416 [2024-09-29 16:45:29.873164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.416 [2024-09-29 16:45:29.873200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.416 qpair failed and we were unable to recover it. 00:37:29.416 [2024-09-29 16:45:29.873369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.416 [2024-09-29 16:45:29.873403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.416 qpair failed and we were unable to recover it. 00:37:29.416 [2024-09-29 16:45:29.873567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.416 [2024-09-29 16:45:29.873600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.416 qpair failed and we were unable to recover it. 00:37:29.416 [2024-09-29 16:45:29.873762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.416 [2024-09-29 16:45:29.873814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.416 qpair failed and we were unable to recover it. 00:37:29.416 [2024-09-29 16:45:29.873928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.416 [2024-09-29 16:45:29.873963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.416 qpair failed and we were unable to recover it. 
00:37:29.416 [2024-09-29 16:45:29.874102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.416 [2024-09-29 16:45:29.874153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.416 qpair failed and we were unable to recover it. 00:37:29.416 [2024-09-29 16:45:29.874407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.416 [2024-09-29 16:45:29.874464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.416 qpair failed and we were unable to recover it. 00:37:29.416 [2024-09-29 16:45:29.874601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.416 [2024-09-29 16:45:29.874635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.416 qpair failed and we were unable to recover it. 00:37:29.416 [2024-09-29 16:45:29.874770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.416 [2024-09-29 16:45:29.874822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.416 qpair failed and we were unable to recover it. 00:37:29.416 [2024-09-29 16:45:29.875001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.416 [2024-09-29 16:45:29.875078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.416 qpair failed and we were unable to recover it. 
00:37:29.416 [2024-09-29 16:45:29.875208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.416 [2024-09-29 16:45:29.875247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.416 qpair failed and we were unable to recover it. 00:37:29.416 [2024-09-29 16:45:29.875401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.416 [2024-09-29 16:45:29.875435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.416 qpair failed and we were unable to recover it. 00:37:29.416 [2024-09-29 16:45:29.875569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.416 [2024-09-29 16:45:29.875603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.417 qpair failed and we were unable to recover it. 00:37:29.417 [2024-09-29 16:45:29.875762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.417 [2024-09-29 16:45:29.875819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.417 qpair failed and we were unable to recover it. 00:37:29.417 [2024-09-29 16:45:29.875979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.417 [2024-09-29 16:45:29.876032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.417 qpair failed and we were unable to recover it. 
00:37:29.417 [2024-09-29 16:45:29.876236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.417 [2024-09-29 16:45:29.876288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.417 qpair failed and we were unable to recover it. 00:37:29.417 [2024-09-29 16:45:29.876458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.417 [2024-09-29 16:45:29.876492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.417 qpair failed and we were unable to recover it. 00:37:29.417 [2024-09-29 16:45:29.876636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.417 [2024-09-29 16:45:29.876670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.417 qpair failed and we were unable to recover it. 00:37:29.417 [2024-09-29 16:45:29.876816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.417 [2024-09-29 16:45:29.876868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.417 qpair failed and we were unable to recover it. 00:37:29.417 [2024-09-29 16:45:29.877019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.417 [2024-09-29 16:45:29.877069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.417 qpair failed and we were unable to recover it. 
00:37:29.417 [2024-09-29 16:45:29.877196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.417 [2024-09-29 16:45:29.877235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.417 qpair failed and we were unable to recover it. 00:37:29.417 [2024-09-29 16:45:29.877367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.417 [2024-09-29 16:45:29.877402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.417 qpair failed and we were unable to recover it. 00:37:29.417 [2024-09-29 16:45:29.877541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.417 [2024-09-29 16:45:29.877574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.417 qpair failed and we were unable to recover it. 00:37:29.417 [2024-09-29 16:45:29.877704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.417 [2024-09-29 16:45:29.877751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.417 qpair failed and we were unable to recover it. 00:37:29.417 [2024-09-29 16:45:29.877879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.417 [2024-09-29 16:45:29.877915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.417 qpair failed and we were unable to recover it. 
00:37:29.417 [2024-09-29 16:45:29.878057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.417 [2024-09-29 16:45:29.878092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.417 qpair failed and we were unable to recover it. 00:37:29.417 [2024-09-29 16:45:29.878260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.417 [2024-09-29 16:45:29.878293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.417 qpair failed and we were unable to recover it. 00:37:29.417 [2024-09-29 16:45:29.878436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.417 [2024-09-29 16:45:29.878470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.417 qpair failed and we were unable to recover it. 00:37:29.417 [2024-09-29 16:45:29.878593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.417 [2024-09-29 16:45:29.878626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.417 qpair failed and we were unable to recover it. 00:37:29.417 [2024-09-29 16:45:29.878773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.417 [2024-09-29 16:45:29.878808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.417 qpair failed and we were unable to recover it. 
00:37:29.417 [2024-09-29 16:45:29.878980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.417 [2024-09-29 16:45:29.879021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.417 qpair failed and we were unable to recover it.
00:37:29.417 [2024-09-29 16:45:29.879146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.417 [2024-09-29 16:45:29.879184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.417 qpair failed and we were unable to recover it.
00:37:29.417 [2024-09-29 16:45:29.879328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.417 [2024-09-29 16:45:29.879364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.417 qpair failed and we were unable to recover it.
00:37:29.417 [2024-09-29 16:45:29.879524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.417 [2024-09-29 16:45:29.879559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.417 qpair failed and we were unable to recover it.
00:37:29.417 [2024-09-29 16:45:29.879728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.417 [2024-09-29 16:45:29.879762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.417 qpair failed and we were unable to recover it.
00:37:29.417 [2024-09-29 16:45:29.879883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.417 [2024-09-29 16:45:29.879917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.417 qpair failed and we were unable to recover it.
00:37:29.417 [2024-09-29 16:45:29.880117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.417 [2024-09-29 16:45:29.880155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.417 qpair failed and we were unable to recover it.
00:37:29.417 [2024-09-29 16:45:29.880313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.417 [2024-09-29 16:45:29.880351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.417 qpair failed and we were unable to recover it.
00:37:29.417 [2024-09-29 16:45:29.880477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.417 [2024-09-29 16:45:29.880515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.417 qpair failed and we were unable to recover it.
00:37:29.417 [2024-09-29 16:45:29.880709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.417 [2024-09-29 16:45:29.880757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.417 qpair failed and we were unable to recover it.
00:37:29.417 [2024-09-29 16:45:29.880937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.417 [2024-09-29 16:45:29.881004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.417 qpair failed and we were unable to recover it.
00:37:29.417 [2024-09-29 16:45:29.881254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.417 [2024-09-29 16:45:29.881294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.417 qpair failed and we were unable to recover it.
00:37:29.417 [2024-09-29 16:45:29.881431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.417 [2024-09-29 16:45:29.881469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.417 qpair failed and we were unable to recover it.
00:37:29.417 [2024-09-29 16:45:29.881622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.417 [2024-09-29 16:45:29.881659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.417 qpair failed and we were unable to recover it.
00:37:29.417 [2024-09-29 16:45:29.881836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.417 [2024-09-29 16:45:29.881869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.417 qpair failed and we were unable to recover it.
00:37:29.417 [2024-09-29 16:45:29.882042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.417 [2024-09-29 16:45:29.882119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.417 qpair failed and we were unable to recover it.
00:37:29.417 [2024-09-29 16:45:29.882281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.417 [2024-09-29 16:45:29.882320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.417 qpair failed and we were unable to recover it.
00:37:29.417 [2024-09-29 16:45:29.882573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.418 [2024-09-29 16:45:29.882628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.418 qpair failed and we were unable to recover it.
00:37:29.418 [2024-09-29 16:45:29.882846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.418 [2024-09-29 16:45:29.882881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.418 qpair failed and we were unable to recover it.
00:37:29.418 [2024-09-29 16:45:29.883037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.418 [2024-09-29 16:45:29.883086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.418 qpair failed and we were unable to recover it.
00:37:29.418 [2024-09-29 16:45:29.883325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.418 [2024-09-29 16:45:29.883358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.418 qpair failed and we were unable to recover it.
00:37:29.418 [2024-09-29 16:45:29.883510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.418 [2024-09-29 16:45:29.883543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.418 qpair failed and we were unable to recover it.
00:37:29.418 [2024-09-29 16:45:29.883658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.418 [2024-09-29 16:45:29.883702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.418 qpair failed and we were unable to recover it.
00:37:29.418 [2024-09-29 16:45:29.883835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.418 [2024-09-29 16:45:29.883878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.418 qpair failed and we were unable to recover it.
00:37:29.418 [2024-09-29 16:45:29.884065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.418 [2024-09-29 16:45:29.884104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.418 qpair failed and we were unable to recover it.
00:37:29.418 [2024-09-29 16:45:29.884246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.418 [2024-09-29 16:45:29.884284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.418 qpair failed and we were unable to recover it.
00:37:29.418 [2024-09-29 16:45:29.884492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.418 [2024-09-29 16:45:29.884542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.418 qpair failed and we were unable to recover it.
00:37:29.418 [2024-09-29 16:45:29.884687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.418 [2024-09-29 16:45:29.884721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.418 qpair failed and we were unable to recover it.
00:37:29.418 [2024-09-29 16:45:29.884858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.418 [2024-09-29 16:45:29.884891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.418 qpair failed and we were unable to recover it.
00:37:29.418 [2024-09-29 16:45:29.885059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.418 [2024-09-29 16:45:29.885095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.418 qpair failed and we were unable to recover it.
00:37:29.418 [2024-09-29 16:45:29.885281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.418 [2024-09-29 16:45:29.885320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.418 qpair failed and we were unable to recover it.
00:37:29.418 [2024-09-29 16:45:29.885504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.418 [2024-09-29 16:45:29.885541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.418 qpair failed and we were unable to recover it.
00:37:29.418 [2024-09-29 16:45:29.885726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.418 [2024-09-29 16:45:29.885774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.418 qpair failed and we were unable to recover it.
00:37:29.418 [2024-09-29 16:45:29.885902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.418 [2024-09-29 16:45:29.885955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.418 qpair failed and we were unable to recover it.
00:37:29.418 [2024-09-29 16:45:29.886086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.418 [2024-09-29 16:45:29.886122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.418 qpair failed and we were unable to recover it.
00:37:29.418 [2024-09-29 16:45:29.886300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.418 [2024-09-29 16:45:29.886370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.418 qpair failed and we were unable to recover it.
00:37:29.418 [2024-09-29 16:45:29.886530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.418 [2024-09-29 16:45:29.886566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.418 qpair failed and we were unable to recover it.
00:37:29.418 [2024-09-29 16:45:29.886736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.418 [2024-09-29 16:45:29.886769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.418 qpair failed and we were unable to recover it.
00:37:29.418 [2024-09-29 16:45:29.886911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.418 [2024-09-29 16:45:29.886965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.418 qpair failed and we were unable to recover it.
00:37:29.418 [2024-09-29 16:45:29.887104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.418 [2024-09-29 16:45:29.887156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.418 qpair failed and we were unable to recover it.
00:37:29.418 [2024-09-29 16:45:29.887302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.418 [2024-09-29 16:45:29.887338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.418 qpair failed and we were unable to recover it.
00:37:29.418 [2024-09-29 16:45:29.887504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.418 [2024-09-29 16:45:29.887540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.418 qpair failed and we were unable to recover it.
00:37:29.418 [2024-09-29 16:45:29.887759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.418 [2024-09-29 16:45:29.887793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.418 qpair failed and we were unable to recover it.
00:37:29.418 [2024-09-29 16:45:29.887928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.418 [2024-09-29 16:45:29.887962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.418 qpair failed and we were unable to recover it.
00:37:29.418 [2024-09-29 16:45:29.888139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.418 [2024-09-29 16:45:29.888175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.418 qpair failed and we were unable to recover it.
00:37:29.418 [2024-09-29 16:45:29.888307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.418 [2024-09-29 16:45:29.888358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.418 qpair failed and we were unable to recover it.
00:37:29.418 [2024-09-29 16:45:29.888483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.418 [2024-09-29 16:45:29.888520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.418 qpair failed and we were unable to recover it.
00:37:29.418 [2024-09-29 16:45:29.888687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.418 [2024-09-29 16:45:29.888737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.418 qpair failed and we were unable to recover it.
00:37:29.418 [2024-09-29 16:45:29.888892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.418 [2024-09-29 16:45:29.888929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.418 qpair failed and we were unable to recover it.
00:37:29.418 [2024-09-29 16:45:29.889066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.418 [2024-09-29 16:45:29.889102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.418 qpair failed and we were unable to recover it.
00:37:29.419 [2024-09-29 16:45:29.889269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.419 [2024-09-29 16:45:29.889306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.419 qpair failed and we were unable to recover it.
00:37:29.419 [2024-09-29 16:45:29.889455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.419 [2024-09-29 16:45:29.889504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.419 qpair failed and we were unable to recover it.
00:37:29.419 [2024-09-29 16:45:29.889722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.419 [2024-09-29 16:45:29.889759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.419 qpair failed and we were unable to recover it.
00:37:29.419 [2024-09-29 16:45:29.889928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.419 [2024-09-29 16:45:29.889994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.419 qpair failed and we were unable to recover it.
00:37:29.419 [2024-09-29 16:45:29.890169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.419 [2024-09-29 16:45:29.890207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.419 qpair failed and we were unable to recover it.
00:37:29.419 [2024-09-29 16:45:29.890362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.419 [2024-09-29 16:45:29.890399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.419 qpair failed and we were unable to recover it.
00:37:29.419 [2024-09-29 16:45:29.890535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.419 [2024-09-29 16:45:29.890568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.419 qpair failed and we were unable to recover it.
00:37:29.419 [2024-09-29 16:45:29.890710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.419 [2024-09-29 16:45:29.890744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.419 qpair failed and we were unable to recover it.
00:37:29.419 [2024-09-29 16:45:29.890884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.419 [2024-09-29 16:45:29.890917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.419 qpair failed and we were unable to recover it.
00:37:29.419 [2024-09-29 16:45:29.891073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.419 [2024-09-29 16:45:29.891124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.419 qpair failed and we were unable to recover it.
00:37:29.419 [2024-09-29 16:45:29.891278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.419 [2024-09-29 16:45:29.891314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.419 qpair failed and we were unable to recover it.
00:37:29.419 [2024-09-29 16:45:29.891441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.419 [2024-09-29 16:45:29.891477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.419 qpair failed and we were unable to recover it.
00:37:29.419 [2024-09-29 16:45:29.891658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.419 [2024-09-29 16:45:29.891723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.419 qpair failed and we were unable to recover it.
00:37:29.419 [2024-09-29 16:45:29.891861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.419 [2024-09-29 16:45:29.891914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.419 qpair failed and we were unable to recover it.
00:37:29.419 [2024-09-29 16:45:29.892087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.419 [2024-09-29 16:45:29.892126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.419 qpair failed and we were unable to recover it.
00:37:29.419 [2024-09-29 16:45:29.892275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.419 [2024-09-29 16:45:29.892313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.419 qpair failed and we were unable to recover it.
00:37:29.419 [2024-09-29 16:45:29.892443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.419 [2024-09-29 16:45:29.892496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.419 qpair failed and we were unable to recover it.
00:37:29.419 [2024-09-29 16:45:29.892655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.419 [2024-09-29 16:45:29.892705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.419 qpair failed and we were unable to recover it.
00:37:29.419 [2024-09-29 16:45:29.892845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.419 [2024-09-29 16:45:29.892880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.419 qpair failed and we were unable to recover it.
00:37:29.419 [2024-09-29 16:45:29.893019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.419 [2024-09-29 16:45:29.893053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.419 qpair failed and we were unable to recover it.
00:37:29.419 [2024-09-29 16:45:29.893239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.419 [2024-09-29 16:45:29.893276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.419 qpair failed and we were unable to recover it.
00:37:29.419 [2024-09-29 16:45:29.893538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.419 [2024-09-29 16:45:29.893578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.419 qpair failed and we were unable to recover it.
00:37:29.419 [2024-09-29 16:45:29.893738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.419 [2024-09-29 16:45:29.893786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.419 qpair failed and we were unable to recover it.
00:37:29.419 [2024-09-29 16:45:29.893930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.419 [2024-09-29 16:45:29.893966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.419 qpair failed and we were unable to recover it.
00:37:29.419 [2024-09-29 16:45:29.894123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.419 [2024-09-29 16:45:29.894176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.419 qpair failed and we were unable to recover it.
00:37:29.419 [2024-09-29 16:45:29.894315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.419 [2024-09-29 16:45:29.894370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.419 qpair failed and we were unable to recover it.
00:37:29.419 [2024-09-29 16:45:29.894540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.419 [2024-09-29 16:45:29.894574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.419 qpair failed and we were unable to recover it.
00:37:29.419 [2024-09-29 16:45:29.894729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.419 [2024-09-29 16:45:29.894764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.419 qpair failed and we were unable to recover it.
00:37:29.419 [2024-09-29 16:45:29.894886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.419 [2024-09-29 16:45:29.894938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.419 qpair failed and we were unable to recover it.
00:37:29.419 [2024-09-29 16:45:29.895102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.419 [2024-09-29 16:45:29.895136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.419 qpair failed and we were unable to recover it.
00:37:29.419 [2024-09-29 16:45:29.895270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.419 [2024-09-29 16:45:29.895307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.419 qpair failed and we were unable to recover it.
00:37:29.419 [2024-09-29 16:45:29.895467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.419 [2024-09-29 16:45:29.895504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.419 qpair failed and we were unable to recover it.
00:37:29.419 [2024-09-29 16:45:29.895632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.419 [2024-09-29 16:45:29.895669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.419 qpair failed and we were unable to recover it.
00:37:29.419 [2024-09-29 16:45:29.895827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.419 [2024-09-29 16:45:29.895882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.419 qpair failed and we were unable to recover it.
00:37:29.419 [2024-09-29 16:45:29.896048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.419 [2024-09-29 16:45:29.896098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.419 qpair failed and we were unable to recover it.
00:37:29.419 [2024-09-29 16:45:29.896258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.419 [2024-09-29 16:45:29.896310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.419 qpair failed and we were unable to recover it.
00:37:29.419 [2024-09-29 16:45:29.896418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.419 [2024-09-29 16:45:29.896452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.419 qpair failed and we were unable to recover it.
00:37:29.419 [2024-09-29 16:45:29.896585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.419 [2024-09-29 16:45:29.896633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.419 qpair failed and we were unable to recover it.
00:37:29.419 [2024-09-29 16:45:29.896789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.419 [2024-09-29 16:45:29.896825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.419 qpair failed and we were unable to recover it.
00:37:29.419 [2024-09-29 16:45:29.897075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.419 [2024-09-29 16:45:29.897134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.419 qpair failed and we were unable to recover it.
00:37:29.419 [2024-09-29 16:45:29.897307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.419 [2024-09-29 16:45:29.897386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.419 qpair failed and we were unable to recover it.
00:37:29.419 [2024-09-29 16:45:29.897540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.420 [2024-09-29 16:45:29.897578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.420 qpair failed and we were unable to recover it.
00:37:29.420 [2024-09-29 16:45:29.897756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.420 [2024-09-29 16:45:29.897791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.420 qpair failed and we were unable to recover it.
00:37:29.420 [2024-09-29 16:45:29.897979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.420 [2024-09-29 16:45:29.898016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.420 qpair failed and we were unable to recover it.
00:37:29.420 [2024-09-29 16:45:29.898196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.420 [2024-09-29 16:45:29.898232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.420 qpair failed and we were unable to recover it.
00:37:29.420 [2024-09-29 16:45:29.898459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.420 [2024-09-29 16:45:29.898497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.420 qpair failed and we were unable to recover it. 00:37:29.420 [2024-09-29 16:45:29.898655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.420 [2024-09-29 16:45:29.898712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.420 qpair failed and we were unable to recover it. 00:37:29.420 [2024-09-29 16:45:29.898869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.420 [2024-09-29 16:45:29.898905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.420 qpair failed and we were unable to recover it. 00:37:29.420 [2024-09-29 16:45:29.899075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.420 [2024-09-29 16:45:29.899128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.420 qpair failed and we were unable to recover it. 00:37:29.420 [2024-09-29 16:45:29.899351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.420 [2024-09-29 16:45:29.899403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.420 qpair failed and we were unable to recover it. 
00:37:29.420 [2024-09-29 16:45:29.899545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.420 [2024-09-29 16:45:29.899579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.420 qpair failed and we were unable to recover it. 00:37:29.420 [2024-09-29 16:45:29.899721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.420 [2024-09-29 16:45:29.899756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.420 qpair failed and we were unable to recover it. 00:37:29.420 [2024-09-29 16:45:29.899894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.420 [2024-09-29 16:45:29.899947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.420 qpair failed and we were unable to recover it. 00:37:29.420 [2024-09-29 16:45:29.900087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.420 [2024-09-29 16:45:29.900127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.420 qpair failed and we were unable to recover it. 00:37:29.420 [2024-09-29 16:45:29.900296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.420 [2024-09-29 16:45:29.900330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.420 qpair failed and we were unable to recover it. 
00:37:29.420 [2024-09-29 16:45:29.900471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.420 [2024-09-29 16:45:29.900505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.420 qpair failed and we were unable to recover it. 00:37:29.420 [2024-09-29 16:45:29.900617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.420 [2024-09-29 16:45:29.900651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.420 qpair failed and we were unable to recover it. 00:37:29.420 [2024-09-29 16:45:29.900776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.420 [2024-09-29 16:45:29.900810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.420 qpair failed and we were unable to recover it. 00:37:29.420 [2024-09-29 16:45:29.900925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.420 [2024-09-29 16:45:29.900959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.420 qpair failed and we were unable to recover it. 00:37:29.420 [2024-09-29 16:45:29.901126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.420 [2024-09-29 16:45:29.901160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.420 qpair failed and we were unable to recover it. 
00:37:29.420 [2024-09-29 16:45:29.901271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.420 [2024-09-29 16:45:29.901305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.420 qpair failed and we were unable to recover it. 00:37:29.420 [2024-09-29 16:45:29.901418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.420 [2024-09-29 16:45:29.901453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.420 qpair failed and we were unable to recover it. 00:37:29.420 [2024-09-29 16:45:29.901609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.420 [2024-09-29 16:45:29.901657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.420 qpair failed and we were unable to recover it. 00:37:29.420 [2024-09-29 16:45:29.901793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.420 [2024-09-29 16:45:29.901828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.420 qpair failed and we were unable to recover it. 00:37:29.420 [2024-09-29 16:45:29.901969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.420 [2024-09-29 16:45:29.902021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.420 qpair failed and we were unable to recover it. 
00:37:29.420 [2024-09-29 16:45:29.902202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.420 [2024-09-29 16:45:29.902240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.420 qpair failed and we were unable to recover it. 00:37:29.420 [2024-09-29 16:45:29.902397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.420 [2024-09-29 16:45:29.902434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.420 qpair failed and we were unable to recover it. 00:37:29.420 [2024-09-29 16:45:29.902604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.420 [2024-09-29 16:45:29.902649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.420 qpair failed and we were unable to recover it. 00:37:29.420 [2024-09-29 16:45:29.902850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.420 [2024-09-29 16:45:29.902904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.420 qpair failed and we were unable to recover it. 00:37:29.420 [2024-09-29 16:45:29.903032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.420 [2024-09-29 16:45:29.903084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.420 qpair failed and we were unable to recover it. 
00:37:29.420 [2024-09-29 16:45:29.903253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.420 [2024-09-29 16:45:29.903291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.420 qpair failed and we were unable to recover it. 00:37:29.712 [2024-09-29 16:45:29.903450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.712 [2024-09-29 16:45:29.903484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.712 qpair failed and we were unable to recover it. 00:37:29.712 [2024-09-29 16:45:29.903605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.712 [2024-09-29 16:45:29.903640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.712 qpair failed and we were unable to recover it. 00:37:29.712 [2024-09-29 16:45:29.903813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.712 [2024-09-29 16:45:29.903854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.712 qpair failed and we were unable to recover it. 00:37:29.712 [2024-09-29 16:45:29.904025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.712 [2024-09-29 16:45:29.904058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.712 qpair failed and we were unable to recover it. 
00:37:29.712 [2024-09-29 16:45:29.904316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.712 [2024-09-29 16:45:29.904375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.712 qpair failed and we were unable to recover it. 00:37:29.712 [2024-09-29 16:45:29.904518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.712 [2024-09-29 16:45:29.904552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.712 qpair failed and we were unable to recover it. 00:37:29.712 [2024-09-29 16:45:29.904677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.712 [2024-09-29 16:45:29.904712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.712 qpair failed and we were unable to recover it. 00:37:29.712 [2024-09-29 16:45:29.904849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.712 [2024-09-29 16:45:29.904884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.712 qpair failed and we were unable to recover it. 00:37:29.712 [2024-09-29 16:45:29.905042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.712 [2024-09-29 16:45:29.905097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.712 qpair failed and we were unable to recover it. 
00:37:29.712 [2024-09-29 16:45:29.905261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.712 [2024-09-29 16:45:29.905314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.712 qpair failed and we were unable to recover it. 00:37:29.712 [2024-09-29 16:45:29.905474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.712 [2024-09-29 16:45:29.905527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.712 qpair failed and we were unable to recover it. 00:37:29.712 [2024-09-29 16:45:29.905640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.712 [2024-09-29 16:45:29.905682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.712 qpair failed and we were unable to recover it. 00:37:29.712 [2024-09-29 16:45:29.905877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.712 [2024-09-29 16:45:29.905930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.712 qpair failed and we were unable to recover it. 00:37:29.712 [2024-09-29 16:45:29.906063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.712 [2024-09-29 16:45:29.906103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.712 qpair failed and we were unable to recover it. 
00:37:29.712 [2024-09-29 16:45:29.906244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.712 [2024-09-29 16:45:29.906296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.712 qpair failed and we were unable to recover it. 00:37:29.712 [2024-09-29 16:45:29.906457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.712 [2024-09-29 16:45:29.906490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.712 qpair failed and we were unable to recover it. 00:37:29.712 [2024-09-29 16:45:29.906632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.712 [2024-09-29 16:45:29.906665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.712 qpair failed and we were unable to recover it. 00:37:29.712 [2024-09-29 16:45:29.906792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.712 [2024-09-29 16:45:29.906826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.712 qpair failed and we were unable to recover it. 00:37:29.712 [2024-09-29 16:45:29.906991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.712 [2024-09-29 16:45:29.907029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.712 qpair failed and we were unable to recover it. 
00:37:29.712 [2024-09-29 16:45:29.907191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.712 [2024-09-29 16:45:29.907229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.712 qpair failed and we were unable to recover it. 00:37:29.712 [2024-09-29 16:45:29.907353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.712 [2024-09-29 16:45:29.907390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.712 qpair failed and we were unable to recover it. 00:37:29.712 [2024-09-29 16:45:29.907518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.712 [2024-09-29 16:45:29.907554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.712 qpair failed and we were unable to recover it. 00:37:29.712 [2024-09-29 16:45:29.907729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.712 [2024-09-29 16:45:29.907768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.712 qpair failed and we were unable to recover it. 00:37:29.712 [2024-09-29 16:45:29.907913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.712 [2024-09-29 16:45:29.907953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.712 qpair failed and we were unable to recover it. 
00:37:29.712 [2024-09-29 16:45:29.908075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.712 [2024-09-29 16:45:29.908115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.712 qpair failed and we were unable to recover it. 00:37:29.712 [2024-09-29 16:45:29.908273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.712 [2024-09-29 16:45:29.908311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.712 qpair failed and we were unable to recover it. 00:37:29.712 [2024-09-29 16:45:29.908438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.712 [2024-09-29 16:45:29.908475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.712 qpair failed and we were unable to recover it. 00:37:29.712 [2024-09-29 16:45:29.908618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.712 [2024-09-29 16:45:29.908653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.712 qpair failed and we were unable to recover it. 00:37:29.712 [2024-09-29 16:45:29.908808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.712 [2024-09-29 16:45:29.908842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.712 qpair failed and we were unable to recover it. 
00:37:29.712 [2024-09-29 16:45:29.908955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.712 [2024-09-29 16:45:29.908988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.712 qpair failed and we were unable to recover it. 00:37:29.712 [2024-09-29 16:45:29.909102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.713 [2024-09-29 16:45:29.909136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.713 qpair failed and we were unable to recover it. 00:37:29.713 [2024-09-29 16:45:29.909373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.713 [2024-09-29 16:45:29.909433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.713 qpair failed and we were unable to recover it. 00:37:29.713 [2024-09-29 16:45:29.909588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.713 [2024-09-29 16:45:29.909624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.713 qpair failed and we were unable to recover it. 00:37:29.713 [2024-09-29 16:45:29.909807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.713 [2024-09-29 16:45:29.909842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.713 qpair failed and we were unable to recover it. 
00:37:29.713 [2024-09-29 16:45:29.909953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.713 [2024-09-29 16:45:29.909986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.713 qpair failed and we were unable to recover it. 00:37:29.713 [2024-09-29 16:45:29.910162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.713 [2024-09-29 16:45:29.910221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.713 qpair failed and we were unable to recover it. 00:37:29.713 [2024-09-29 16:45:29.910458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.713 [2024-09-29 16:45:29.910495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.713 qpair failed and we were unable to recover it. 00:37:29.713 [2024-09-29 16:45:29.910656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.713 [2024-09-29 16:45:29.910719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.713 qpair failed and we were unable to recover it. 00:37:29.713 [2024-09-29 16:45:29.910882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.713 [2024-09-29 16:45:29.910930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.713 qpair failed and we were unable to recover it. 
00:37:29.713 [2024-09-29 16:45:29.911100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.713 [2024-09-29 16:45:29.911159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.713 qpair failed and we were unable to recover it. 00:37:29.713 [2024-09-29 16:45:29.911328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.713 [2024-09-29 16:45:29.911379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.713 qpair failed and we were unable to recover it. 00:37:29.713 [2024-09-29 16:45:29.911491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.713 [2024-09-29 16:45:29.911525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.713 qpair failed and we were unable to recover it. 00:37:29.713 [2024-09-29 16:45:29.911667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.713 [2024-09-29 16:45:29.911714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.713 qpair failed and we were unable to recover it. 00:37:29.713 [2024-09-29 16:45:29.911859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.713 [2024-09-29 16:45:29.911906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.713 qpair failed and we were unable to recover it. 
00:37:29.713 [2024-09-29 16:45:29.912086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.713 [2024-09-29 16:45:29.912121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.713 qpair failed and we were unable to recover it. 00:37:29.713 [2024-09-29 16:45:29.912319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.713 [2024-09-29 16:45:29.912367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.713 qpair failed and we were unable to recover it. 00:37:29.713 [2024-09-29 16:45:29.912491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.713 [2024-09-29 16:45:29.912527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.713 qpair failed and we were unable to recover it. 00:37:29.713 [2024-09-29 16:45:29.912677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.713 [2024-09-29 16:45:29.912713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.713 qpair failed and we were unable to recover it. 00:37:29.713 [2024-09-29 16:45:29.912835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.713 [2024-09-29 16:45:29.912868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.713 qpair failed and we were unable to recover it. 
00:37:29.713 [2024-09-29 16:45:29.913168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.713 [2024-09-29 16:45:29.913228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.713 qpair failed and we were unable to recover it. 00:37:29.713 [2024-09-29 16:45:29.913504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.713 [2024-09-29 16:45:29.913569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.713 qpair failed and we were unable to recover it. 00:37:29.713 [2024-09-29 16:45:29.913714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.713 [2024-09-29 16:45:29.913749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.713 qpair failed and we were unable to recover it. 00:37:29.713 [2024-09-29 16:45:29.913905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.713 [2024-09-29 16:45:29.913961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.713 qpair failed and we were unable to recover it. 00:37:29.713 [2024-09-29 16:45:29.914074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.713 [2024-09-29 16:45:29.914108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.713 qpair failed and we were unable to recover it. 
00:37:29.713 [2024-09-29 16:45:29.914224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.713 [2024-09-29 16:45:29.914259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.713 qpair failed and we were unable to recover it. 00:37:29.713 [2024-09-29 16:45:29.914376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.713 [2024-09-29 16:45:29.914412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.713 qpair failed and we were unable to recover it. 00:37:29.713 [2024-09-29 16:45:29.914559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.713 [2024-09-29 16:45:29.914594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.713 qpair failed and we were unable to recover it. 00:37:29.713 [2024-09-29 16:45:29.914738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.713 [2024-09-29 16:45:29.914786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.713 qpair failed and we were unable to recover it. 00:37:29.713 [2024-09-29 16:45:29.914908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.713 [2024-09-29 16:45:29.914943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.713 qpair failed and we were unable to recover it. 
00:37:29.713 [2024-09-29 16:45:29.915081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.713 [2024-09-29 16:45:29.915115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.713 qpair failed and we were unable to recover it. 00:37:29.713 [2024-09-29 16:45:29.915281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.713 [2024-09-29 16:45:29.915314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.713 qpair failed and we were unable to recover it. 00:37:29.713 [2024-09-29 16:45:29.915423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.713 [2024-09-29 16:45:29.915458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.713 qpair failed and we were unable to recover it. 00:37:29.713 [2024-09-29 16:45:29.915624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.713 [2024-09-29 16:45:29.915682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.713 qpair failed and we were unable to recover it. 00:37:29.713 [2024-09-29 16:45:29.915848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.713 [2024-09-29 16:45:29.915888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.714 qpair failed and we were unable to recover it. 
00:37:29.714 [2024-09-29 16:45:29.916052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.714 [2024-09-29 16:45:29.916089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.714 qpair failed and we were unable to recover it. 00:37:29.714 [2024-09-29 16:45:29.916238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.714 [2024-09-29 16:45:29.916308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.714 qpair failed and we were unable to recover it. 00:37:29.714 [2024-09-29 16:45:29.916449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.714 [2024-09-29 16:45:29.916483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.714 qpair failed and we were unable to recover it. 00:37:29.714 [2024-09-29 16:45:29.916624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.714 [2024-09-29 16:45:29.916658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.714 qpair failed and we were unable to recover it. 00:37:29.714 [2024-09-29 16:45:29.916838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.714 [2024-09-29 16:45:29.916871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.714 qpair failed and we were unable to recover it. 
00:37:29.714 [2024-09-29 16:45:29.917011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.714 [2024-09-29 16:45:29.917045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.714 qpair failed and we were unable to recover it.
00:37:29.714 [2024-09-29 16:45:29.917149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.714 [2024-09-29 16:45:29.917183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.714 qpair failed and we were unable to recover it.
00:37:29.714 [2024-09-29 16:45:29.917338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.714 [2024-09-29 16:45:29.917375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.714 qpair failed and we were unable to recover it.
00:37:29.714 [2024-09-29 16:45:29.917544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.714 [2024-09-29 16:45:29.917582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.714 qpair failed and we were unable to recover it.
00:37:29.714 [2024-09-29 16:45:29.917743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.714 [2024-09-29 16:45:29.917790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.714 qpair failed and we were unable to recover it.
00:37:29.714 [2024-09-29 16:45:29.917944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.714 [2024-09-29 16:45:29.917981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.714 qpair failed and we were unable to recover it.
00:37:29.714 [2024-09-29 16:45:29.918290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.714 [2024-09-29 16:45:29.918362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.714 qpair failed and we were unable to recover it.
00:37:29.714 [2024-09-29 16:45:29.918533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.714 [2024-09-29 16:45:29.918573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.714 qpair failed and we were unable to recover it.
00:37:29.714 [2024-09-29 16:45:29.918739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.714 [2024-09-29 16:45:29.918774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.714 qpair failed and we were unable to recover it.
00:37:29.714 [2024-09-29 16:45:29.918921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.714 [2024-09-29 16:45:29.918955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.714 qpair failed and we were unable to recover it.
00:37:29.714 [2024-09-29 16:45:29.919068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.714 [2024-09-29 16:45:29.919121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.714 qpair failed and we were unable to recover it.
00:37:29.714 [2024-09-29 16:45:29.919392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.714 [2024-09-29 16:45:29.919483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.714 qpair failed and we were unable to recover it.
00:37:29.714 [2024-09-29 16:45:29.919658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.714 [2024-09-29 16:45:29.919705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.714 qpair failed and we were unable to recover it.
00:37:29.714 [2024-09-29 16:45:29.919826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.714 [2024-09-29 16:45:29.919873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.714 qpair failed and we were unable to recover it.
00:37:29.714 [2024-09-29 16:45:29.920046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.714 [2024-09-29 16:45:29.920080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.714 qpair failed and we were unable to recover it.
00:37:29.714 [2024-09-29 16:45:29.920246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.714 [2024-09-29 16:45:29.920312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.714 qpair failed and we were unable to recover it.
00:37:29.714 [2024-09-29 16:45:29.920467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.714 [2024-09-29 16:45:29.920504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.714 qpair failed and we were unable to recover it.
00:37:29.714 [2024-09-29 16:45:29.920655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.714 [2024-09-29 16:45:29.920706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.714 qpair failed and we were unable to recover it.
00:37:29.714 [2024-09-29 16:45:29.920856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.714 [2024-09-29 16:45:29.920904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.714 qpair failed and we were unable to recover it.
00:37:29.714 [2024-09-29 16:45:29.921041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.714 [2024-09-29 16:45:29.921082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.714 qpair failed and we were unable to recover it.
00:37:29.714 [2024-09-29 16:45:29.921262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.714 [2024-09-29 16:45:29.921320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.714 qpair failed and we were unable to recover it.
00:37:29.714 [2024-09-29 16:45:29.921476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.714 [2024-09-29 16:45:29.921545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.714 qpair failed and we were unable to recover it.
00:37:29.714 [2024-09-29 16:45:29.921656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.714 [2024-09-29 16:45:29.921705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.714 qpair failed and we were unable to recover it.
00:37:29.714 [2024-09-29 16:45:29.921842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.714 [2024-09-29 16:45:29.921895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.714 qpair failed and we were unable to recover it.
00:37:29.714 [2024-09-29 16:45:29.922080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.714 [2024-09-29 16:45:29.922116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.714 qpair failed and we were unable to recover it.
00:37:29.714 [2024-09-29 16:45:29.922250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.714 [2024-09-29 16:45:29.922295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.714 qpair failed and we were unable to recover it.
00:37:29.714 [2024-09-29 16:45:29.922434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.715 [2024-09-29 16:45:29.922468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.715 qpair failed and we were unable to recover it.
00:37:29.715 [2024-09-29 16:45:29.922582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.715 [2024-09-29 16:45:29.922617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.715 qpair failed and we were unable to recover it.
00:37:29.715 [2024-09-29 16:45:29.922762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.715 [2024-09-29 16:45:29.922796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.715 qpair failed and we were unable to recover it.
00:37:29.715 [2024-09-29 16:45:29.922962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.715 [2024-09-29 16:45:29.923011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.715 qpair failed and we were unable to recover it.
00:37:29.715 [2024-09-29 16:45:29.923199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.715 [2024-09-29 16:45:29.923234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.715 qpair failed and we were unable to recover it.
00:37:29.715 [2024-09-29 16:45:29.923373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.715 [2024-09-29 16:45:29.923407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.715 qpair failed and we were unable to recover it.
00:37:29.715 [2024-09-29 16:45:29.923554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.715 [2024-09-29 16:45:29.923588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.715 qpair failed and we were unable to recover it.
00:37:29.715 [2024-09-29 16:45:29.923748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.715 [2024-09-29 16:45:29.923797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.715 qpair failed and we were unable to recover it.
00:37:29.715 [2024-09-29 16:45:29.923955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.715 [2024-09-29 16:45:29.923991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.715 qpair failed and we were unable to recover it.
00:37:29.715 [2024-09-29 16:45:29.924156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.715 [2024-09-29 16:45:29.924208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.715 qpair failed and we were unable to recover it.
00:37:29.715 [2024-09-29 16:45:29.924365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.715 [2024-09-29 16:45:29.924400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.715 qpair failed and we were unable to recover it.
00:37:29.715 [2024-09-29 16:45:29.924566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.715 [2024-09-29 16:45:29.924603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.715 qpair failed and we were unable to recover it.
00:37:29.715 [2024-09-29 16:45:29.924772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.715 [2024-09-29 16:45:29.924807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.715 qpair failed and we were unable to recover it.
00:37:29.715 [2024-09-29 16:45:29.924974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.715 [2024-09-29 16:45:29.925013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.715 qpair failed and we were unable to recover it.
00:37:29.715 [2024-09-29 16:45:29.925147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.715 [2024-09-29 16:45:29.925186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.715 qpair failed and we were unable to recover it.
00:37:29.715 [2024-09-29 16:45:29.925342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.715 [2024-09-29 16:45:29.925380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.715 qpair failed and we were unable to recover it.
00:37:29.715 [2024-09-29 16:45:29.925547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.715 [2024-09-29 16:45:29.925582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.715 qpair failed and we were unable to recover it.
00:37:29.715 [2024-09-29 16:45:29.925719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.715 [2024-09-29 16:45:29.925754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.715 qpair failed and we were unable to recover it.
00:37:29.715 [2024-09-29 16:45:29.925889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.715 [2024-09-29 16:45:29.925946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.715 qpair failed and we were unable to recover it.
00:37:29.715 [2024-09-29 16:45:29.926093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.715 [2024-09-29 16:45:29.926160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.715 qpair failed and we were unable to recover it.
00:37:29.715 [2024-09-29 16:45:29.926384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.715 [2024-09-29 16:45:29.926439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.715 qpair failed and we were unable to recover it.
00:37:29.715 [2024-09-29 16:45:29.926554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.715 [2024-09-29 16:45:29.926588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.715 qpair failed and we were unable to recover it.
00:37:29.715 [2024-09-29 16:45:29.926756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.715 [2024-09-29 16:45:29.926790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.715 qpair failed and we were unable to recover it.
00:37:29.715 [2024-09-29 16:45:29.926906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.715 [2024-09-29 16:45:29.926941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.715 qpair failed and we were unable to recover it.
00:37:29.715 [2024-09-29 16:45:29.927064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.715 [2024-09-29 16:45:29.927099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.715 qpair failed and we were unable to recover it.
00:37:29.715 [2024-09-29 16:45:29.927268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.715 [2024-09-29 16:45:29.927301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.715 qpair failed and we were unable to recover it.
00:37:29.715 [2024-09-29 16:45:29.927424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.715 [2024-09-29 16:45:29.927458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.715 qpair failed and we were unable to recover it.
00:37:29.715 [2024-09-29 16:45:29.927618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.715 [2024-09-29 16:45:29.927666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.715 qpair failed and we were unable to recover it.
00:37:29.715 [2024-09-29 16:45:29.927829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.715 [2024-09-29 16:45:29.927865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.715 qpair failed and we were unable to recover it.
00:37:29.715 [2024-09-29 16:45:29.928039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.715 [2024-09-29 16:45:29.928073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.715 qpair failed and we were unable to recover it.
00:37:29.716 [2024-09-29 16:45:29.928211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.716 [2024-09-29 16:45:29.928248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.716 qpair failed and we were unable to recover it.
00:37:29.716 [2024-09-29 16:45:29.928428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.716 [2024-09-29 16:45:29.928465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.716 qpair failed and we were unable to recover it.
00:37:29.716 [2024-09-29 16:45:29.928638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.716 [2024-09-29 16:45:29.928719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.716 qpair failed and we were unable to recover it.
00:37:29.716 [2024-09-29 16:45:29.928862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.716 [2024-09-29 16:45:29.928901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.716 qpair failed and we were unable to recover it.
00:37:29.716 [2024-09-29 16:45:29.929080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.716 [2024-09-29 16:45:29.929141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.716 qpair failed and we were unable to recover it.
00:37:29.716 [2024-09-29 16:45:29.929366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.716 [2024-09-29 16:45:29.929424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.716 qpair failed and we were unable to recover it.
00:37:29.716 [2024-09-29 16:45:29.929564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.716 [2024-09-29 16:45:29.929597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.716 qpair failed and we were unable to recover it.
00:37:29.716 [2024-09-29 16:45:29.929762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.716 [2024-09-29 16:45:29.929811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.716 qpair failed and we were unable to recover it.
00:37:29.716 [2024-09-29 16:45:29.929939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.716 [2024-09-29 16:45:29.929975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.716 qpair failed and we were unable to recover it.
00:37:29.716 [2024-09-29 16:45:29.930123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.716 [2024-09-29 16:45:29.930176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.716 qpair failed and we were unable to recover it.
00:37:29.716 [2024-09-29 16:45:29.930410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.716 [2024-09-29 16:45:29.930450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.716 qpair failed and we were unable to recover it.
00:37:29.716 [2024-09-29 16:45:29.930575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.716 [2024-09-29 16:45:29.930613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.716 qpair failed and we were unable to recover it.
00:37:29.716 [2024-09-29 16:45:29.930756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.716 [2024-09-29 16:45:29.930791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.716 qpair failed and we were unable to recover it.
00:37:29.716 [2024-09-29 16:45:29.930948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.716 [2024-09-29 16:45:29.930985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.716 qpair failed and we were unable to recover it.
00:37:29.716 [2024-09-29 16:45:29.931209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.716 [2024-09-29 16:45:29.931265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.716 qpair failed and we were unable to recover it.
00:37:29.716 [2024-09-29 16:45:29.931388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.716 [2024-09-29 16:45:29.931425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.716 qpair failed and we were unable to recover it.
00:37:29.716 [2024-09-29 16:45:29.931584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.716 [2024-09-29 16:45:29.931617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.716 qpair failed and we were unable to recover it.
00:37:29.716 [2024-09-29 16:45:29.931766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.716 [2024-09-29 16:45:29.931799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.716 qpair failed and we were unable to recover it.
00:37:29.716 [2024-09-29 16:45:29.931957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.716 [2024-09-29 16:45:29.931995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.716 qpair failed and we were unable to recover it.
00:37:29.716 [2024-09-29 16:45:29.932151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.716 [2024-09-29 16:45:29.932203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.716 qpair failed and we were unable to recover it.
00:37:29.716 [2024-09-29 16:45:29.932353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.716 [2024-09-29 16:45:29.932389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.716 qpair failed and we were unable to recover it.
00:37:29.716 [2024-09-29 16:45:29.932572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.716 [2024-09-29 16:45:29.932609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.716 qpair failed and we were unable to recover it.
00:37:29.716 [2024-09-29 16:45:29.932755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.716 [2024-09-29 16:45:29.932787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.716 qpair failed and we were unable to recover it.
00:37:29.716 [2024-09-29 16:45:29.932951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.716 [2024-09-29 16:45:29.932999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.716 qpair failed and we were unable to recover it.
00:37:29.716 [2024-09-29 16:45:29.933163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.716 [2024-09-29 16:45:29.933203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.716 qpair failed and we were unable to recover it.
00:37:29.716 [2024-09-29 16:45:29.933406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.716 [2024-09-29 16:45:29.933472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.716 qpair failed and we were unable to recover it.
00:37:29.716 [2024-09-29 16:45:29.933603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.716 [2024-09-29 16:45:29.933637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.716 qpair failed and we were unable to recover it.
00:37:29.716 [2024-09-29 16:45:29.933757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.716 [2024-09-29 16:45:29.933791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.716 qpair failed and we were unable to recover it.
00:37:29.716 [2024-09-29 16:45:29.933929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.716 [2024-09-29 16:45:29.933985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.716 qpair failed and we were unable to recover it.
00:37:29.716 [2024-09-29 16:45:29.934115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.716 [2024-09-29 16:45:29.934165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.716 qpair failed and we were unable to recover it.
00:37:29.716 [2024-09-29 16:45:29.934345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.716 [2024-09-29 16:45:29.934382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.716 qpair failed and we were unable to recover it.
00:37:29.716 [2024-09-29 16:45:29.934548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.716 [2024-09-29 16:45:29.934584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.716 qpair failed and we were unable to recover it.
00:37:29.716 [2024-09-29 16:45:29.934726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.716 [2024-09-29 16:45:29.934761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.716 qpair failed and we were unable to recover it.
00:37:29.716 [2024-09-29 16:45:29.934869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.716 [2024-09-29 16:45:29.934903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.717 qpair failed and we were unable to recover it.
00:37:29.717 [2024-09-29 16:45:29.935051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.717 [2024-09-29 16:45:29.935085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.717 qpair failed and we were unable to recover it.
00:37:29.717 [2024-09-29 16:45:29.935272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.717 [2024-09-29 16:45:29.935339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.717 qpair failed and we were unable to recover it.
00:37:29.717 [2024-09-29 16:45:29.935475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.717 [2024-09-29 16:45:29.935515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.717 qpair failed and we were unable to recover it.
00:37:29.717 [2024-09-29 16:45:29.935684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.717 [2024-09-29 16:45:29.935719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.717 qpair failed and we were unable to recover it.
00:37:29.717 [2024-09-29 16:45:29.935830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.717 [2024-09-29 16:45:29.935863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.717 qpair failed and we were unable to recover it.
00:37:29.717 [2024-09-29 16:45:29.935970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.717 [2024-09-29 16:45:29.936003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.717 qpair failed and we were unable to recover it. 00:37:29.717 [2024-09-29 16:45:29.936168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.717 [2024-09-29 16:45:29.936204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.717 qpair failed and we were unable to recover it. 00:37:29.717 [2024-09-29 16:45:29.936434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.717 [2024-09-29 16:45:29.936493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.717 qpair failed and we were unable to recover it. 00:37:29.717 [2024-09-29 16:45:29.936623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.717 [2024-09-29 16:45:29.936661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.717 qpair failed and we were unable to recover it. 00:37:29.717 [2024-09-29 16:45:29.936812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.717 [2024-09-29 16:45:29.936849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.717 qpair failed and we were unable to recover it. 
00:37:29.717 [2024-09-29 16:45:29.936977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.717 [2024-09-29 16:45:29.937036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.717 qpair failed and we were unable to recover it. 00:37:29.717 [2024-09-29 16:45:29.937167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.717 [2024-09-29 16:45:29.937221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.717 qpair failed and we were unable to recover it. 00:37:29.717 [2024-09-29 16:45:29.937426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.717 [2024-09-29 16:45:29.937479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.717 qpair failed and we were unable to recover it. 00:37:29.717 [2024-09-29 16:45:29.937594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.717 [2024-09-29 16:45:29.937628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.717 qpair failed and we were unable to recover it. 00:37:29.717 [2024-09-29 16:45:29.937782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.717 [2024-09-29 16:45:29.937816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.717 qpair failed and we were unable to recover it. 
00:37:29.717 [2024-09-29 16:45:29.937946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.717 [2024-09-29 16:45:29.937983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.717 qpair failed and we were unable to recover it. 00:37:29.717 [2024-09-29 16:45:29.938170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.717 [2024-09-29 16:45:29.938229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.717 qpair failed and we were unable to recover it. 00:37:29.717 [2024-09-29 16:45:29.938441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.717 [2024-09-29 16:45:29.938496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.717 qpair failed and we were unable to recover it. 00:37:29.717 [2024-09-29 16:45:29.938634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.717 [2024-09-29 16:45:29.938696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.717 qpair failed and we were unable to recover it. 00:37:29.717 [2024-09-29 16:45:29.938858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.717 [2024-09-29 16:45:29.938891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.717 qpair failed and we were unable to recover it. 
00:37:29.717 [2024-09-29 16:45:29.939034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.717 [2024-09-29 16:45:29.939071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.717 qpair failed and we were unable to recover it. 00:37:29.717 [2024-09-29 16:45:29.939235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.717 [2024-09-29 16:45:29.939268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.717 qpair failed and we were unable to recover it. 00:37:29.717 [2024-09-29 16:45:29.939442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.717 [2024-09-29 16:45:29.939478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.717 qpair failed and we were unable to recover it. 00:37:29.717 [2024-09-29 16:45:29.939643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.717 [2024-09-29 16:45:29.939684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.717 qpair failed and we were unable to recover it. 00:37:29.717 [2024-09-29 16:45:29.939833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.717 [2024-09-29 16:45:29.939867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.717 qpair failed and we were unable to recover it. 
00:37:29.717 [2024-09-29 16:45:29.940000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.717 [2024-09-29 16:45:29.940048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.717 qpair failed and we were unable to recover it. 00:37:29.717 [2024-09-29 16:45:29.940255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.717 [2024-09-29 16:45:29.940292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.717 qpair failed and we were unable to recover it. 00:37:29.717 [2024-09-29 16:45:29.940421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.717 [2024-09-29 16:45:29.940458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.717 qpair failed and we were unable to recover it. 00:37:29.717 [2024-09-29 16:45:29.940592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.718 [2024-09-29 16:45:29.940625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.718 qpair failed and we were unable to recover it. 00:37:29.718 [2024-09-29 16:45:29.940748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.718 [2024-09-29 16:45:29.940782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.718 qpair failed and we were unable to recover it. 
00:37:29.718 [2024-09-29 16:45:29.940893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.718 [2024-09-29 16:45:29.940926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.718 qpair failed and we were unable to recover it. 00:37:29.718 [2024-09-29 16:45:29.941117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.718 [2024-09-29 16:45:29.941153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.718 qpair failed and we were unable to recover it. 00:37:29.718 [2024-09-29 16:45:29.941307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.718 [2024-09-29 16:45:29.941343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.718 qpair failed and we were unable to recover it. 00:37:29.718 [2024-09-29 16:45:29.941490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.718 [2024-09-29 16:45:29.941526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.718 qpair failed and we were unable to recover it. 00:37:29.718 [2024-09-29 16:45:29.941712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.718 [2024-09-29 16:45:29.941759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.718 qpair failed and we were unable to recover it. 
00:37:29.718 [2024-09-29 16:45:29.941908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.718 [2024-09-29 16:45:29.941977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.718 qpair failed and we were unable to recover it. 00:37:29.718 [2024-09-29 16:45:29.942137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.718 [2024-09-29 16:45:29.942189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.718 qpair failed and we were unable to recover it. 00:37:29.718 [2024-09-29 16:45:29.942437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.718 [2024-09-29 16:45:29.942475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.718 qpair failed and we were unable to recover it. 00:37:29.718 [2024-09-29 16:45:29.942639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.718 [2024-09-29 16:45:29.942682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.718 qpair failed and we were unable to recover it. 00:37:29.718 [2024-09-29 16:45:29.942826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.718 [2024-09-29 16:45:29.942859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.718 qpair failed and we were unable to recover it. 
00:37:29.718 [2024-09-29 16:45:29.943015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.718 [2024-09-29 16:45:29.943053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.718 qpair failed and we were unable to recover it. 00:37:29.718 [2024-09-29 16:45:29.943225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.718 [2024-09-29 16:45:29.943262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.718 qpair failed and we were unable to recover it. 00:37:29.718 [2024-09-29 16:45:29.943466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.718 [2024-09-29 16:45:29.943504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.718 qpair failed and we were unable to recover it. 00:37:29.718 [2024-09-29 16:45:29.943661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.718 [2024-09-29 16:45:29.943707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.718 qpair failed and we were unable to recover it. 00:37:29.718 [2024-09-29 16:45:29.943865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.718 [2024-09-29 16:45:29.943913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.718 qpair failed and we were unable to recover it. 
00:37:29.718 [2024-09-29 16:45:29.944083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.718 [2024-09-29 16:45:29.944138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.718 qpair failed and we were unable to recover it. 00:37:29.718 [2024-09-29 16:45:29.944425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.718 [2024-09-29 16:45:29.944481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.718 qpair failed and we were unable to recover it. 00:37:29.718 [2024-09-29 16:45:29.944622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.718 [2024-09-29 16:45:29.944656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.718 qpair failed and we were unable to recover it. 00:37:29.718 [2024-09-29 16:45:29.944833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.718 [2024-09-29 16:45:29.944867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.718 qpair failed and we were unable to recover it. 00:37:29.718 [2024-09-29 16:45:29.944996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.718 [2024-09-29 16:45:29.945043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.718 qpair failed and we were unable to recover it. 
00:37:29.718 [2024-09-29 16:45:29.945159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.718 [2024-09-29 16:45:29.945198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.718 qpair failed and we were unable to recover it. 00:37:29.718 [2024-09-29 16:45:29.945317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.718 [2024-09-29 16:45:29.945352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.718 qpair failed and we were unable to recover it. 00:37:29.718 [2024-09-29 16:45:29.945494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.718 [2024-09-29 16:45:29.945528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.718 qpair failed and we were unable to recover it. 00:37:29.718 [2024-09-29 16:45:29.945696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.718 [2024-09-29 16:45:29.945743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.718 qpair failed and we were unable to recover it. 00:37:29.718 [2024-09-29 16:45:29.945876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.718 [2024-09-29 16:45:29.945912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.718 qpair failed and we were unable to recover it. 
00:37:29.718 [2024-09-29 16:45:29.946098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.718 [2024-09-29 16:45:29.946136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.718 qpair failed and we were unable to recover it. 00:37:29.718 [2024-09-29 16:45:29.946343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.718 [2024-09-29 16:45:29.946381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.718 qpair failed and we were unable to recover it. 00:37:29.718 [2024-09-29 16:45:29.946538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.718 [2024-09-29 16:45:29.946573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.718 qpair failed and we were unable to recover it. 00:37:29.718 [2024-09-29 16:45:29.946742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.718 [2024-09-29 16:45:29.946775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.718 qpair failed and we were unable to recover it. 00:37:29.718 [2024-09-29 16:45:29.946919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.718 [2024-09-29 16:45:29.946952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.718 qpair failed and we were unable to recover it. 
00:37:29.718 [2024-09-29 16:45:29.947070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.719 [2024-09-29 16:45:29.947103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.719 qpair failed and we were unable to recover it. 00:37:29.719 [2024-09-29 16:45:29.947232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.719 [2024-09-29 16:45:29.947268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.719 qpair failed and we were unable to recover it. 00:37:29.719 [2024-09-29 16:45:29.947393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.719 [2024-09-29 16:45:29.947430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.719 qpair failed and we were unable to recover it. 00:37:29.719 [2024-09-29 16:45:29.947573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.719 [2024-09-29 16:45:29.947606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.719 qpair failed and we were unable to recover it. 00:37:29.719 [2024-09-29 16:45:29.947727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.719 [2024-09-29 16:45:29.947760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.719 qpair failed and we were unable to recover it. 
00:37:29.719 [2024-09-29 16:45:29.947932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.719 [2024-09-29 16:45:29.947965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.719 qpair failed and we were unable to recover it. 00:37:29.719 [2024-09-29 16:45:29.948109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.719 [2024-09-29 16:45:29.948144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.719 qpair failed and we were unable to recover it. 00:37:29.719 [2024-09-29 16:45:29.948324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.719 [2024-09-29 16:45:29.948361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.719 qpair failed and we were unable to recover it. 00:37:29.719 [2024-09-29 16:45:29.948504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.719 [2024-09-29 16:45:29.948553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.719 qpair failed and we were unable to recover it. 00:37:29.719 [2024-09-29 16:45:29.948686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.719 [2024-09-29 16:45:29.948737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.719 qpair failed and we were unable to recover it. 
00:37:29.719 [2024-09-29 16:45:29.948854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.719 [2024-09-29 16:45:29.948886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.719 qpair failed and we were unable to recover it. 00:37:29.719 [2024-09-29 16:45:29.949072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.719 [2024-09-29 16:45:29.949108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.719 qpair failed and we were unable to recover it. 00:37:29.719 [2024-09-29 16:45:29.949245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.719 [2024-09-29 16:45:29.949282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.719 qpair failed and we were unable to recover it. 00:37:29.719 [2024-09-29 16:45:29.949526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.719 [2024-09-29 16:45:29.949562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.719 qpair failed and we were unable to recover it. 00:37:29.719 [2024-09-29 16:45:29.949749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.719 [2024-09-29 16:45:29.949796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.719 qpair failed and we were unable to recover it. 
00:37:29.719 [2024-09-29 16:45:29.950002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.719 [2024-09-29 16:45:29.950042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.719 qpair failed and we were unable to recover it. 00:37:29.719 [2024-09-29 16:45:29.950241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.719 [2024-09-29 16:45:29.950299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.719 qpair failed and we were unable to recover it. 00:37:29.719 [2024-09-29 16:45:29.950491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.719 [2024-09-29 16:45:29.950528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.719 qpair failed and we were unable to recover it. 00:37:29.719 [2024-09-29 16:45:29.950668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.719 [2024-09-29 16:45:29.950711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.719 qpair failed and we were unable to recover it. 00:37:29.719 [2024-09-29 16:45:29.950857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.719 [2024-09-29 16:45:29.950890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.719 qpair failed and we were unable to recover it. 
00:37:29.719 [2024-09-29 16:45:29.951044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.719 [2024-09-29 16:45:29.951081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.719 qpair failed and we were unable to recover it. 00:37:29.719 [2024-09-29 16:45:29.951266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.719 [2024-09-29 16:45:29.951303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.719 qpair failed and we were unable to recover it. 00:37:29.719 [2024-09-29 16:45:29.951522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.719 [2024-09-29 16:45:29.951559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.719 qpair failed and we were unable to recover it. 00:37:29.719 [2024-09-29 16:45:29.951732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.719 [2024-09-29 16:45:29.951766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.719 qpair failed and we were unable to recover it. 00:37:29.719 [2024-09-29 16:45:29.951880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.719 [2024-09-29 16:45:29.951914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.719 qpair failed and we were unable to recover it. 
00:37:29.719 [2024-09-29 16:45:29.952053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.719 [2024-09-29 16:45:29.952088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.719 qpair failed and we were unable to recover it. 00:37:29.719 [2024-09-29 16:45:29.952262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.719 [2024-09-29 16:45:29.952301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.719 qpair failed and we were unable to recover it. 00:37:29.719 [2024-09-29 16:45:29.952451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.719 [2024-09-29 16:45:29.952488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.719 qpair failed and we were unable to recover it. 00:37:29.719 [2024-09-29 16:45:29.952648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.719 [2024-09-29 16:45:29.952688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.719 qpair failed and we were unable to recover it. 00:37:29.719 [2024-09-29 16:45:29.952828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.719 [2024-09-29 16:45:29.952860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.719 qpair failed and we were unable to recover it. 
00:37:29.719 [2024-09-29 16:45:29.953014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.719 [2024-09-29 16:45:29.953068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.719 qpair failed and we were unable to recover it. 00:37:29.719 [2024-09-29 16:45:29.953303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.719 [2024-09-29 16:45:29.953362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.719 qpair failed and we were unable to recover it. 00:37:29.719 [2024-09-29 16:45:29.953485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.719 [2024-09-29 16:45:29.953522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.720 qpair failed and we were unable to recover it. 00:37:29.720 [2024-09-29 16:45:29.953681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.720 [2024-09-29 16:45:29.953736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.720 qpair failed and we were unable to recover it. 00:37:29.720 [2024-09-29 16:45:29.953850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.720 [2024-09-29 16:45:29.953885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.720 qpair failed and we were unable to recover it. 
00:37:29.720 [2024-09-29 16:45:29.954034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.720 [2024-09-29 16:45:29.954068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.720 qpair failed and we were unable to recover it. 00:37:29.720 [2024-09-29 16:45:29.954243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.720 [2024-09-29 16:45:29.954301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.720 qpair failed and we were unable to recover it. 00:37:29.720 [2024-09-29 16:45:29.954554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.720 [2024-09-29 16:45:29.954591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.720 qpair failed and we were unable to recover it. 00:37:29.720 [2024-09-29 16:45:29.954763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.720 [2024-09-29 16:45:29.954797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.720 qpair failed and we were unable to recover it. 00:37:29.720 [2024-09-29 16:45:29.955013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.720 [2024-09-29 16:45:29.955070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.720 qpair failed and we were unable to recover it. 
00:37:29.720 [2024-09-29 16:45:29.955333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.720 [2024-09-29 16:45:29.955390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.720 qpair failed and we were unable to recover it. 00:37:29.720 [2024-09-29 16:45:29.955508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.720 [2024-09-29 16:45:29.955545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.720 qpair failed and we were unable to recover it. 00:37:29.720 [2024-09-29 16:45:29.955712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.720 [2024-09-29 16:45:29.955745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.720 qpair failed and we were unable to recover it. 00:37:29.720 [2024-09-29 16:45:29.955901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.720 [2024-09-29 16:45:29.955949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.720 qpair failed and we were unable to recover it. 00:37:29.720 [2024-09-29 16:45:29.956120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.720 [2024-09-29 16:45:29.956175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.720 qpair failed and we were unable to recover it. 
00:37:29.720 [2024-09-29 16:45:29.956388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.720 [2024-09-29 16:45:29.956427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.720 qpair failed and we were unable to recover it. 00:37:29.720 [2024-09-29 16:45:29.956585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.720 [2024-09-29 16:45:29.956622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.720 qpair failed and we were unable to recover it. 00:37:29.720 [2024-09-29 16:45:29.956791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.720 [2024-09-29 16:45:29.956825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.720 qpair failed and we were unable to recover it. 00:37:29.720 [2024-09-29 16:45:29.956991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.720 [2024-09-29 16:45:29.957029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.720 qpair failed and we were unable to recover it. 00:37:29.720 [2024-09-29 16:45:29.957208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.720 [2024-09-29 16:45:29.957245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.720 qpair failed and we were unable to recover it. 
00:37:29.720 [2024-09-29 16:45:29.957510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.720 [2024-09-29 16:45:29.957568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.720 qpair failed and we were unable to recover it. 00:37:29.720 [2024-09-29 16:45:29.957727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.720 [2024-09-29 16:45:29.957761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.720 qpair failed and we were unable to recover it. 00:37:29.720 [2024-09-29 16:45:29.957892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.720 [2024-09-29 16:45:29.957926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.720 qpair failed and we were unable to recover it. 00:37:29.720 [2024-09-29 16:45:29.958095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.720 [2024-09-29 16:45:29.958132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.720 qpair failed and we were unable to recover it. 00:37:29.720 [2024-09-29 16:45:29.958343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.720 [2024-09-29 16:45:29.958407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.720 qpair failed and we were unable to recover it. 
00:37:29.720 [2024-09-29 16:45:29.958562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.720 [2024-09-29 16:45:29.958597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.720 qpair failed and we were unable to recover it. 00:37:29.720 [2024-09-29 16:45:29.958745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.720 [2024-09-29 16:45:29.958777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.720 qpair failed and we were unable to recover it. 00:37:29.720 [2024-09-29 16:45:29.958925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.720 [2024-09-29 16:45:29.958957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.720 qpair failed and we were unable to recover it. 00:37:29.720 [2024-09-29 16:45:29.959118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.720 [2024-09-29 16:45:29.959155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.720 qpair failed and we were unable to recover it. 00:37:29.720 [2024-09-29 16:45:29.959397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.720 [2024-09-29 16:45:29.959454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.720 qpair failed and we were unable to recover it. 
00:37:29.720 [2024-09-29 16:45:29.959586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.720 [2024-09-29 16:45:29.959618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.720 qpair failed and we were unable to recover it. 00:37:29.720 [2024-09-29 16:45:29.959771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.720 [2024-09-29 16:45:29.959803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.720 qpair failed and we were unable to recover it. 00:37:29.720 [2024-09-29 16:45:29.959941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.720 [2024-09-29 16:45:29.959992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.721 qpair failed and we were unable to recover it. 00:37:29.721 [2024-09-29 16:45:29.960130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.721 [2024-09-29 16:45:29.960163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.721 qpair failed and we were unable to recover it. 00:37:29.721 [2024-09-29 16:45:29.960282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.721 [2024-09-29 16:45:29.960314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.721 qpair failed and we were unable to recover it. 
00:37:29.721 [2024-09-29 16:45:29.960496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.721 [2024-09-29 16:45:29.960527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.721 qpair failed and we were unable to recover it. 00:37:29.721 [2024-09-29 16:45:29.960714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.721 [2024-09-29 16:45:29.960747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.721 qpair failed and we were unable to recover it. 00:37:29.721 [2024-09-29 16:45:29.960889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.721 [2024-09-29 16:45:29.960921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.721 qpair failed and we were unable to recover it. 00:37:29.721 [2024-09-29 16:45:29.961062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.721 [2024-09-29 16:45:29.961111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.721 qpair failed and we were unable to recover it. 00:37:29.721 [2024-09-29 16:45:29.961287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.721 [2024-09-29 16:45:29.961324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.721 qpair failed and we were unable to recover it. 
00:37:29.721 [2024-09-29 16:45:29.961481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.721 [2024-09-29 16:45:29.961522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.721 qpair failed and we were unable to recover it. 00:37:29.721 [2024-09-29 16:45:29.961651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.721 [2024-09-29 16:45:29.961698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.721 qpair failed and we were unable to recover it. 00:37:29.721 [2024-09-29 16:45:29.961872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.721 [2024-09-29 16:45:29.961920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.721 qpair failed and we were unable to recover it. 00:37:29.721 [2024-09-29 16:45:29.962091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.721 [2024-09-29 16:45:29.962145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.721 qpair failed and we were unable to recover it. 00:37:29.721 [2024-09-29 16:45:29.962306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.721 [2024-09-29 16:45:29.962358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.721 qpair failed and we were unable to recover it. 
00:37:29.721 [2024-09-29 16:45:29.962460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.721 [2024-09-29 16:45:29.962494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.721 qpair failed and we were unable to recover it. 00:37:29.721 [2024-09-29 16:45:29.962637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.721 [2024-09-29 16:45:29.962677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.721 qpair failed and we were unable to recover it. 00:37:29.721 [2024-09-29 16:45:29.962863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.721 [2024-09-29 16:45:29.962916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.721 qpair failed and we were unable to recover it. 00:37:29.721 [2024-09-29 16:45:29.963061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.721 [2024-09-29 16:45:29.963095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.721 qpair failed and we were unable to recover it. 00:37:29.721 [2024-09-29 16:45:29.963209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.721 [2024-09-29 16:45:29.963242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.721 qpair failed and we were unable to recover it. 
00:37:29.721 [2024-09-29 16:45:29.963405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.721 [2024-09-29 16:45:29.963437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.721 qpair failed and we were unable to recover it. 00:37:29.721 [2024-09-29 16:45:29.963594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.721 [2024-09-29 16:45:29.963627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.721 qpair failed and we were unable to recover it. 00:37:29.721 [2024-09-29 16:45:29.963759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.721 [2024-09-29 16:45:29.963793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.721 qpair failed and we were unable to recover it. 00:37:29.721 [2024-09-29 16:45:29.963932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.721 [2024-09-29 16:45:29.963964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.721 qpair failed and we were unable to recover it. 00:37:29.721 [2024-09-29 16:45:29.964126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.721 [2024-09-29 16:45:29.964162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.721 qpair failed and we were unable to recover it. 
00:37:29.721 [2024-09-29 16:45:29.964319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.721 [2024-09-29 16:45:29.964355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.721 qpair failed and we were unable to recover it. 00:37:29.721 [2024-09-29 16:45:29.964473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.721 [2024-09-29 16:45:29.964509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.721 qpair failed and we were unable to recover it. 00:37:29.721 [2024-09-29 16:45:29.964682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.721 [2024-09-29 16:45:29.964718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.721 qpair failed and we were unable to recover it. 00:37:29.721 [2024-09-29 16:45:29.964833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.721 [2024-09-29 16:45:29.964867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.721 qpair failed and we were unable to recover it. 00:37:29.721 [2024-09-29 16:45:29.964999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.721 [2024-09-29 16:45:29.965052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.721 qpair failed and we were unable to recover it. 
00:37:29.721 [2024-09-29 16:45:29.965215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.721 [2024-09-29 16:45:29.965265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.721 qpair failed and we were unable to recover it. 00:37:29.721 [2024-09-29 16:45:29.965460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.721 [2024-09-29 16:45:29.965509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.721 qpair failed and we were unable to recover it. 00:37:29.721 [2024-09-29 16:45:29.965660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.721 [2024-09-29 16:45:29.965706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.721 qpair failed and we were unable to recover it. 00:37:29.721 [2024-09-29 16:45:29.965855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.721 [2024-09-29 16:45:29.965889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.721 qpair failed and we were unable to recover it. 00:37:29.721 [2024-09-29 16:45:29.966079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.721 [2024-09-29 16:45:29.966116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.721 qpair failed and we were unable to recover it. 
00:37:29.721 [2024-09-29 16:45:29.966267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.722 [2024-09-29 16:45:29.966303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.722 qpair failed and we were unable to recover it. 00:37:29.722 [2024-09-29 16:45:29.966460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.722 [2024-09-29 16:45:29.966502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.722 qpair failed and we were unable to recover it. 00:37:29.722 [2024-09-29 16:45:29.966705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.722 [2024-09-29 16:45:29.966741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.722 qpair failed and we were unable to recover it. 00:37:29.722 [2024-09-29 16:45:29.966883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.722 [2024-09-29 16:45:29.966935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.722 qpair failed and we were unable to recover it. 00:37:29.722 [2024-09-29 16:45:29.967101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.722 [2024-09-29 16:45:29.967153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.722 qpair failed and we were unable to recover it. 
00:37:29.722 [2024-09-29 16:45:29.967278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.722 [2024-09-29 16:45:29.967332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.722 qpair failed and we were unable to recover it. 00:37:29.722 [2024-09-29 16:45:29.967479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.722 [2024-09-29 16:45:29.967513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.722 qpair failed and we were unable to recover it. 00:37:29.722 [2024-09-29 16:45:29.967700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.722 [2024-09-29 16:45:29.967748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.722 qpair failed and we were unable to recover it. 00:37:29.722 [2024-09-29 16:45:29.967881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.722 [2024-09-29 16:45:29.967928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.722 qpair failed and we were unable to recover it. 00:37:29.722 [2024-09-29 16:45:29.968110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.722 [2024-09-29 16:45:29.968144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.722 qpair failed and we were unable to recover it. 
00:37:29.722 [2024-09-29 16:45:29.968371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.722 [2024-09-29 16:45:29.968409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.722 qpair failed and we were unable to recover it. 00:37:29.722 [2024-09-29 16:45:29.968581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.722 [2024-09-29 16:45:29.968615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.722 qpair failed and we were unable to recover it. 00:37:29.722 [2024-09-29 16:45:29.968756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.722 [2024-09-29 16:45:29.968789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.722 qpair failed and we were unable to recover it. 00:37:29.722 [2024-09-29 16:45:29.968963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.722 [2024-09-29 16:45:29.968997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.722 qpair failed and we were unable to recover it. 00:37:29.722 [2024-09-29 16:45:29.969152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.722 [2024-09-29 16:45:29.969204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.722 qpair failed and we were unable to recover it. 
00:37:29.722 [2024-09-29 16:45:29.969364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.722 [2024-09-29 16:45:29.969422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.722 qpair failed and we were unable to recover it. 00:37:29.722 [2024-09-29 16:45:29.969565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.722 [2024-09-29 16:45:29.969598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.722 qpair failed and we were unable to recover it. 00:37:29.722 [2024-09-29 16:45:29.969747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.722 [2024-09-29 16:45:29.969796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.722 qpair failed and we were unable to recover it. 00:37:29.722 [2024-09-29 16:45:29.969949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.722 [2024-09-29 16:45:29.969985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.722 qpair failed and we were unable to recover it. 00:37:29.722 [2024-09-29 16:45:29.970131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.722 [2024-09-29 16:45:29.970165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.722 qpair failed and we were unable to recover it. 
00:37:29.722 [2024-09-29 16:45:29.970275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.722 [2024-09-29 16:45:29.970308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.722 qpair failed and we were unable to recover it. 00:37:29.722 [2024-09-29 16:45:29.970446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.722 [2024-09-29 16:45:29.970478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.722 qpair failed and we were unable to recover it. 00:37:29.722 [2024-09-29 16:45:29.970586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.722 [2024-09-29 16:45:29.970619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.722 qpair failed and we were unable to recover it. 00:37:29.722 [2024-09-29 16:45:29.970795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.722 [2024-09-29 16:45:29.970830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.722 qpair failed and we were unable to recover it. 00:37:29.722 [2024-09-29 16:45:29.971021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.722 [2024-09-29 16:45:29.971075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.722 qpair failed and we were unable to recover it. 
00:37:29.722 [2024-09-29 16:45:29.971301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.723 [2024-09-29 16:45:29.971353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.723 qpair failed and we were unable to recover it. 00:37:29.723 [2024-09-29 16:45:29.971492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.723 [2024-09-29 16:45:29.971525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.723 qpair failed and we were unable to recover it. 00:37:29.723 [2024-09-29 16:45:29.971665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.723 [2024-09-29 16:45:29.971706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.723 qpair failed and we were unable to recover it. 00:37:29.723 [2024-09-29 16:45:29.971882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.723 [2024-09-29 16:45:29.971915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.723 qpair failed and we were unable to recover it. 00:37:29.723 [2024-09-29 16:45:29.972068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.723 [2024-09-29 16:45:29.972102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.723 qpair failed and we were unable to recover it. 
00:37:29.723 [2024-09-29 16:45:29.972272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.723 [2024-09-29 16:45:29.972307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.723 qpair failed and we were unable to recover it. 00:37:29.723 [2024-09-29 16:45:29.972475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.723 [2024-09-29 16:45:29.972509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.723 qpair failed and we were unable to recover it. 00:37:29.723 [2024-09-29 16:45:29.972627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.723 [2024-09-29 16:45:29.972684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.723 qpair failed and we were unable to recover it. 00:37:29.723 [2024-09-29 16:45:29.972809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.723 [2024-09-29 16:45:29.972844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.723 qpair failed and we were unable to recover it. 00:37:29.723 [2024-09-29 16:45:29.972990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.723 [2024-09-29 16:45:29.973026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.723 qpair failed and we were unable to recover it. 
00:37:29.723 [2024-09-29 16:45:29.973215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.723 [2024-09-29 16:45:29.973253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.723 qpair failed and we were unable to recover it. 00:37:29.723 [2024-09-29 16:45:29.973384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.723 [2024-09-29 16:45:29.973421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.723 qpair failed and we were unable to recover it. 00:37:29.723 [2024-09-29 16:45:29.973541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.723 [2024-09-29 16:45:29.973579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.723 qpair failed and we were unable to recover it. 00:37:29.723 [2024-09-29 16:45:29.973767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.723 [2024-09-29 16:45:29.973803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.723 qpair failed and we were unable to recover it. 00:37:29.723 [2024-09-29 16:45:29.973967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.723 [2024-09-29 16:45:29.974032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.723 qpair failed and we were unable to recover it. 
00:37:29.723 [2024-09-29 16:45:29.974194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.723 [2024-09-29 16:45:29.974246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.723 qpair failed and we were unable to recover it. 00:37:29.723 [2024-09-29 16:45:29.974461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.723 [2024-09-29 16:45:29.974496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.723 qpair failed and we were unable to recover it. 00:37:29.723 [2024-09-29 16:45:29.974625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.723 [2024-09-29 16:45:29.974663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.723 qpair failed and we were unable to recover it. 00:37:29.723 [2024-09-29 16:45:29.974862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.723 [2024-09-29 16:45:29.974910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.723 qpair failed and we were unable to recover it. 00:37:29.723 [2024-09-29 16:45:29.975078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.723 [2024-09-29 16:45:29.975116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.723 qpair failed and we were unable to recover it. 
00:37:29.723 [2024-09-29 16:45:29.975241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.723 [2024-09-29 16:45:29.975276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.723 qpair failed and we were unable to recover it. 00:37:29.723 [2024-09-29 16:45:29.975443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.723 [2024-09-29 16:45:29.975475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.723 qpair failed and we were unable to recover it. 00:37:29.723 [2024-09-29 16:45:29.975583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.723 [2024-09-29 16:45:29.975616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.723 qpair failed and we were unable to recover it. 00:37:29.723 [2024-09-29 16:45:29.975764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.723 [2024-09-29 16:45:29.975797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.723 qpair failed and we were unable to recover it. 00:37:29.723 [2024-09-29 16:45:29.975934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.723 [2024-09-29 16:45:29.975989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.723 qpair failed and we were unable to recover it. 
00:37:29.723 [2024-09-29 16:45:29.976178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.723 [2024-09-29 16:45:29.976230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.723 qpair failed and we were unable to recover it. 00:37:29.723 [2024-09-29 16:45:29.976417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.723 [2024-09-29 16:45:29.976469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.723 qpair failed and we were unable to recover it. 00:37:29.723 [2024-09-29 16:45:29.976588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.723 [2024-09-29 16:45:29.976622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.723 qpair failed and we were unable to recover it. 00:37:29.723 [2024-09-29 16:45:29.976782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.723 [2024-09-29 16:45:29.976835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.723 qpair failed and we were unable to recover it. 00:37:29.723 [2024-09-29 16:45:29.977003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.724 [2024-09-29 16:45:29.977043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.724 qpair failed and we were unable to recover it. 
00:37:29.724 [2024-09-29 16:45:29.977211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.724 [2024-09-29 16:45:29.977280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.724 qpair failed and we were unable to recover it. 00:37:29.724 [2024-09-29 16:45:29.977564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.724 [2024-09-29 16:45:29.977632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.724 qpair failed and we were unable to recover it. 00:37:29.724 [2024-09-29 16:45:29.977773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.724 [2024-09-29 16:45:29.977807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.724 qpair failed and we were unable to recover it. 00:37:29.724 [2024-09-29 16:45:29.977918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.724 [2024-09-29 16:45:29.977972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.724 qpair failed and we were unable to recover it. 00:37:29.724 [2024-09-29 16:45:29.978154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.724 [2024-09-29 16:45:29.978186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.724 qpair failed and we were unable to recover it. 
00:37:29.724 [2024-09-29 16:45:29.978360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.724 [2024-09-29 16:45:29.978397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.724 qpair failed and we were unable to recover it. 00:37:29.724 [2024-09-29 16:45:29.978569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.724 [2024-09-29 16:45:29.978601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.724 qpair failed and we were unable to recover it. 00:37:29.724 [2024-09-29 16:45:29.978726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.724 [2024-09-29 16:45:29.978759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.724 qpair failed and we were unable to recover it. 00:37:29.724 [2024-09-29 16:45:29.978924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.724 [2024-09-29 16:45:29.978974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.724 qpair failed and we were unable to recover it. 00:37:29.724 [2024-09-29 16:45:29.979136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.724 [2024-09-29 16:45:29.979200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.724 qpair failed and we were unable to recover it. 
00:37:29.724 [2024-09-29 16:45:29.979416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.724 [2024-09-29 16:45:29.979454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.724 qpair failed and we were unable to recover it. 00:37:29.724 [2024-09-29 16:45:29.979646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.724 [2024-09-29 16:45:29.979691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.724 qpair failed and we were unable to recover it. 00:37:29.724 [2024-09-29 16:45:29.979854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.724 [2024-09-29 16:45:29.979886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.724 qpair failed and we were unable to recover it. 00:37:29.724 [2024-09-29 16:45:29.980146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.724 [2024-09-29 16:45:29.980202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.724 qpair failed and we were unable to recover it. 00:37:29.724 [2024-09-29 16:45:29.980358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.724 [2024-09-29 16:45:29.980394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.724 qpair failed and we were unable to recover it. 
00:37:29.724 [2024-09-29 16:45:29.980524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.724 [2024-09-29 16:45:29.980571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.724 qpair failed and we were unable to recover it. 00:37:29.724 [2024-09-29 16:45:29.980715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.724 [2024-09-29 16:45:29.980747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.724 qpair failed and we were unable to recover it. 00:37:29.724 [2024-09-29 16:45:29.980861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.724 [2024-09-29 16:45:29.980893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.724 qpair failed and we were unable to recover it. 00:37:29.724 [2024-09-29 16:45:29.981021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.724 [2024-09-29 16:45:29.981057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.724 qpair failed and we were unable to recover it. 00:37:29.724 [2024-09-29 16:45:29.981238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.724 [2024-09-29 16:45:29.981274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.724 qpair failed and we were unable to recover it. 
00:37:29.724 [2024-09-29 16:45:29.981402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.724 [2024-09-29 16:45:29.981439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.724 qpair failed and we were unable to recover it. 00:37:29.724 [2024-09-29 16:45:29.981569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.724 [2024-09-29 16:45:29.981605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.724 qpair failed and we were unable to recover it. 00:37:29.724 [2024-09-29 16:45:29.981743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.724 [2024-09-29 16:45:29.981776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.724 qpair failed and we were unable to recover it. 00:37:29.724 [2024-09-29 16:45:29.981881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.724 [2024-09-29 16:45:29.981914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.724 qpair failed and we were unable to recover it. 00:37:29.724 [2024-09-29 16:45:29.982079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.724 [2024-09-29 16:45:29.982114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.724 qpair failed and we were unable to recover it. 
00:37:29.724 [2024-09-29 16:45:29.982266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.724 [2024-09-29 16:45:29.982302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.724 qpair failed and we were unable to recover it. 00:37:29.724 [2024-09-29 16:45:29.982435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.724 [2024-09-29 16:45:29.982471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.724 qpair failed and we were unable to recover it. 00:37:29.724 [2024-09-29 16:45:29.982617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.724 [2024-09-29 16:45:29.982670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.724 qpair failed and we were unable to recover it. 00:37:29.724 [2024-09-29 16:45:29.982867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.724 [2024-09-29 16:45:29.982915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.724 qpair failed and we were unable to recover it. 00:37:29.724 [2024-09-29 16:45:29.983048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.724 [2024-09-29 16:45:29.983088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.724 qpair failed and we were unable to recover it. 
00:37:29.724 [2024-09-29 16:45:29.983238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.724 [2024-09-29 16:45:29.983290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.724 qpair failed and we were unable to recover it. 00:37:29.724 [2024-09-29 16:45:29.983446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.724 [2024-09-29 16:45:29.983498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.724 qpair failed and we were unable to recover it. 00:37:29.724 [2024-09-29 16:45:29.983665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.725 [2024-09-29 16:45:29.983708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.725 qpair failed and we were unable to recover it. 00:37:29.725 [2024-09-29 16:45:29.983852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.725 [2024-09-29 16:45:29.983886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.725 qpair failed and we were unable to recover it. 00:37:29.725 [2024-09-29 16:45:29.984011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.725 [2024-09-29 16:45:29.984048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.725 qpair failed and we were unable to recover it. 
00:37:29.725 [2024-09-29 16:45:29.984220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.725 [2024-09-29 16:45:29.984255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.725 qpair failed and we were unable to recover it. 00:37:29.725 [2024-09-29 16:45:29.984383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.725 [2024-09-29 16:45:29.984433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.725 qpair failed and we were unable to recover it. 00:37:29.725 [2024-09-29 16:45:29.984568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.725 [2024-09-29 16:45:29.984601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.725 qpair failed and we were unable to recover it. 00:37:29.725 [2024-09-29 16:45:29.984746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.725 [2024-09-29 16:45:29.984779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.725 qpair failed and we were unable to recover it. 00:37:29.725 [2024-09-29 16:45:29.984915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.725 [2024-09-29 16:45:29.984949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.725 qpair failed and we were unable to recover it. 
00:37:29.725 [2024-09-29 16:45:29.985112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.725 [2024-09-29 16:45:29.985177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.725 qpair failed and we were unable to recover it. 00:37:29.725 [2024-09-29 16:45:29.985310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.725 [2024-09-29 16:45:29.985368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.725 qpair failed and we were unable to recover it. 00:37:29.725 [2024-09-29 16:45:29.985520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.725 [2024-09-29 16:45:29.985554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.725 qpair failed and we were unable to recover it. 00:37:29.725 [2024-09-29 16:45:29.985666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.725 [2024-09-29 16:45:29.985715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.725 qpair failed and we were unable to recover it. 00:37:29.725 [2024-09-29 16:45:29.985853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.725 [2024-09-29 16:45:29.985904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.725 qpair failed and we were unable to recover it. 
00:37:29.725 [2024-09-29 16:45:29.986016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.725 [2024-09-29 16:45:29.986051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.725 qpair failed and we were unable to recover it. 00:37:29.725 [2024-09-29 16:45:29.986220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.725 [2024-09-29 16:45:29.986273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.725 qpair failed and we were unable to recover it. 00:37:29.725 [2024-09-29 16:45:29.986388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.725 [2024-09-29 16:45:29.986422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.725 qpair failed and we were unable to recover it. 00:37:29.725 [2024-09-29 16:45:29.986589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.725 [2024-09-29 16:45:29.986623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.725 qpair failed and we were unable to recover it. 00:37:29.725 [2024-09-29 16:45:29.986778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.725 [2024-09-29 16:45:29.986832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.725 qpair failed and we were unable to recover it. 
00:37:29.725 [2024-09-29 16:45:29.987024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.725 [2024-09-29 16:45:29.987077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.725 qpair failed and we were unable to recover it. 00:37:29.725 [2024-09-29 16:45:29.987302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.725 [2024-09-29 16:45:29.987368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.725 qpair failed and we were unable to recover it. 00:37:29.725 [2024-09-29 16:45:29.987616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.725 [2024-09-29 16:45:29.987687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.725 qpair failed and we were unable to recover it. 00:37:29.725 [2024-09-29 16:45:29.987852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.725 [2024-09-29 16:45:29.987885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.725 qpair failed and we were unable to recover it. 00:37:29.725 [2024-09-29 16:45:29.988054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.725 [2024-09-29 16:45:29.988092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.725 qpair failed and we were unable to recover it. 
00:37:29.725 [2024-09-29 16:45:29.988275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.725 [2024-09-29 16:45:29.988311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.725 qpair failed and we were unable to recover it. 00:37:29.725 [2024-09-29 16:45:29.988447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.725 [2024-09-29 16:45:29.988484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.725 qpair failed and we were unable to recover it. 00:37:29.725 [2024-09-29 16:45:29.988643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.725 [2024-09-29 16:45:29.988685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.725 qpair failed and we were unable to recover it. 00:37:29.725 [2024-09-29 16:45:29.988857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.725 [2024-09-29 16:45:29.988890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.725 qpair failed and we were unable to recover it. 00:37:29.725 [2024-09-29 16:45:29.989052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.725 [2024-09-29 16:45:29.989090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.725 qpair failed and we were unable to recover it. 
00:37:29.725 [2024-09-29 16:45:29.989296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.725 [2024-09-29 16:45:29.989334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.725 qpair failed and we were unable to recover it. 00:37:29.725 [2024-09-29 16:45:29.989458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.725 [2024-09-29 16:45:29.989495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.725 qpair failed and we were unable to recover it. 00:37:29.725 [2024-09-29 16:45:29.989664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.725 [2024-09-29 16:45:29.989705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.725 qpair failed and we were unable to recover it. 00:37:29.725 [2024-09-29 16:45:29.989868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.726 [2024-09-29 16:45:29.989906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.726 qpair failed and we were unable to recover it. 00:37:29.726 [2024-09-29 16:45:29.990055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.726 [2024-09-29 16:45:29.990108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.726 qpair failed and we were unable to recover it. 
00:37:29.726 [2024-09-29 16:45:29.990242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.726 [2024-09-29 16:45:29.990280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.726 qpair failed and we were unable to recover it. 00:37:29.726 [2024-09-29 16:45:29.990433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.726 [2024-09-29 16:45:29.990469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.726 qpair failed and we were unable to recover it. 00:37:29.726 [2024-09-29 16:45:29.990627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.726 [2024-09-29 16:45:29.990664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.726 qpair failed and we were unable to recover it. 00:37:29.726 [2024-09-29 16:45:29.990802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.726 [2024-09-29 16:45:29.990835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.726 qpair failed and we were unable to recover it. 00:37:29.726 [2024-09-29 16:45:29.990988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.726 [2024-09-29 16:45:29.991056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.726 qpair failed and we were unable to recover it. 
00:37:29.726 [2024-09-29 16:45:29.991321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.726 [2024-09-29 16:45:29.991377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.726 qpair failed and we were unable to recover it.
00:37:29.726 [2024-09-29 16:45:29.991531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.726 [2024-09-29 16:45:29.991566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.726 qpair failed and we were unable to recover it.
00:37:29.726 [2024-09-29 16:45:29.991728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.726 [2024-09-29 16:45:29.991784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.726 qpair failed and we were unable to recover it.
00:37:29.726 [2024-09-29 16:45:29.991936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.726 [2024-09-29 16:45:29.991989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.726 qpair failed and we were unable to recover it.
00:37:29.726 [2024-09-29 16:45:29.992100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.726 [2024-09-29 16:45:29.992135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.726 qpair failed and we were unable to recover it.
00:37:29.726 [2024-09-29 16:45:29.992257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.726 [2024-09-29 16:45:29.992291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.726 qpair failed and we were unable to recover it.
00:37:29.726 [2024-09-29 16:45:29.992407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.726 [2024-09-29 16:45:29.992441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.726 qpair failed and we were unable to recover it.
00:37:29.726 [2024-09-29 16:45:29.992580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.726 [2024-09-29 16:45:29.992614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.726 qpair failed and we were unable to recover it.
00:37:29.726 [2024-09-29 16:45:29.992772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.726 [2024-09-29 16:45:29.992807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.726 qpair failed and we were unable to recover it.
00:37:29.726 [2024-09-29 16:45:29.992921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.726 [2024-09-29 16:45:29.992954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.726 qpair failed and we were unable to recover it.
00:37:29.726 [2024-09-29 16:45:29.993095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.726 [2024-09-29 16:45:29.993134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.726 qpair failed and we were unable to recover it.
00:37:29.726 [2024-09-29 16:45:29.993249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.726 [2024-09-29 16:45:29.993283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.726 qpair failed and we were unable to recover it.
00:37:29.726 [2024-09-29 16:45:29.993421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.726 [2024-09-29 16:45:29.993456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.726 qpair failed and we were unable to recover it.
00:37:29.726 [2024-09-29 16:45:29.993593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.726 [2024-09-29 16:45:29.993627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.726 qpair failed and we were unable to recover it.
00:37:29.726 [2024-09-29 16:45:29.993751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.726 [2024-09-29 16:45:29.993786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.726 qpair failed and we were unable to recover it.
00:37:29.726 [2024-09-29 16:45:29.993974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.726 [2024-09-29 16:45:29.994021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.726 qpair failed and we were unable to recover it.
00:37:29.726 [2024-09-29 16:45:29.994147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.726 [2024-09-29 16:45:29.994182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.726 qpair failed and we were unable to recover it.
00:37:29.726 [2024-09-29 16:45:29.994303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.726 [2024-09-29 16:45:29.994337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.726 qpair failed and we were unable to recover it.
00:37:29.726 [2024-09-29 16:45:29.994492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.726 [2024-09-29 16:45:29.994528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.726 qpair failed and we were unable to recover it.
00:37:29.726 [2024-09-29 16:45:29.994696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.726 [2024-09-29 16:45:29.994730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.726 qpair failed and we were unable to recover it.
00:37:29.726 [2024-09-29 16:45:29.994840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.726 [2024-09-29 16:45:29.994874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.726 qpair failed and we were unable to recover it.
00:37:29.726 [2024-09-29 16:45:29.995011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.726 [2024-09-29 16:45:29.995049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.726 qpair failed and we were unable to recover it.
00:37:29.726 [2024-09-29 16:45:29.995238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.726 [2024-09-29 16:45:29.995292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.726 qpair failed and we were unable to recover it.
00:37:29.726 [2024-09-29 16:45:29.995476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.726 [2024-09-29 16:45:29.995529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.726 qpair failed and we were unable to recover it.
00:37:29.726 [2024-09-29 16:45:29.995685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.726 [2024-09-29 16:45:29.995740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.727 qpair failed and we were unable to recover it.
00:37:29.727 [2024-09-29 16:45:29.995893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.727 [2024-09-29 16:45:29.995931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.727 qpair failed and we were unable to recover it.
00:37:29.727 [2024-09-29 16:45:29.996200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.727 [2024-09-29 16:45:29.996237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.727 qpair failed and we were unable to recover it.
00:37:29.727 [2024-09-29 16:45:29.996407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.727 [2024-09-29 16:45:29.996506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.727 qpair failed and we were unable to recover it.
00:37:29.727 [2024-09-29 16:45:29.996682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.727 [2024-09-29 16:45:29.996716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.727 qpair failed and we were unable to recover it.
00:37:29.727 [2024-09-29 16:45:29.996856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.727 [2024-09-29 16:45:29.996889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.727 qpair failed and we were unable to recover it.
00:37:29.727 [2024-09-29 16:45:29.997053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.727 [2024-09-29 16:45:29.997090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.727 qpair failed and we were unable to recover it.
00:37:29.727 [2024-09-29 16:45:29.997216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.727 [2024-09-29 16:45:29.997252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.727 qpair failed and we were unable to recover it.
00:37:29.727 [2024-09-29 16:45:29.997379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.727 [2024-09-29 16:45:29.997415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.727 qpair failed and we were unable to recover it.
00:37:29.727 [2024-09-29 16:45:29.997546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.727 [2024-09-29 16:45:29.997584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.727 qpair failed and we were unable to recover it.
00:37:29.727 [2024-09-29 16:45:29.997750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.727 [2024-09-29 16:45:29.997784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.727 qpair failed and we were unable to recover it.
00:37:29.727 [2024-09-29 16:45:29.997978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.727 [2024-09-29 16:45:29.998015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.727 qpair failed and we were unable to recover it.
00:37:29.727 [2024-09-29 16:45:29.998185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.727 [2024-09-29 16:45:29.998223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.727 qpair failed and we were unable to recover it.
00:37:29.727 [2024-09-29 16:45:29.998419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.727 [2024-09-29 16:45:29.998468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.727 qpair failed and we were unable to recover it.
00:37:29.727 [2024-09-29 16:45:29.998638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.727 [2024-09-29 16:45:29.998682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.727 qpair failed and we were unable to recover it.
00:37:29.727 [2024-09-29 16:45:29.998826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.727 [2024-09-29 16:45:29.998866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.727 qpair failed and we were unable to recover it.
00:37:29.727 [2024-09-29 16:45:29.999064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.727 [2024-09-29 16:45:29.999117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.727 qpair failed and we were unable to recover it.
00:37:29.727 [2024-09-29 16:45:29.999354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.727 [2024-09-29 16:45:29.999409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.727 qpair failed and we were unable to recover it.
00:37:29.727 [2024-09-29 16:45:29.999524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.727 [2024-09-29 16:45:29.999558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.727 qpair failed and we were unable to recover it.
00:37:29.727 [2024-09-29 16:45:29.999724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.727 [2024-09-29 16:45:29.999784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.727 qpair failed and we were unable to recover it.
00:37:29.727 [2024-09-29 16:45:29.999952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.727 [2024-09-29 16:45:30.000005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.727 qpair failed and we were unable to recover it.
00:37:29.727 [2024-09-29 16:45:30.000160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.727 [2024-09-29 16:45:30.000212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.727 qpair failed and we were unable to recover it.
00:37:29.727 [2024-09-29 16:45:30.000329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.727 [2024-09-29 16:45:30.000364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.727 qpair failed and we were unable to recover it.
00:37:29.727 [2024-09-29 16:45:30.000487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.727 [2024-09-29 16:45:30.000523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.727 qpair failed and we were unable to recover it.
00:37:29.727 [2024-09-29 16:45:30.000692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.727 [2024-09-29 16:45:30.000726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.727 qpair failed and we were unable to recover it.
00:37:29.727 [2024-09-29 16:45:30.000865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.727 [2024-09-29 16:45:30.000899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.727 qpair failed and we were unable to recover it.
00:37:29.727 [2024-09-29 16:45:30.001071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.727 [2024-09-29 16:45:30.001133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.727 qpair failed and we were unable to recover it.
00:37:29.727 [2024-09-29 16:45:30.001344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.727 [2024-09-29 16:45:30.001386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.727 qpair failed and we were unable to recover it.
00:37:29.727 [2024-09-29 16:45:30.001556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.727 [2024-09-29 16:45:30.001595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.727 qpair failed and we were unable to recover it.
00:37:29.727 [2024-09-29 16:45:30.001770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.727 [2024-09-29 16:45:30.001806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.727 qpair failed and we were unable to recover it.
00:37:29.727 [2024-09-29 16:45:30.001918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.727 [2024-09-29 16:45:30.001976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.727 qpair failed and we were unable to recover it.
00:37:29.727 [2024-09-29 16:45:30.002128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.727 [2024-09-29 16:45:30.002171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.727 qpair failed and we were unable to recover it.
00:37:29.727 [2024-09-29 16:45:30.002360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.728 [2024-09-29 16:45:30.002398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.728 qpair failed and we were unable to recover it.
00:37:29.728 [2024-09-29 16:45:30.002542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.728 [2024-09-29 16:45:30.002576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.728 qpair failed and we were unable to recover it.
00:37:29.728 [2024-09-29 16:45:30.002711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.728 [2024-09-29 16:45:30.002758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.728 qpair failed and we were unable to recover it.
00:37:29.728 [2024-09-29 16:45:30.002932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.728 [2024-09-29 16:45:30.002972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.728 qpair failed and we were unable to recover it.
00:37:29.728 [2024-09-29 16:45:30.003133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.728 [2024-09-29 16:45:30.003173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.728 qpair failed and we were unable to recover it.
00:37:29.728 [2024-09-29 16:45:30.003305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.728 [2024-09-29 16:45:30.003345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.728 qpair failed and we were unable to recover it.
00:37:29.728 [2024-09-29 16:45:30.003500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.728 [2024-09-29 16:45:30.003539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.728 qpair failed and we were unable to recover it.
00:37:29.728 [2024-09-29 16:45:30.003658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.728 [2024-09-29 16:45:30.003720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.728 qpair failed and we were unable to recover it.
00:37:29.728 [2024-09-29 16:45:30.003853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.728 [2024-09-29 16:45:30.003889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.728 qpair failed and we were unable to recover it.
00:37:29.728 [2024-09-29 16:45:30.004065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.728 [2024-09-29 16:45:30.004129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.728 qpair failed and we were unable to recover it.
00:37:29.728 [2024-09-29 16:45:30.004303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.728 [2024-09-29 16:45:30.004359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.728 qpair failed and we were unable to recover it.
00:37:29.728 [2024-09-29 16:45:30.004502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.728 [2024-09-29 16:45:30.004557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.728 qpair failed and we were unable to recover it.
00:37:29.728 [2024-09-29 16:45:30.004720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.728 [2024-09-29 16:45:30.004759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.728 qpair failed and we were unable to recover it.
00:37:29.728 [2024-09-29 16:45:30.004943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.728 [2024-09-29 16:45:30.004995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.728 qpair failed and we were unable to recover it.
00:37:29.728 [2024-09-29 16:45:30.005187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.728 [2024-09-29 16:45:30.005287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.728 qpair failed and we were unable to recover it.
00:37:29.728 [2024-09-29 16:45:30.005509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.728 [2024-09-29 16:45:30.005567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.728 qpair failed and we were unable to recover it.
00:37:29.728 [2024-09-29 16:45:30.005739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.728 [2024-09-29 16:45:30.005774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.728 qpair failed and we were unable to recover it.
00:37:29.728 [2024-09-29 16:45:30.005943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.728 [2024-09-29 16:45:30.005981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.728 qpair failed and we were unable to recover it.
00:37:29.728 [2024-09-29 16:45:30.006170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.728 [2024-09-29 16:45:30.006209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.728 qpair failed and we were unable to recover it.
00:37:29.728 [2024-09-29 16:45:30.006368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.728 [2024-09-29 16:45:30.006406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.728 qpair failed and we were unable to recover it.
00:37:29.728 [2024-09-29 16:45:30.006536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.728 [2024-09-29 16:45:30.006569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.728 qpair failed and we were unable to recover it.
00:37:29.728 [2024-09-29 16:45:30.006762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.728 [2024-09-29 16:45:30.006810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.728 qpair failed and we were unable to recover it.
00:37:29.728 [2024-09-29 16:45:30.006983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.728 [2024-09-29 16:45:30.007024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.728 qpair failed and we were unable to recover it.
00:37:29.728 [2024-09-29 16:45:30.007143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.728 [2024-09-29 16:45:30.007181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.728 qpair failed and we were unable to recover it.
00:37:29.728 [2024-09-29 16:45:30.007375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.728 [2024-09-29 16:45:30.007409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.728 qpair failed and we were unable to recover it.
00:37:29.728 [2024-09-29 16:45:30.007584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.728 [2024-09-29 16:45:30.007619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.728 qpair failed and we were unable to recover it.
00:37:29.728 [2024-09-29 16:45:30.007744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.728 [2024-09-29 16:45:30.007778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.728 qpair failed and we were unable to recover it.
00:37:29.728 [2024-09-29 16:45:30.007902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.729 [2024-09-29 16:45:30.007936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.729 qpair failed and we were unable to recover it.
00:37:29.729 [2024-09-29 16:45:30.008116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.729 [2024-09-29 16:45:30.008185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.729 qpair failed and we were unable to recover it.
00:37:29.729 [2024-09-29 16:45:30.008330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.729 [2024-09-29 16:45:30.008385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.729 qpair failed and we were unable to recover it.
00:37:29.729 [2024-09-29 16:45:30.008504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.729 [2024-09-29 16:45:30.008539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.729 qpair failed and we were unable to recover it.
00:37:29.729 [2024-09-29 16:45:30.008656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.729 [2024-09-29 16:45:30.008702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.729 qpair failed and we were unable to recover it.
00:37:29.729 [2024-09-29 16:45:30.008828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.729 [2024-09-29 16:45:30.008864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.729 qpair failed and we were unable to recover it.
00:37:29.729 [2024-09-29 16:45:30.009035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.729 [2024-09-29 16:45:30.009069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.729 qpair failed and we were unable to recover it.
00:37:29.729 [2024-09-29 16:45:30.009229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.729 [2024-09-29 16:45:30.009268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.729 qpair failed and we were unable to recover it.
00:37:29.729 [2024-09-29 16:45:30.009437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.729 [2024-09-29 16:45:30.009482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.729 qpair failed and we were unable to recover it.
00:37:29.729 [2024-09-29 16:45:30.009598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.729 [2024-09-29 16:45:30.009633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.729 qpair failed and we were unable to recover it.
00:37:29.729 [2024-09-29 16:45:30.009801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.729 [2024-09-29 16:45:30.009842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.729 qpair failed and we were unable to recover it.
00:37:29.729 [2024-09-29 16:45:30.010000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.729 [2024-09-29 16:45:30.010039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.729 qpair failed and we were unable to recover it.
00:37:29.729 [2024-09-29 16:45:30.010166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.729 [2024-09-29 16:45:30.010203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.729 qpair failed and we were unable to recover it.
00:37:29.729 [2024-09-29 16:45:30.010328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.729 [2024-09-29 16:45:30.010364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.729 qpair failed and we were unable to recover it.
00:37:29.729 [2024-09-29 16:45:30.010500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.729 [2024-09-29 16:45:30.010535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.729 qpair failed and we were unable to recover it.
00:37:29.729 [2024-09-29 16:45:30.010709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.729 [2024-09-29 16:45:30.010757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.729 qpair failed and we were unable to recover it.
00:37:29.729 [2024-09-29 16:45:30.010913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.729 [2024-09-29 16:45:30.010951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.729 qpair failed and we were unable to recover it.
00:37:29.729 [2024-09-29 16:45:30.011103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.729 [2024-09-29 16:45:30.011138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.729 qpair failed and we were unable to recover it.
00:37:29.729 [2024-09-29 16:45:30.011264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.729 [2024-09-29 16:45:30.011299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.729 qpair failed and we were unable to recover it.
00:37:29.729 [2024-09-29 16:45:30.011447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.729 [2024-09-29 16:45:30.011487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.729 qpair failed and we were unable to recover it.
00:37:29.729 [2024-09-29 16:45:30.011635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.729 [2024-09-29 16:45:30.011693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.729 qpair failed and we were unable to recover it.
00:37:29.729 [2024-09-29 16:45:30.011883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.729 [2024-09-29 16:45:30.011920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.729 qpair failed and we were unable to recover it.
00:37:29.729 [2024-09-29 16:45:30.012057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.729 [2024-09-29 16:45:30.012092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.729 qpair failed and we were unable to recover it.
00:37:29.729 [2024-09-29 16:45:30.012254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.729 [2024-09-29 16:45:30.012289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.729 qpair failed and we were unable to recover it.
00:37:29.729 [2024-09-29 16:45:30.012414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.729 [2024-09-29 16:45:30.012451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.729 qpair failed and we were unable to recover it.
00:37:29.729 [2024-09-29 16:45:30.012602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.729 [2024-09-29 16:45:30.012639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.729 qpair failed and we were unable to recover it.
00:37:29.729 [2024-09-29 16:45:30.012803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.729 [2024-09-29 16:45:30.012851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.729 qpair failed and we were unable to recover it.
00:37:29.729 [2024-09-29 16:45:30.012991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.729 [2024-09-29 16:45:30.013044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.729 qpair failed and we were unable to recover it.
00:37:29.729 [2024-09-29 16:45:30.013198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.729 [2024-09-29 16:45:30.013249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.729 qpair failed and we were unable to recover it.
00:37:29.729 [2024-09-29 16:45:30.013388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.729 [2024-09-29 16:45:30.013441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.729 qpair failed and we were unable to recover it.
00:37:29.729 [2024-09-29 16:45:30.013601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.729 [2024-09-29 16:45:30.013648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.729 qpair failed and we were unable to recover it.
00:37:29.729 [2024-09-29 16:45:30.013827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.729 [2024-09-29 16:45:30.013872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.729 qpair failed and we were unable to recover it.
00:37:29.729 [2024-09-29 16:45:30.014029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.730 [2024-09-29 16:45:30.014076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.730 qpair failed and we were unable to recover it. 00:37:29.730 [2024-09-29 16:45:30.014227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.730 [2024-09-29 16:45:30.014279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.730 qpair failed and we were unable to recover it. 00:37:29.730 [2024-09-29 16:45:30.015411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.730 [2024-09-29 16:45:30.015473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.730 qpair failed and we were unable to recover it. 00:37:29.730 [2024-09-29 16:45:30.015607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.730 [2024-09-29 16:45:30.015643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.730 qpair failed and we were unable to recover it. 00:37:29.730 [2024-09-29 16:45:30.015789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.730 [2024-09-29 16:45:30.015837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.730 qpair failed and we were unable to recover it. 
00:37:29.730 [2024-09-29 16:45:30.015991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.730 [2024-09-29 16:45:30.016031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.730 qpair failed and we were unable to recover it. 00:37:29.730 [2024-09-29 16:45:30.016185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.730 [2024-09-29 16:45:30.016222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.730 qpair failed and we were unable to recover it. 00:37:29.730 [2024-09-29 16:45:30.016346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.730 [2024-09-29 16:45:30.016384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.730 qpair failed and we were unable to recover it. 00:37:29.730 [2024-09-29 16:45:30.016527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.730 [2024-09-29 16:45:30.016565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.730 qpair failed and we were unable to recover it. 00:37:29.730 [2024-09-29 16:45:30.016683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.730 [2024-09-29 16:45:30.016718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.730 qpair failed and we were unable to recover it. 
00:37:29.730 [2024-09-29 16:45:30.016879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.730 [2024-09-29 16:45:30.016927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.730 qpair failed and we were unable to recover it. 00:37:29.730 [2024-09-29 16:45:30.017087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.730 [2024-09-29 16:45:30.017123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.730 qpair failed and we were unable to recover it. 00:37:29.730 [2024-09-29 16:45:30.017245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.730 [2024-09-29 16:45:30.017278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.730 qpair failed and we were unable to recover it. 00:37:29.730 [2024-09-29 16:45:30.017418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.730 [2024-09-29 16:45:30.017452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.730 qpair failed and we were unable to recover it. 00:37:29.730 [2024-09-29 16:45:30.017566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.730 [2024-09-29 16:45:30.017609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.730 qpair failed and we were unable to recover it. 
00:37:29.730 [2024-09-29 16:45:30.017779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.730 [2024-09-29 16:45:30.017836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.730 qpair failed and we were unable to recover it. 00:37:29.730 [2024-09-29 16:45:30.017970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.730 [2024-09-29 16:45:30.018006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.730 qpair failed and we were unable to recover it. 00:37:29.730 [2024-09-29 16:45:30.018151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.730 [2024-09-29 16:45:30.018186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.730 qpair failed and we were unable to recover it. 00:37:29.730 [2024-09-29 16:45:30.018300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.730 [2024-09-29 16:45:30.018336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.730 qpair failed and we were unable to recover it. 00:37:29.730 [2024-09-29 16:45:30.018504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.730 [2024-09-29 16:45:30.018538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.730 qpair failed and we were unable to recover it. 
00:37:29.730 [2024-09-29 16:45:30.018706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.730 [2024-09-29 16:45:30.018754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.730 qpair failed and we were unable to recover it. 00:37:29.730 [2024-09-29 16:45:30.018889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.730 [2024-09-29 16:45:30.018926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.730 qpair failed and we were unable to recover it. 00:37:29.730 [2024-09-29 16:45:30.019043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.730 [2024-09-29 16:45:30.019077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.730 qpair failed and we were unable to recover it. 00:37:29.730 [2024-09-29 16:45:30.019207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.730 [2024-09-29 16:45:30.019240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.730 qpair failed and we were unable to recover it. 00:37:29.730 [2024-09-29 16:45:30.019386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.730 [2024-09-29 16:45:30.019419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.730 qpair failed and we were unable to recover it. 
00:37:29.730 [2024-09-29 16:45:30.019526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.730 [2024-09-29 16:45:30.019559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.730 qpair failed and we were unable to recover it. 00:37:29.730 [2024-09-29 16:45:30.019687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.730 [2024-09-29 16:45:30.019723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.730 qpair failed and we were unable to recover it. 00:37:29.730 [2024-09-29 16:45:30.019869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.730 [2024-09-29 16:45:30.019917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.730 qpair failed and we were unable to recover it. 00:37:29.730 [2024-09-29 16:45:30.020039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.730 [2024-09-29 16:45:30.020076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.730 qpair failed and we were unable to recover it. 00:37:29.730 [2024-09-29 16:45:30.020203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.730 [2024-09-29 16:45:30.020238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.730 qpair failed and we were unable to recover it. 
00:37:29.730 [2024-09-29 16:45:30.020381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.730 [2024-09-29 16:45:30.020415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.730 qpair failed and we were unable to recover it. 00:37:29.730 [2024-09-29 16:45:30.020527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.730 [2024-09-29 16:45:30.020561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.730 qpair failed and we were unable to recover it. 00:37:29.730 [2024-09-29 16:45:30.020717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.730 [2024-09-29 16:45:30.020753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.730 qpair failed and we were unable to recover it. 00:37:29.730 [2024-09-29 16:45:30.020872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.730 [2024-09-29 16:45:30.020908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.730 qpair failed and we were unable to recover it. 00:37:29.730 [2024-09-29 16:45:30.021024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.730 [2024-09-29 16:45:30.021059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.730 qpair failed and we were unable to recover it. 
00:37:29.730 [2024-09-29 16:45:30.021190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.730 [2024-09-29 16:45:30.021225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.730 qpair failed and we were unable to recover it. 00:37:29.730 [2024-09-29 16:45:30.021346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.730 [2024-09-29 16:45:30.021381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.730 qpair failed and we were unable to recover it. 00:37:29.731 [2024-09-29 16:45:30.021539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.731 [2024-09-29 16:45:30.021588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.731 qpair failed and we were unable to recover it. 00:37:29.731 [2024-09-29 16:45:30.021722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.731 [2024-09-29 16:45:30.021759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.731 qpair failed and we were unable to recover it. 00:37:29.731 [2024-09-29 16:45:30.021874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.731 [2024-09-29 16:45:30.021908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.731 qpair failed and we were unable to recover it. 
00:37:29.731 [2024-09-29 16:45:30.022051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.731 [2024-09-29 16:45:30.022085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.731 qpair failed and we were unable to recover it. 00:37:29.731 [2024-09-29 16:45:30.022203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.731 [2024-09-29 16:45:30.022237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.731 qpair failed and we were unable to recover it. 00:37:29.731 [2024-09-29 16:45:30.022350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.731 [2024-09-29 16:45:30.022389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.731 qpair failed and we were unable to recover it. 00:37:29.731 [2024-09-29 16:45:30.022535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.731 [2024-09-29 16:45:30.022569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.731 qpair failed and we were unable to recover it. 00:37:29.731 [2024-09-29 16:45:30.022698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.731 [2024-09-29 16:45:30.022748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.731 qpair failed and we were unable to recover it. 
00:37:29.731 [2024-09-29 16:45:30.022891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.731 [2024-09-29 16:45:30.022939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.731 qpair failed and we were unable to recover it. 00:37:29.731 [2024-09-29 16:45:30.023094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.731 [2024-09-29 16:45:30.023130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.731 qpair failed and we were unable to recover it. 00:37:29.731 [2024-09-29 16:45:30.023275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.731 [2024-09-29 16:45:30.023309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.731 qpair failed and we were unable to recover it. 00:37:29.731 [2024-09-29 16:45:30.023425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.731 [2024-09-29 16:45:30.023459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.731 qpair failed and we were unable to recover it. 00:37:29.731 [2024-09-29 16:45:30.023602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.731 [2024-09-29 16:45:30.023635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.731 qpair failed and we were unable to recover it. 
00:37:29.731 [2024-09-29 16:45:30.023783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.731 [2024-09-29 16:45:30.023817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.731 qpair failed and we were unable to recover it. 00:37:29.731 [2024-09-29 16:45:30.023952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.731 [2024-09-29 16:45:30.024000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.731 qpair failed and we were unable to recover it. 00:37:29.731 [2024-09-29 16:45:30.024137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.731 [2024-09-29 16:45:30.024174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.731 qpair failed and we were unable to recover it. 00:37:29.731 [2024-09-29 16:45:30.024298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.731 [2024-09-29 16:45:30.024334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.731 qpair failed and we were unable to recover it. 00:37:29.731 [2024-09-29 16:45:30.024482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.731 [2024-09-29 16:45:30.024516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.731 qpair failed and we were unable to recover it. 
00:37:29.731 [2024-09-29 16:45:30.024662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.731 [2024-09-29 16:45:30.024709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.731 qpair failed and we were unable to recover it. 00:37:29.731 [2024-09-29 16:45:30.024829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.731 [2024-09-29 16:45:30.024863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.731 qpair failed and we were unable to recover it. 00:37:29.731 [2024-09-29 16:45:30.025029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.731 [2024-09-29 16:45:30.025063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.731 qpair failed and we were unable to recover it. 00:37:29.731 [2024-09-29 16:45:30.025182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.731 [2024-09-29 16:45:30.025216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.731 qpair failed and we were unable to recover it. 00:37:29.731 [2024-09-29 16:45:30.025341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.731 [2024-09-29 16:45:30.025374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.731 qpair failed and we were unable to recover it. 
00:37:29.731 [2024-09-29 16:45:30.025485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.731 [2024-09-29 16:45:30.025519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.731 qpair failed and we were unable to recover it. 00:37:29.731 [2024-09-29 16:45:30.025628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.731 [2024-09-29 16:45:30.025661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.731 qpair failed and we were unable to recover it. 00:37:29.731 [2024-09-29 16:45:30.025818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.731 [2024-09-29 16:45:30.025851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.731 qpair failed and we were unable to recover it. 00:37:29.731 [2024-09-29 16:45:30.025976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.731 [2024-09-29 16:45:30.026012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.731 qpair failed and we were unable to recover it. 00:37:29.731 [2024-09-29 16:45:30.026127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.731 [2024-09-29 16:45:30.026162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.731 qpair failed and we were unable to recover it. 
00:37:29.731 [2024-09-29 16:45:30.026305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.731 [2024-09-29 16:45:30.026339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.731 qpair failed and we were unable to recover it. 00:37:29.731 [2024-09-29 16:45:30.026496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.731 [2024-09-29 16:45:30.026531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.731 qpair failed and we were unable to recover it. 00:37:29.732 [2024-09-29 16:45:30.026647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.732 [2024-09-29 16:45:30.026690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.732 qpair failed and we were unable to recover it. 00:37:29.732 [2024-09-29 16:45:30.026817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.732 [2024-09-29 16:45:30.026865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.732 qpair failed and we were unable to recover it. 00:37:29.732 [2024-09-29 16:45:30.027040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.732 [2024-09-29 16:45:30.027076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.732 qpair failed and we were unable to recover it. 
00:37:29.732 [2024-09-29 16:45:30.027218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.732 [2024-09-29 16:45:30.027264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.732 qpair failed and we were unable to recover it. 00:37:29.732 [2024-09-29 16:45:30.027402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.732 [2024-09-29 16:45:30.027435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.732 qpair failed and we were unable to recover it. 00:37:29.732 [2024-09-29 16:45:30.027544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.732 [2024-09-29 16:45:30.027579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.732 qpair failed and we were unable to recover it. 00:37:29.732 [2024-09-29 16:45:30.027710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.732 [2024-09-29 16:45:30.027758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.732 qpair failed and we were unable to recover it. 00:37:29.732 [2024-09-29 16:45:30.027881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.732 [2024-09-29 16:45:30.027916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.732 qpair failed and we were unable to recover it. 
00:37:29.732 [2024-09-29 16:45:30.028055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.732 [2024-09-29 16:45:30.028088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.732 qpair failed and we were unable to recover it. 00:37:29.732 [2024-09-29 16:45:30.028200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.732 [2024-09-29 16:45:30.028234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.732 qpair failed and we were unable to recover it. 00:37:29.732 [2024-09-29 16:45:30.028375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.732 [2024-09-29 16:45:30.028409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.732 qpair failed and we were unable to recover it. 00:37:29.732 [2024-09-29 16:45:30.028549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.732 [2024-09-29 16:45:30.028583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.732 qpair failed and we were unable to recover it. 00:37:29.732 [2024-09-29 16:45:30.028728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.732 [2024-09-29 16:45:30.028762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.732 qpair failed and we were unable to recover it. 
00:37:29.732 [2024-09-29 16:45:30.028879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.732 [2024-09-29 16:45:30.028912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.732 qpair failed and we were unable to recover it. 00:37:29.732 [2024-09-29 16:45:30.029024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.732 [2024-09-29 16:45:30.029057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.732 qpair failed and we were unable to recover it. 00:37:29.732 [2024-09-29 16:45:30.029228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.732 [2024-09-29 16:45:30.029267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.732 qpair failed and we were unable to recover it. 00:37:29.732 [2024-09-29 16:45:30.029415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.732 [2024-09-29 16:45:30.029449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.732 qpair failed and we were unable to recover it. 00:37:29.732 [2024-09-29 16:45:30.029562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.732 [2024-09-29 16:45:30.029595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.732 qpair failed and we were unable to recover it. 
00:37:29.732 [2024-09-29 16:45:30.029736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.732 [2024-09-29 16:45:30.029772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.732 qpair failed and we were unable to recover it. 00:37:29.732 [2024-09-29 16:45:30.029892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.732 [2024-09-29 16:45:30.029926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.732 qpair failed and we were unable to recover it. 00:37:29.732 [2024-09-29 16:45:30.030047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.732 [2024-09-29 16:45:30.030081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.732 qpair failed and we were unable to recover it. 00:37:29.732 [2024-09-29 16:45:30.030224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.732 [2024-09-29 16:45:30.030258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.732 qpair failed and we were unable to recover it. 00:37:29.732 [2024-09-29 16:45:30.030371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.732 [2024-09-29 16:45:30.030405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.732 qpair failed and we were unable to recover it. 
00:37:29.732 [2024-09-29 16:45:30.030519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.732 [2024-09-29 16:45:30.030553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.732 qpair failed and we were unable to recover it. 00:37:29.732 [2024-09-29 16:45:30.030724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.732 [2024-09-29 16:45:30.030758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.732 qpair failed and we were unable to recover it. 00:37:29.732 [2024-09-29 16:45:30.030874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.732 [2024-09-29 16:45:30.030908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.732 qpair failed and we were unable to recover it. 00:37:29.732 [2024-09-29 16:45:30.031027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.732 [2024-09-29 16:45:30.031060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.732 qpair failed and we were unable to recover it. 00:37:29.732 [2024-09-29 16:45:30.031230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.732 [2024-09-29 16:45:30.031263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.732 qpair failed and we were unable to recover it. 
00:37:29.732 [2024-09-29 16:45:30.031374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.732 [2024-09-29 16:45:30.031408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.732 qpair failed and we were unable to recover it. 00:37:29.732 [2024-09-29 16:45:30.031565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.732 [2024-09-29 16:45:30.031598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.732 qpair failed and we were unable to recover it. 00:37:29.732 [2024-09-29 16:45:30.031721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.732 [2024-09-29 16:45:30.031756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.732 qpair failed and we were unable to recover it. 00:37:29.732 [2024-09-29 16:45:30.031867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.732 [2024-09-29 16:45:30.031901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.732 qpair failed and we were unable to recover it. 00:37:29.732 [2024-09-29 16:45:30.032018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.732 [2024-09-29 16:45:30.032052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.732 qpair failed and we were unable to recover it. 
00:37:29.732 [2024-09-29 16:45:30.032223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.732 [2024-09-29 16:45:30.032258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.732 qpair failed and we were unable to recover it. 00:37:29.732 [2024-09-29 16:45:30.032367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.733 [2024-09-29 16:45:30.032400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.733 qpair failed and we were unable to recover it. 00:37:29.733 [2024-09-29 16:45:30.032556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.733 [2024-09-29 16:45:30.032604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.733 qpair failed and we were unable to recover it. 00:37:29.733 [2024-09-29 16:45:30.032793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.733 [2024-09-29 16:45:30.032828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.733 qpair failed and we were unable to recover it. 00:37:29.733 [2024-09-29 16:45:30.032961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.733 [2024-09-29 16:45:30.033008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.733 qpair failed and we were unable to recover it. 
00:37:29.733 [2024-09-29 16:45:30.033165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.733 [2024-09-29 16:45:30.033202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.733 qpair failed and we were unable to recover it. 00:37:29.733 [2024-09-29 16:45:30.033322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.733 [2024-09-29 16:45:30.033357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.733 qpair failed and we were unable to recover it. 00:37:29.733 [2024-09-29 16:45:30.033468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.733 [2024-09-29 16:45:30.033502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.733 qpair failed and we were unable to recover it. 00:37:29.733 [2024-09-29 16:45:30.033616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.733 [2024-09-29 16:45:30.033652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.733 qpair failed and we were unable to recover it. 00:37:29.733 [2024-09-29 16:45:30.033789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.733 [2024-09-29 16:45:30.033826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.733 qpair failed and we were unable to recover it. 
00:37:29.733 [2024-09-29 16:45:30.033982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.733 [2024-09-29 16:45:30.034017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.733 qpair failed and we were unable to recover it. 00:37:29.733 [2024-09-29 16:45:30.034146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.733 [2024-09-29 16:45:30.034193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.733 qpair failed and we were unable to recover it. 00:37:29.733 [2024-09-29 16:45:30.034328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.733 [2024-09-29 16:45:30.034364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.733 qpair failed and we were unable to recover it. 00:37:29.733 [2024-09-29 16:45:30.034483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.733 [2024-09-29 16:45:30.034517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.733 qpair failed and we were unable to recover it. 00:37:29.733 [2024-09-29 16:45:30.034696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.733 [2024-09-29 16:45:30.034731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.733 qpair failed and we were unable to recover it. 
00:37:29.733 [2024-09-29 16:45:30.034849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.733 [2024-09-29 16:45:30.034885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.733 qpair failed and we were unable to recover it. 00:37:29.733 [2024-09-29 16:45:30.035017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.733 [2024-09-29 16:45:30.035052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.733 qpair failed and we were unable to recover it. 00:37:29.733 [2024-09-29 16:45:30.035173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.733 [2024-09-29 16:45:30.035208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.733 qpair failed and we were unable to recover it. 00:37:29.733 [2024-09-29 16:45:30.035348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.733 [2024-09-29 16:45:30.035382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.733 qpair failed and we were unable to recover it. 00:37:29.733 [2024-09-29 16:45:30.035496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.733 [2024-09-29 16:45:30.035530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.733 qpair failed and we were unable to recover it. 
00:37:29.733 [2024-09-29 16:45:30.035678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.733 [2024-09-29 16:45:30.035712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.733 qpair failed and we were unable to recover it. 00:37:29.733 [2024-09-29 16:45:30.035856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.733 [2024-09-29 16:45:30.035891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.733 qpair failed and we were unable to recover it. 00:37:29.733 [2024-09-29 16:45:30.036041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.733 [2024-09-29 16:45:30.036082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.733 qpair failed and we were unable to recover it. 00:37:29.733 [2024-09-29 16:45:30.036201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.733 [2024-09-29 16:45:30.036236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.733 qpair failed and we were unable to recover it. 00:37:29.733 [2024-09-29 16:45:30.036377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.733 [2024-09-29 16:45:30.036410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.733 qpair failed and we were unable to recover it. 
00:37:29.733 [2024-09-29 16:45:30.036527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.733 [2024-09-29 16:45:30.036562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.733 qpair failed and we were unable to recover it. 00:37:29.733 [2024-09-29 16:45:30.036722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.733 [2024-09-29 16:45:30.036771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.733 qpair failed and we were unable to recover it. 00:37:29.733 [2024-09-29 16:45:30.036889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.733 [2024-09-29 16:45:30.036923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.733 qpair failed and we were unable to recover it. 00:37:29.734 [2024-09-29 16:45:30.037037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.734 [2024-09-29 16:45:30.037071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.734 qpair failed and we were unable to recover it. 00:37:29.734 [2024-09-29 16:45:30.037289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.734 [2024-09-29 16:45:30.037323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.734 qpair failed and we were unable to recover it. 
00:37:29.734 [2024-09-29 16:45:30.037496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.734 [2024-09-29 16:45:30.037530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.734 qpair failed and we were unable to recover it. 00:37:29.734 [2024-09-29 16:45:30.037677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.734 [2024-09-29 16:45:30.037712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.734 qpair failed and we were unable to recover it. 00:37:29.734 [2024-09-29 16:45:30.037859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.734 [2024-09-29 16:45:30.037893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.734 qpair failed and we were unable to recover it. 00:37:29.734 [2024-09-29 16:45:30.038011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.734 [2024-09-29 16:45:30.038046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.734 qpair failed and we were unable to recover it. 00:37:29.734 [2024-09-29 16:45:30.038221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.734 [2024-09-29 16:45:30.038255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.734 qpair failed and we were unable to recover it. 
00:37:29.734 [2024-09-29 16:45:30.038403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.734 [2024-09-29 16:45:30.038440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.734 qpair failed and we were unable to recover it. 00:37:29.734 [2024-09-29 16:45:30.038589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.734 [2024-09-29 16:45:30.038623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.734 qpair failed and we were unable to recover it. 00:37:29.734 [2024-09-29 16:45:30.038793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.734 [2024-09-29 16:45:30.038841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.734 qpair failed and we were unable to recover it. 00:37:29.734 [2024-09-29 16:45:30.038991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.734 [2024-09-29 16:45:30.039026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.734 qpair failed and we were unable to recover it. 00:37:29.734 [2024-09-29 16:45:30.039143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.734 [2024-09-29 16:45:30.039177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.734 qpair failed and we were unable to recover it. 
00:37:29.734 [2024-09-29 16:45:30.039347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.734 [2024-09-29 16:45:30.039381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.734 qpair failed and we were unable to recover it. 00:37:29.734 [2024-09-29 16:45:30.039496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.734 [2024-09-29 16:45:30.039530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.734 qpair failed and we were unable to recover it. 00:37:29.734 [2024-09-29 16:45:30.039661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.734 [2024-09-29 16:45:30.039719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.734 qpair failed and we were unable to recover it. 00:37:29.734 [2024-09-29 16:45:30.039850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.734 [2024-09-29 16:45:30.039886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.734 qpair failed and we were unable to recover it. 00:37:29.734 [2024-09-29 16:45:30.040034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.734 [2024-09-29 16:45:30.040070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.734 qpair failed and we were unable to recover it. 
00:37:29.734 [2024-09-29 16:45:30.040213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.734 [2024-09-29 16:45:30.040248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.734 qpair failed and we were unable to recover it. 00:37:29.734 [2024-09-29 16:45:30.040358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.734 [2024-09-29 16:45:30.040393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.734 qpair failed and we were unable to recover it. 00:37:29.734 [2024-09-29 16:45:30.040509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.734 [2024-09-29 16:45:30.040542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.734 qpair failed and we were unable to recover it. 00:37:29.734 [2024-09-29 16:45:30.040714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.734 [2024-09-29 16:45:30.040750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.734 qpair failed and we were unable to recover it. 00:37:29.734 [2024-09-29 16:45:30.040869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.734 [2024-09-29 16:45:30.040903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.734 qpair failed and we were unable to recover it. 
00:37:29.734 [2024-09-29 16:45:30.041045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.734 [2024-09-29 16:45:30.041079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.734 qpair failed and we were unable to recover it. 00:37:29.734 [2024-09-29 16:45:30.041227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.734 [2024-09-29 16:45:30.041261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.734 qpair failed and we were unable to recover it. 00:37:29.734 [2024-09-29 16:45:30.041408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.734 [2024-09-29 16:45:30.041442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.734 qpair failed and we were unable to recover it. 00:37:29.734 [2024-09-29 16:45:30.041623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.734 [2024-09-29 16:45:30.041657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.734 qpair failed and we were unable to recover it. 00:37:29.734 [2024-09-29 16:45:30.041790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.734 [2024-09-29 16:45:30.041825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.734 qpair failed and we were unable to recover it. 
00:37:29.734 [2024-09-29 16:45:30.041963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.734 [2024-09-29 16:45:30.042011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.734 qpair failed and we were unable to recover it. 00:37:29.734 [2024-09-29 16:45:30.042179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.734 [2024-09-29 16:45:30.042227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.734 qpair failed and we were unable to recover it. 00:37:29.734 [2024-09-29 16:45:30.042373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.734 [2024-09-29 16:45:30.042410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.734 qpair failed and we were unable to recover it. 00:37:29.734 [2024-09-29 16:45:30.042551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.734 [2024-09-29 16:45:30.042585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.734 qpair failed and we were unable to recover it. 00:37:29.734 [2024-09-29 16:45:30.042702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.734 [2024-09-29 16:45:30.042737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.734 qpair failed and we were unable to recover it. 
00:37:29.734 [2024-09-29 16:45:30.042853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.734 [2024-09-29 16:45:30.042889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.734 qpair failed and we were unable to recover it. 00:37:29.734 [2024-09-29 16:45:30.043010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.734 [2024-09-29 16:45:30.043046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.734 qpair failed and we were unable to recover it. 00:37:29.734 [2024-09-29 16:45:30.043257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.735 [2024-09-29 16:45:30.043300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.735 qpair failed and we were unable to recover it. 00:37:29.735 [2024-09-29 16:45:30.043441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.735 [2024-09-29 16:45:30.043475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.735 qpair failed and we were unable to recover it. 00:37:29.735 [2024-09-29 16:45:30.043595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.735 [2024-09-29 16:45:30.043631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.735 qpair failed and we were unable to recover it. 
00:37:29.735 [2024-09-29 16:45:30.043750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.735 [2024-09-29 16:45:30.043786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.735 qpair failed and we were unable to recover it. 00:37:29.735 [2024-09-29 16:45:30.043923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.735 [2024-09-29 16:45:30.043971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.735 qpair failed and we were unable to recover it. 00:37:29.735 [2024-09-29 16:45:30.044159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.735 [2024-09-29 16:45:30.044194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.735 qpair failed and we were unable to recover it. 00:37:29.735 [2024-09-29 16:45:30.044347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.735 [2024-09-29 16:45:30.044395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.735 qpair failed and we were unable to recover it. 00:37:29.735 [2024-09-29 16:45:30.044517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.735 [2024-09-29 16:45:30.044552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.735 qpair failed and we were unable to recover it. 
00:37:29.735 [2024-09-29 16:45:30.044720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.735 [2024-09-29 16:45:30.044755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.735 qpair failed and we were unable to recover it. 00:37:29.735 [2024-09-29 16:45:30.044869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.735 [2024-09-29 16:45:30.044904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.735 qpair failed and we were unable to recover it. 00:37:29.735 [2024-09-29 16:45:30.045077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.735 [2024-09-29 16:45:30.045112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.735 qpair failed and we were unable to recover it. 00:37:29.735 [2024-09-29 16:45:30.045277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.735 [2024-09-29 16:45:30.045324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.735 qpair failed and we were unable to recover it. 00:37:29.735 [2024-09-29 16:45:30.045458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.735 [2024-09-29 16:45:30.045506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.735 qpair failed and we were unable to recover it. 
00:37:29.738 [2024-09-29 16:45:30.069082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.738 [2024-09-29 16:45:30.069136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.738 qpair failed and we were unable to recover it. 00:37:29.738 [2024-09-29 16:45:30.069376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.738 [2024-09-29 16:45:30.069435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.738 qpair failed and we were unable to recover it. 00:37:29.738 [2024-09-29 16:45:30.069592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.738 [2024-09-29 16:45:30.069630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.738 qpair failed and we were unable to recover it. 00:37:29.738 [2024-09-29 16:45:30.069778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.738 [2024-09-29 16:45:30.069812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.738 qpair failed and we were unable to recover it. 00:37:29.738 [2024-09-29 16:45:30.069977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.738 [2024-09-29 16:45:30.070015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.738 qpair failed and we were unable to recover it. 
00:37:29.738 [2024-09-29 16:45:30.070201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.738 [2024-09-29 16:45:30.070266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.738 qpair failed and we were unable to recover it. 00:37:29.739 [2024-09-29 16:45:30.070418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.739 [2024-09-29 16:45:30.070455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.739 qpair failed and we were unable to recover it. 00:37:29.739 [2024-09-29 16:45:30.070630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.739 [2024-09-29 16:45:30.070665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.739 qpair failed and we were unable to recover it. 00:37:29.739 [2024-09-29 16:45:30.070821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.739 [2024-09-29 16:45:30.070856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.739 qpair failed and we were unable to recover it. 00:37:29.739 [2024-09-29 16:45:30.071098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.739 [2024-09-29 16:45:30.071170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.739 qpair failed and we were unable to recover it. 
00:37:29.739 [2024-09-29 16:45:30.071425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.739 [2024-09-29 16:45:30.071483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.739 qpair failed and we were unable to recover it. 00:37:29.739 [2024-09-29 16:45:30.071697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.739 [2024-09-29 16:45:30.071733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.739 qpair failed and we were unable to recover it. 00:37:29.739 [2024-09-29 16:45:30.071854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.739 [2024-09-29 16:45:30.071891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.739 qpair failed and we were unable to recover it. 00:37:29.739 [2024-09-29 16:45:30.072055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.739 [2024-09-29 16:45:30.072090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.739 qpair failed and we were unable to recover it. 00:37:29.739 [2024-09-29 16:45:30.072228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.739 [2024-09-29 16:45:30.072262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.739 qpair failed and we were unable to recover it. 
00:37:29.739 [2024-09-29 16:45:30.072443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.739 [2024-09-29 16:45:30.072478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.739 qpair failed and we were unable to recover it. 00:37:29.739 [2024-09-29 16:45:30.072615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.739 [2024-09-29 16:45:30.072663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.739 qpair failed and we were unable to recover it. 00:37:29.739 [2024-09-29 16:45:30.072843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.739 [2024-09-29 16:45:30.072891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.739 qpair failed and we were unable to recover it. 00:37:29.739 [2024-09-29 16:45:30.073039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.739 [2024-09-29 16:45:30.073079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.739 qpair failed and we were unable to recover it. 00:37:29.739 [2024-09-29 16:45:30.073257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.739 [2024-09-29 16:45:30.073295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.739 qpair failed and we were unable to recover it. 
00:37:29.739 [2024-09-29 16:45:30.073595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.739 [2024-09-29 16:45:30.073659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.739 qpair failed and we were unable to recover it. 00:37:29.739 [2024-09-29 16:45:30.073829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.739 [2024-09-29 16:45:30.073863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.739 qpair failed and we were unable to recover it. 00:37:29.739 [2024-09-29 16:45:30.074027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.739 [2024-09-29 16:45:30.074076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.739 qpair failed and we were unable to recover it. 00:37:29.739 [2024-09-29 16:45:30.074292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.739 [2024-09-29 16:45:30.074349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.739 qpair failed and we were unable to recover it. 00:37:29.739 [2024-09-29 16:45:30.074571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.739 [2024-09-29 16:45:30.074607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.739 qpair failed and we were unable to recover it. 
00:37:29.739 [2024-09-29 16:45:30.074763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.739 [2024-09-29 16:45:30.074798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.739 qpair failed and we were unable to recover it. 00:37:29.739 [2024-09-29 16:45:30.074962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.739 [2024-09-29 16:45:30.075013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.739 qpair failed and we were unable to recover it. 00:37:29.739 [2024-09-29 16:45:30.075140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.739 [2024-09-29 16:45:30.075191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.739 qpair failed and we were unable to recover it. 00:37:29.739 [2024-09-29 16:45:30.075333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.739 [2024-09-29 16:45:30.075367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.739 qpair failed and we were unable to recover it. 00:37:29.739 [2024-09-29 16:45:30.075478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.739 [2024-09-29 16:45:30.075512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.739 qpair failed and we were unable to recover it. 
00:37:29.739 [2024-09-29 16:45:30.075687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.739 [2024-09-29 16:45:30.075735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.739 qpair failed and we were unable to recover it. 00:37:29.739 [2024-09-29 16:45:30.075868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.739 [2024-09-29 16:45:30.075905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.739 qpair failed and we were unable to recover it. 00:37:29.739 [2024-09-29 16:45:30.076055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.739 [2024-09-29 16:45:30.076089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.739 qpair failed and we were unable to recover it. 00:37:29.739 [2024-09-29 16:45:30.076234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.739 [2024-09-29 16:45:30.076268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.739 qpair failed and we were unable to recover it. 00:37:29.739 [2024-09-29 16:45:30.076410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.739 [2024-09-29 16:45:30.076445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.739 qpair failed and we were unable to recover it. 
00:37:29.739 [2024-09-29 16:45:30.076595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.739 [2024-09-29 16:45:30.076629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.739 qpair failed and we were unable to recover it. 00:37:29.739 [2024-09-29 16:45:30.076805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.739 [2024-09-29 16:45:30.076841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.739 qpair failed and we were unable to recover it. 00:37:29.739 [2024-09-29 16:45:30.077015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.739 [2024-09-29 16:45:30.077075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.739 qpair failed and we were unable to recover it. 00:37:29.740 [2024-09-29 16:45:30.077227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.740 [2024-09-29 16:45:30.077287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.740 qpair failed and we were unable to recover it. 00:37:29.740 [2024-09-29 16:45:30.077483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.740 [2024-09-29 16:45:30.077521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.740 qpair failed and we were unable to recover it. 
00:37:29.740 [2024-09-29 16:45:30.077655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.740 [2024-09-29 16:45:30.077696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.740 qpair failed and we were unable to recover it. 00:37:29.740 [2024-09-29 16:45:30.077865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.740 [2024-09-29 16:45:30.077912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.740 qpair failed and we were unable to recover it. 00:37:29.740 [2024-09-29 16:45:30.078072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.740 [2024-09-29 16:45:30.078113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.740 qpair failed and we were unable to recover it. 00:37:29.740 [2024-09-29 16:45:30.078297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.740 [2024-09-29 16:45:30.078353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.740 qpair failed and we were unable to recover it. 00:37:29.740 [2024-09-29 16:45:30.078502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.740 [2024-09-29 16:45:30.078554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.740 qpair failed and we were unable to recover it. 
00:37:29.740 [2024-09-29 16:45:30.078694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.740 [2024-09-29 16:45:30.078745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.740 qpair failed and we were unable to recover it. 00:37:29.740 [2024-09-29 16:45:30.078883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.740 [2024-09-29 16:45:30.078916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.740 qpair failed and we were unable to recover it. 00:37:29.740 [2024-09-29 16:45:30.079041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.740 [2024-09-29 16:45:30.079077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.740 qpair failed and we were unable to recover it. 00:37:29.740 [2024-09-29 16:45:30.079223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.740 [2024-09-29 16:45:30.079259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.740 qpair failed and we were unable to recover it. 00:37:29.740 [2024-09-29 16:45:30.079415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.740 [2024-09-29 16:45:30.079452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.740 qpair failed and we were unable to recover it. 
00:37:29.740 [2024-09-29 16:45:30.079583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.740 [2024-09-29 16:45:30.079620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.740 qpair failed and we were unable to recover it. 00:37:29.740 [2024-09-29 16:45:30.079796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.740 [2024-09-29 16:45:30.079845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.740 qpair failed and we were unable to recover it. 00:37:29.740 [2024-09-29 16:45:30.080002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.740 [2024-09-29 16:45:30.080069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.740 qpair failed and we were unable to recover it. 00:37:29.740 [2024-09-29 16:45:30.080246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.740 [2024-09-29 16:45:30.080300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.740 qpair failed and we were unable to recover it. 00:37:29.740 [2024-09-29 16:45:30.080495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.740 [2024-09-29 16:45:30.080549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.740 qpair failed and we were unable to recover it. 
00:37:29.740 [2024-09-29 16:45:30.080678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.740 [2024-09-29 16:45:30.080715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.740 qpair failed and we were unable to recover it. 00:37:29.740 [2024-09-29 16:45:30.080850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.740 [2024-09-29 16:45:30.080902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.740 qpair failed and we were unable to recover it. 00:37:29.740 [2024-09-29 16:45:30.081086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.740 [2024-09-29 16:45:30.081145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.740 qpair failed and we were unable to recover it. 00:37:29.740 [2024-09-29 16:45:30.081339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.740 [2024-09-29 16:45:30.081377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.740 qpair failed and we were unable to recover it. 00:37:29.740 [2024-09-29 16:45:30.081533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.740 [2024-09-29 16:45:30.081570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.740 qpair failed and we were unable to recover it. 
00:37:29.740 [2024-09-29 16:45:30.081709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.740 [2024-09-29 16:45:30.081743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.740 qpair failed and we were unable to recover it. 00:37:29.740 [2024-09-29 16:45:30.081898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.740 [2024-09-29 16:45:30.081949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.740 qpair failed and we were unable to recover it. 00:37:29.740 [2024-09-29 16:45:30.082125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.740 [2024-09-29 16:45:30.082181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.740 qpair failed and we were unable to recover it. 00:37:29.740 [2024-09-29 16:45:30.082371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.740 [2024-09-29 16:45:30.082421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.740 qpair failed and we were unable to recover it. 00:37:29.740 [2024-09-29 16:45:30.082593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.740 [2024-09-29 16:45:30.082630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.740 qpair failed and we were unable to recover it. 
00:37:29.740 [2024-09-29 16:45:30.082787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.740 [2024-09-29 16:45:30.082835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.740 qpair failed and we were unable to recover it. 00:37:29.740 [2024-09-29 16:45:30.082988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.741 [2024-09-29 16:45:30.083029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.741 qpair failed and we were unable to recover it. 00:37:29.741 [2024-09-29 16:45:30.083193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.741 [2024-09-29 16:45:30.083231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.741 qpair failed and we were unable to recover it. 00:37:29.741 [2024-09-29 16:45:30.083419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.741 [2024-09-29 16:45:30.083473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.741 qpair failed and we were unable to recover it. 00:37:29.741 [2024-09-29 16:45:30.083608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.741 [2024-09-29 16:45:30.083644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.741 qpair failed and we were unable to recover it. 
00:37:29.741 [2024-09-29 16:45:30.083796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.741 [2024-09-29 16:45:30.083844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.741 qpair failed and we were unable to recover it. 00:37:29.741 [2024-09-29 16:45:30.083957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.741 [2024-09-29 16:45:30.083991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.741 qpair failed and we were unable to recover it. 00:37:29.741 [2024-09-29 16:45:30.084150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.741 [2024-09-29 16:45:30.084205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.741 qpair failed and we were unable to recover it. 00:37:29.741 [2024-09-29 16:45:30.084391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.741 [2024-09-29 16:45:30.084446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.741 qpair failed and we were unable to recover it. 00:37:29.741 [2024-09-29 16:45:30.084571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.741 [2024-09-29 16:45:30.084609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.741 qpair failed and we were unable to recover it. 
00:37:29.741 [2024-09-29 16:45:30.084798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.741 [2024-09-29 16:45:30.084846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.741 qpair failed and we were unable to recover it. 00:37:29.741 [2024-09-29 16:45:30.084999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.741 [2024-09-29 16:45:30.085034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.741 qpair failed and we were unable to recover it. 00:37:29.741 [2024-09-29 16:45:30.085204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.741 [2024-09-29 16:45:30.085247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.741 qpair failed and we were unable to recover it. 00:37:29.741 [2024-09-29 16:45:30.085376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.741 [2024-09-29 16:45:30.085413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.741 qpair failed and we were unable to recover it. 00:37:29.741 [2024-09-29 16:45:30.085601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.741 [2024-09-29 16:45:30.085637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.741 qpair failed and we were unable to recover it. 
00:37:29.741 [2024-09-29 16:45:30.085805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.741 [2024-09-29 16:45:30.085841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.741 qpair failed and we were unable to recover it. 00:37:29.741 [2024-09-29 16:45:30.086000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.741 [2024-09-29 16:45:30.086037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.741 qpair failed and we were unable to recover it. 00:37:29.741 [2024-09-29 16:45:30.086223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.741 [2024-09-29 16:45:30.086274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.741 qpair failed and we were unable to recover it. 00:37:29.741 [2024-09-29 16:45:30.086485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.741 [2024-09-29 16:45:30.086538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.741 qpair failed and we were unable to recover it. 00:37:29.741 [2024-09-29 16:45:30.086660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.741 [2024-09-29 16:45:30.086720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.741 qpair failed and we were unable to recover it. 
00:37:29.741 [2024-09-29 16:45:30.086842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.741 [2024-09-29 16:45:30.086875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.741 qpair failed and we were unable to recover it. 00:37:29.741 [2024-09-29 16:45:30.087049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.741 [2024-09-29 16:45:30.087100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.741 qpair failed and we were unable to recover it. 00:37:29.741 [2024-09-29 16:45:30.087274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.741 [2024-09-29 16:45:30.087326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.741 qpair failed and we were unable to recover it. 00:37:29.741 [2024-09-29 16:45:30.087506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.741 [2024-09-29 16:45:30.087543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.741 qpair failed and we were unable to recover it. 00:37:29.741 [2024-09-29 16:45:30.087753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.741 [2024-09-29 16:45:30.087801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.741 qpair failed and we were unable to recover it. 
00:37:29.741 [2024-09-29 16:45:30.087942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.741 [2024-09-29 16:45:30.087989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.741 qpair failed and we were unable to recover it. 00:37:29.741 [2024-09-29 16:45:30.088215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.741 [2024-09-29 16:45:30.088271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.741 qpair failed and we were unable to recover it. 00:37:29.741 [2024-09-29 16:45:30.088403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.741 [2024-09-29 16:45:30.088442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.741 qpair failed and we were unable to recover it. 00:37:29.741 [2024-09-29 16:45:30.088603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.741 [2024-09-29 16:45:30.088655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.741 qpair failed and we were unable to recover it. 00:37:29.741 [2024-09-29 16:45:30.088849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.741 [2024-09-29 16:45:30.088898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.741 qpair failed and we were unable to recover it. 
00:37:29.741 [2024-09-29 16:45:30.089069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.741 [2024-09-29 16:45:30.089126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.741 qpair failed and we were unable to recover it. 00:37:29.741 [2024-09-29 16:45:30.089292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.741 [2024-09-29 16:45:30.089345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.741 qpair failed and we were unable to recover it. 00:37:29.741 [2024-09-29 16:45:30.089459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.741 [2024-09-29 16:45:30.089493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.741 qpair failed and we were unable to recover it. 00:37:29.741 [2024-09-29 16:45:30.089616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.741 [2024-09-29 16:45:30.089652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.741 qpair failed and we were unable to recover it. 00:37:29.741 [2024-09-29 16:45:30.089812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.741 [2024-09-29 16:45:30.089845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.741 qpair failed and we were unable to recover it. 
00:37:29.741 [2024-09-29 16:45:30.089966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.741 [2024-09-29 16:45:30.089999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.741 qpair failed and we were unable to recover it. 00:37:29.741 [2024-09-29 16:45:30.090156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.742 [2024-09-29 16:45:30.090209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.742 qpair failed and we were unable to recover it. 00:37:29.742 [2024-09-29 16:45:30.090342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.742 [2024-09-29 16:45:30.090378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.742 qpair failed and we were unable to recover it. 00:37:29.742 [2024-09-29 16:45:30.090540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.742 [2024-09-29 16:45:30.090577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.742 qpair failed and we were unable to recover it. 00:37:29.742 [2024-09-29 16:45:30.090720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.742 [2024-09-29 16:45:30.090756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.742 qpair failed and we were unable to recover it. 
00:37:29.742 [2024-09-29 16:45:30.090876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.742 [2024-09-29 16:45:30.090911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.742 qpair failed and we were unable to recover it. 00:37:29.742 [2024-09-29 16:45:30.091043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.742 [2024-09-29 16:45:30.091097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.742 qpair failed and we were unable to recover it. 00:37:29.742 [2024-09-29 16:45:30.091298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.742 [2024-09-29 16:45:30.091335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.742 qpair failed and we were unable to recover it. 00:37:29.742 [2024-09-29 16:45:30.091460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.742 [2024-09-29 16:45:30.091495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.742 qpair failed and we were unable to recover it. 00:37:29.742 [2024-09-29 16:45:30.091621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.742 [2024-09-29 16:45:30.091659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.742 qpair failed and we were unable to recover it. 
00:37:29.742 [2024-09-29 16:45:30.091842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.742 [2024-09-29 16:45:30.091877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.742 qpair failed and we were unable to recover it. 00:37:29.742 [2024-09-29 16:45:30.092028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.742 [2024-09-29 16:45:30.092065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.742 qpair failed and we were unable to recover it. 00:37:29.742 [2024-09-29 16:45:30.092188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.742 [2024-09-29 16:45:30.092224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.742 qpair failed and we were unable to recover it. 00:37:29.742 [2024-09-29 16:45:30.092385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.742 [2024-09-29 16:45:30.092437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.742 qpair failed and we were unable to recover it. 00:37:29.742 [2024-09-29 16:45:30.092556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.742 [2024-09-29 16:45:30.092593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.742 qpair failed and we were unable to recover it. 
00:37:29.742 [2024-09-29 16:45:30.092760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.742 [2024-09-29 16:45:30.092795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.742 qpair failed and we were unable to recover it. 00:37:29.742 [2024-09-29 16:45:30.092965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.742 [2024-09-29 16:45:30.093001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.742 qpair failed and we were unable to recover it. 00:37:29.742 [2024-09-29 16:45:30.093150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.742 [2024-09-29 16:45:30.093190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.742 qpair failed and we were unable to recover it. 00:37:29.742 [2024-09-29 16:45:30.093328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.742 [2024-09-29 16:45:30.093376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.742 qpair failed and we were unable to recover it. 00:37:29.742 [2024-09-29 16:45:30.093553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.742 [2024-09-29 16:45:30.093590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.742 qpair failed and we were unable to recover it. 
00:37:29.742 [2024-09-29 16:45:30.093732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.742 [2024-09-29 16:45:30.093766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.742 qpair failed and we were unable to recover it. 00:37:29.742 [2024-09-29 16:45:30.093876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.742 [2024-09-29 16:45:30.093909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.742 qpair failed and we were unable to recover it. 00:37:29.742 [2024-09-29 16:45:30.094071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.742 [2024-09-29 16:45:30.094106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.742 qpair failed and we were unable to recover it. 00:37:29.742 [2024-09-29 16:45:30.094253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.742 [2024-09-29 16:45:30.094290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.742 qpair failed and we were unable to recover it. 00:37:29.742 [2024-09-29 16:45:30.094439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.742 [2024-09-29 16:45:30.094489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.742 qpair failed and we were unable to recover it. 
00:37:29.742 [2024-09-29 16:45:30.094638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.742 [2024-09-29 16:45:30.094677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.742 qpair failed and we were unable to recover it. 00:37:29.742 [2024-09-29 16:45:30.094822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.742 [2024-09-29 16:45:30.094856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.742 qpair failed and we were unable to recover it. 00:37:29.742 [2024-09-29 16:45:30.094995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.742 [2024-09-29 16:45:30.095030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.742 qpair failed and we were unable to recover it. 00:37:29.742 [2024-09-29 16:45:30.095207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.742 [2024-09-29 16:45:30.095242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.742 qpair failed and we were unable to recover it. 00:37:29.742 [2024-09-29 16:45:30.095388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.742 [2024-09-29 16:45:30.095422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.742 qpair failed and we were unable to recover it. 
00:37:29.742 [2024-09-29 16:45:30.095549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.742 [2024-09-29 16:45:30.095585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.742 qpair failed and we were unable to recover it. 00:37:29.742 [2024-09-29 16:45:30.095775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.742 [2024-09-29 16:45:30.095824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.742 qpair failed and we were unable to recover it. 00:37:29.742 [2024-09-29 16:45:30.095975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.742 [2024-09-29 16:45:30.096011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.742 qpair failed and we were unable to recover it. 00:37:29.742 [2024-09-29 16:45:30.096200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.742 [2024-09-29 16:45:30.096260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.742 qpair failed and we were unable to recover it. 00:37:29.742 [2024-09-29 16:45:30.096390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.742 [2024-09-29 16:45:30.096427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.742 qpair failed and we were unable to recover it. 
00:37:29.742 [2024-09-29 16:45:30.096630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.742 [2024-09-29 16:45:30.096698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.742 qpair failed and we were unable to recover it. 00:37:29.742 [2024-09-29 16:45:30.096852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.742 [2024-09-29 16:45:30.096887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.743 qpair failed and we were unable to recover it. 00:37:29.743 [2024-09-29 16:45:30.097042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.743 [2024-09-29 16:45:30.097078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.743 qpair failed and we were unable to recover it. 00:37:29.743 [2024-09-29 16:45:30.097257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.743 [2024-09-29 16:45:30.097292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.743 qpair failed and we were unable to recover it. 00:37:29.743 [2024-09-29 16:45:30.097408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.743 [2024-09-29 16:45:30.097443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.743 qpair failed and we were unable to recover it. 
00:37:29.743 [2024-09-29 16:45:30.097593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.743 [2024-09-29 16:45:30.097627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.743 qpair failed and we were unable to recover it. 00:37:29.743 [2024-09-29 16:45:30.097797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.743 [2024-09-29 16:45:30.097831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.743 qpair failed and we were unable to recover it. 00:37:29.743 [2024-09-29 16:45:30.097963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.743 [2024-09-29 16:45:30.098012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.743 qpair failed and we were unable to recover it. 00:37:29.743 [2024-09-29 16:45:30.098142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.743 [2024-09-29 16:45:30.098178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.743 qpair failed and we were unable to recover it. 00:37:29.743 [2024-09-29 16:45:30.098370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.743 [2024-09-29 16:45:30.098418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.743 qpair failed and we were unable to recover it. 
00:37:29.743 [2024-09-29 16:45:30.098545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.743 [2024-09-29 16:45:30.098579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.743 qpair failed and we were unable to recover it. 00:37:29.743 [2024-09-29 16:45:30.098718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.743 [2024-09-29 16:45:30.098752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.743 qpair failed and we were unable to recover it. 00:37:29.743 [2024-09-29 16:45:30.098897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.743 [2024-09-29 16:45:30.098929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.743 qpair failed and we were unable to recover it. 00:37:29.743 [2024-09-29 16:45:30.099072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.743 [2024-09-29 16:45:30.099105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.743 qpair failed and we were unable to recover it. 00:37:29.743 [2024-09-29 16:45:30.099243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.743 [2024-09-29 16:45:30.099276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.743 qpair failed and we were unable to recover it. 
00:37:29.743 [2024-09-29 16:45:30.099393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.743 [2024-09-29 16:45:30.099427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.743 qpair failed and we were unable to recover it. 00:37:29.743 [2024-09-29 16:45:30.099571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.743 [2024-09-29 16:45:30.099605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.743 qpair failed and we were unable to recover it. 00:37:29.743 [2024-09-29 16:45:30.099750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.743 [2024-09-29 16:45:30.099783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.743 qpair failed and we were unable to recover it. 00:37:29.743 [2024-09-29 16:45:30.099928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.743 [2024-09-29 16:45:30.099961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.743 qpair failed and we were unable to recover it. 00:37:29.743 [2024-09-29 16:45:30.100083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.743 [2024-09-29 16:45:30.100116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.743 qpair failed and we were unable to recover it. 
00:37:29.743 [2024-09-29 16:45:30.100255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.743 [2024-09-29 16:45:30.100288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.743 qpair failed and we were unable to recover it. 00:37:29.743 [2024-09-29 16:45:30.100405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.743 [2024-09-29 16:45:30.100438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.743 qpair failed and we were unable to recover it. 00:37:29.743 [2024-09-29 16:45:30.100572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.743 [2024-09-29 16:45:30.100609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.743 qpair failed and we were unable to recover it. 00:37:29.743 [2024-09-29 16:45:30.100755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.743 [2024-09-29 16:45:30.100804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.743 qpair failed and we were unable to recover it. 00:37:29.743 [2024-09-29 16:45:30.100950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.743 [2024-09-29 16:45:30.100985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.743 qpair failed and we were unable to recover it. 
00:37:29.743 [2024-09-29 16:45:30.101124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.743 [2024-09-29 16:45:30.101159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.743 qpair failed and we were unable to recover it. 00:37:29.743 [2024-09-29 16:45:30.101329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.743 [2024-09-29 16:45:30.101363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.743 qpair failed and we were unable to recover it. 00:37:29.743 [2024-09-29 16:45:30.101474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.743 [2024-09-29 16:45:30.101507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.743 qpair failed and we were unable to recover it. 00:37:29.743 [2024-09-29 16:45:30.101647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.743 [2024-09-29 16:45:30.101689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.743 qpair failed and we were unable to recover it. 00:37:29.743 [2024-09-29 16:45:30.101837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.743 [2024-09-29 16:45:30.101871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.743 qpair failed and we were unable to recover it. 
00:37:29.743 [2024-09-29 16:45:30.101988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.743 [2024-09-29 16:45:30.102022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.743 qpair failed and we were unable to recover it. 00:37:29.743 [2024-09-29 16:45:30.102132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.743 [2024-09-29 16:45:30.102166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.743 qpair failed and we were unable to recover it. 00:37:29.743 [2024-09-29 16:45:30.102308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.743 [2024-09-29 16:45:30.102342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.743 qpair failed and we were unable to recover it. 00:37:29.743 [2024-09-29 16:45:30.102477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.743 [2024-09-29 16:45:30.102510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.743 qpair failed and we were unable to recover it. 00:37:29.743 [2024-09-29 16:45:30.102619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.743 [2024-09-29 16:45:30.102653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.743 qpair failed and we were unable to recover it. 
00:37:29.743 [2024-09-29 16:45:30.102772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.743 [2024-09-29 16:45:30.102806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.743 qpair failed and we were unable to recover it.
00:37:29.744 [2024-09-29 16:45:30.102927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.744 [2024-09-29 16:45:30.102961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.744 qpair failed and we were unable to recover it.
00:37:29.744 [2024-09-29 16:45:30.103076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.744 [2024-09-29 16:45:30.103111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.744 qpair failed and we were unable to recover it.
00:37:29.744 [2024-09-29 16:45:30.103234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.744 [2024-09-29 16:45:30.103269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.744 qpair failed and we were unable to recover it.
00:37:29.744 [2024-09-29 16:45:30.103404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.744 [2024-09-29 16:45:30.103438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.744 qpair failed and we were unable to recover it.
00:37:29.744 [2024-09-29 16:45:30.103582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.744 [2024-09-29 16:45:30.103616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.744 qpair failed and we were unable to recover it.
00:37:29.744 [2024-09-29 16:45:30.103761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.744 [2024-09-29 16:45:30.103795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.744 qpair failed and we were unable to recover it.
00:37:29.744 [2024-09-29 16:45:30.103923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.744 [2024-09-29 16:45:30.103958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.744 qpair failed and we were unable to recover it.
00:37:29.744 [2024-09-29 16:45:30.104072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.744 [2024-09-29 16:45:30.104117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.744 qpair failed and we were unable to recover it.
00:37:29.744 [2024-09-29 16:45:30.104268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.744 [2024-09-29 16:45:30.104301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.744 qpair failed and we were unable to recover it.
00:37:29.744 [2024-09-29 16:45:30.104441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.744 [2024-09-29 16:45:30.104475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.744 qpair failed and we were unable to recover it.
00:37:29.744 [2024-09-29 16:45:30.104617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.744 [2024-09-29 16:45:30.104651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.744 qpair failed and we were unable to recover it.
00:37:29.744 [2024-09-29 16:45:30.104777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.744 [2024-09-29 16:45:30.104810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.744 qpair failed and we were unable to recover it.
00:37:29.744 [2024-09-29 16:45:30.104922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.744 [2024-09-29 16:45:30.104955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.744 qpair failed and we were unable to recover it.
00:37:29.744 [2024-09-29 16:45:30.105118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.744 [2024-09-29 16:45:30.105166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.744 qpair failed and we were unable to recover it.
00:37:29.744 [2024-09-29 16:45:30.105314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.744 [2024-09-29 16:45:30.105350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.744 qpair failed and we were unable to recover it.
00:37:29.744 [2024-09-29 16:45:30.105495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.744 [2024-09-29 16:45:30.105529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.744 qpair failed and we were unable to recover it.
00:37:29.744 [2024-09-29 16:45:30.105679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.744 [2024-09-29 16:45:30.105715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.744 qpair failed and we were unable to recover it.
00:37:29.744 [2024-09-29 16:45:30.105829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.744 [2024-09-29 16:45:30.105865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.744 qpair failed and we were unable to recover it.
00:37:29.744 [2024-09-29 16:45:30.105986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.744 [2024-09-29 16:45:30.106020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.744 qpair failed and we were unable to recover it.
00:37:29.744 [2024-09-29 16:45:30.106134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.744 [2024-09-29 16:45:30.106169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.744 qpair failed and we were unable to recover it.
00:37:29.744 [2024-09-29 16:45:30.106294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.744 [2024-09-29 16:45:30.106337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.744 qpair failed and we were unable to recover it.
00:37:29.744 [2024-09-29 16:45:30.106507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.744 [2024-09-29 16:45:30.106555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.744 qpair failed and we were unable to recover it.
00:37:29.744 [2024-09-29 16:45:30.106708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.744 [2024-09-29 16:45:30.106744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.744 qpair failed and we were unable to recover it.
00:37:29.744 [2024-09-29 16:45:30.106863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.744 [2024-09-29 16:45:30.106898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.744 qpair failed and we were unable to recover it.
00:37:29.744 [2024-09-29 16:45:30.107042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.744 [2024-09-29 16:45:30.107076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.745 qpair failed and we were unable to recover it.
00:37:29.745 [2024-09-29 16:45:30.107200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.745 [2024-09-29 16:45:30.107235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.745 qpair failed and we were unable to recover it.
00:37:29.745 [2024-09-29 16:45:30.107414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.745 [2024-09-29 16:45:30.107456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.745 qpair failed and we were unable to recover it.
00:37:29.745 [2024-09-29 16:45:30.107609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.745 [2024-09-29 16:45:30.107645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.745 qpair failed and we were unable to recover it.
00:37:29.745 [2024-09-29 16:45:30.107774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.745 [2024-09-29 16:45:30.107811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.745 qpair failed and we were unable to recover it.
00:37:29.745 [2024-09-29 16:45:30.107931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.745 [2024-09-29 16:45:30.107965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.745 qpair failed and we were unable to recover it.
00:37:29.745 [2024-09-29 16:45:30.108107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.745 [2024-09-29 16:45:30.108141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.745 qpair failed and we were unable to recover it.
00:37:29.745 [2024-09-29 16:45:30.108276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.745 [2024-09-29 16:45:30.108310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.745 qpair failed and we were unable to recover it.
00:37:29.745 [2024-09-29 16:45:30.108437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.745 [2024-09-29 16:45:30.108470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.745 qpair failed and we were unable to recover it.
00:37:29.745 [2024-09-29 16:45:30.108620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.745 [2024-09-29 16:45:30.108656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.745 qpair failed and we were unable to recover it.
00:37:29.745 [2024-09-29 16:45:30.108812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.745 [2024-09-29 16:45:30.108848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.745 qpair failed and we were unable to recover it.
00:37:29.745 [2024-09-29 16:45:30.108973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.745 [2024-09-29 16:45:30.109009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.745 qpair failed and we were unable to recover it.
00:37:29.745 [2024-09-29 16:45:30.109124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.745 [2024-09-29 16:45:30.109159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.745 qpair failed and we were unable to recover it.
00:37:29.745 [2024-09-29 16:45:30.109274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.745 [2024-09-29 16:45:30.109308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.745 qpair failed and we were unable to recover it.
00:37:29.745 [2024-09-29 16:45:30.109433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.745 [2024-09-29 16:45:30.109468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.745 qpair failed and we were unable to recover it.
00:37:29.745 [2024-09-29 16:45:30.109615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.745 [2024-09-29 16:45:30.109650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.745 qpair failed and we were unable to recover it.
00:37:29.745 [2024-09-29 16:45:30.109795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.745 [2024-09-29 16:45:30.109831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.745 qpair failed and we were unable to recover it.
00:37:29.745 [2024-09-29 16:45:30.109976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.745 [2024-09-29 16:45:30.110011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.745 qpair failed and we were unable to recover it.
00:37:29.745 [2024-09-29 16:45:30.110125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.745 [2024-09-29 16:45:30.110160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.745 qpair failed and we were unable to recover it.
00:37:29.745 [2024-09-29 16:45:30.110295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.745 [2024-09-29 16:45:30.110330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.745 qpair failed and we were unable to recover it.
00:37:29.745 [2024-09-29 16:45:30.110468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.745 [2024-09-29 16:45:30.110501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.745 qpair failed and we were unable to recover it.
00:37:29.745 [2024-09-29 16:45:30.110656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.745 [2024-09-29 16:45:30.110712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.745 qpair failed and we were unable to recover it.
00:37:29.745 [2024-09-29 16:45:30.110855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.745 [2024-09-29 16:45:30.110902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.745 qpair failed and we were unable to recover it.
00:37:29.745 [2024-09-29 16:45:30.111083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.745 [2024-09-29 16:45:30.111119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.745 qpair failed and we were unable to recover it.
00:37:29.745 [2024-09-29 16:45:30.111273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.745 [2024-09-29 16:45:30.111308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.745 qpair failed and we were unable to recover it.
00:37:29.745 [2024-09-29 16:45:30.111481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.745 [2024-09-29 16:45:30.111515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.745 qpair failed and we were unable to recover it.
00:37:29.745 [2024-09-29 16:45:30.111636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.745 [2024-09-29 16:45:30.111677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.745 qpair failed and we were unable to recover it.
00:37:29.745 [2024-09-29 16:45:30.111826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.745 [2024-09-29 16:45:30.111861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.745 qpair failed and we were unable to recover it.
00:37:29.745 [2024-09-29 16:45:30.112010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.745 [2024-09-29 16:45:30.112047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.746 qpair failed and we were unable to recover it.
00:37:29.746 [2024-09-29 16:45:30.112198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.746 [2024-09-29 16:45:30.112233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.746 qpair failed and we were unable to recover it.
00:37:29.746 [2024-09-29 16:45:30.112382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.746 [2024-09-29 16:45:30.112414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.746 qpair failed and we were unable to recover it.
00:37:29.746 [2024-09-29 16:45:30.112586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.746 [2024-09-29 16:45:30.112620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.746 qpair failed and we were unable to recover it.
00:37:29.746 [2024-09-29 16:45:30.112767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.746 [2024-09-29 16:45:30.112802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.746 qpair failed and we were unable to recover it.
00:37:29.746 [2024-09-29 16:45:30.112915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.746 [2024-09-29 16:45:30.112948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.746 qpair failed and we were unable to recover it.
00:37:29.746 [2024-09-29 16:45:30.113090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.746 [2024-09-29 16:45:30.113123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.746 qpair failed and we were unable to recover it.
00:37:29.746 [2024-09-29 16:45:30.113260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.746 [2024-09-29 16:45:30.113293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.746 qpair failed and we were unable to recover it.
00:37:29.746 [2024-09-29 16:45:30.113438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.746 [2024-09-29 16:45:30.113471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.746 qpair failed and we were unable to recover it.
00:37:29.746 [2024-09-29 16:45:30.113586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.746 [2024-09-29 16:45:30.113619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.746 qpair failed and we were unable to recover it.
00:37:29.746 [2024-09-29 16:45:30.113777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.746 [2024-09-29 16:45:30.113811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.746 qpair failed and we were unable to recover it.
00:37:29.746 [2024-09-29 16:45:30.113942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.746 [2024-09-29 16:45:30.113990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.746 qpair failed and we were unable to recover it.
00:37:29.746 [2024-09-29 16:45:30.114142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.746 [2024-09-29 16:45:30.114178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.746 qpair failed and we were unable to recover it.
00:37:29.746 [2024-09-29 16:45:30.114305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.746 [2024-09-29 16:45:30.114354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.746 qpair failed and we were unable to recover it.
00:37:29.746 [2024-09-29 16:45:30.114508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.746 [2024-09-29 16:45:30.114551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.746 qpair failed and we were unable to recover it.
00:37:29.746 [2024-09-29 16:45:30.114698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.746 [2024-09-29 16:45:30.114732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.746 qpair failed and we were unable to recover it.
00:37:29.746 [2024-09-29 16:45:30.114850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.746 [2024-09-29 16:45:30.114883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.746 qpair failed and we were unable to recover it.
00:37:29.746 [2024-09-29 16:45:30.115031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.746 [2024-09-29 16:45:30.115065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.746 qpair failed and we were unable to recover it.
00:37:29.746 [2024-09-29 16:45:30.115241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.746 [2024-09-29 16:45:30.115274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.746 qpair failed and we were unable to recover it.
00:37:29.746 [2024-09-29 16:45:30.115445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.746 [2024-09-29 16:45:30.115478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.746 qpair failed and we were unable to recover it.
00:37:29.746 [2024-09-29 16:45:30.115597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.746 [2024-09-29 16:45:30.115631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.746 qpair failed and we were unable to recover it.
00:37:29.746 [2024-09-29 16:45:30.115759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.746 [2024-09-29 16:45:30.115792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.746 qpair failed and we were unable to recover it.
00:37:29.746 [2024-09-29 16:45:30.115900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.746 [2024-09-29 16:45:30.115934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.746 qpair failed and we were unable to recover it.
00:37:29.746 [2024-09-29 16:45:30.116076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.746 [2024-09-29 16:45:30.116109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.746 qpair failed and we were unable to recover it.
00:37:29.746 [2024-09-29 16:45:30.116248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.746 [2024-09-29 16:45:30.116283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.746 qpair failed and we were unable to recover it.
00:37:29.746 [2024-09-29 16:45:30.116445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.746 [2024-09-29 16:45:30.116513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.746 qpair failed and we were unable to recover it.
00:37:29.746 [2024-09-29 16:45:30.116645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.746 [2024-09-29 16:45:30.116690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.746 qpair failed and we were unable to recover it.
00:37:29.746 [2024-09-29 16:45:30.116852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.746 [2024-09-29 16:45:30.116899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.746 qpair failed and we were unable to recover it.
00:37:29.746 [2024-09-29 16:45:30.117132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.746 [2024-09-29 16:45:30.117165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.746 qpair failed and we were unable to recover it.
00:37:29.746 [2024-09-29 16:45:30.117294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.746 [2024-09-29 16:45:30.117355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.746 qpair failed and we were unable to recover it.
00:37:29.746 [2024-09-29 16:45:30.117518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.746 [2024-09-29 16:45:30.117556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.746 qpair failed and we were unable to recover it.
00:37:29.746 [2024-09-29 16:45:30.117726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.746 [2024-09-29 16:45:30.117760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.746 qpair failed and we were unable to recover it.
00:37:29.746 [2024-09-29 16:45:30.117912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.746 [2024-09-29 16:45:30.117946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.746 qpair failed and we were unable to recover it.
00:37:29.746 [2024-09-29 16:45:30.118107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.747 [2024-09-29 16:45:30.118166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.747 qpair failed and we were unable to recover it.
00:37:29.747 [2024-09-29 16:45:30.118338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.747 [2024-09-29 16:45:30.118375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.747 qpair failed and we were unable to recover it.
00:37:29.747 [2024-09-29 16:45:30.118533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.747 [2024-09-29 16:45:30.118570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.747 qpair failed and we were unable to recover it.
00:37:29.747 [2024-09-29 16:45:30.118730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.747 [2024-09-29 16:45:30.118778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.747 qpair failed and we were unable to recover it.
00:37:29.747 [2024-09-29 16:45:30.118965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.747 [2024-09-29 16:45:30.119003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.747 qpair failed and we were unable to recover it.
00:37:29.747 [2024-09-29 16:45:30.119166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.747 [2024-09-29 16:45:30.119229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.747 qpair failed and we were unable to recover it.
00:37:29.747 [2024-09-29 16:45:30.119422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.747 [2024-09-29 16:45:30.119474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.747 qpair failed and we were unable to recover it.
00:37:29.747 [2024-09-29 16:45:30.119587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.747 [2024-09-29 16:45:30.119622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.747 qpair failed and we were unable to recover it.
00:37:29.747 [2024-09-29 16:45:30.119771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.747 [2024-09-29 16:45:30.119820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.747 qpair failed and we were unable to recover it.
00:37:29.747 [2024-09-29 16:45:30.119969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.747 [2024-09-29 16:45:30.120004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.747 qpair failed and we were unable to recover it.
00:37:29.747 [2024-09-29 16:45:30.120139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.747 [2024-09-29 16:45:30.120176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.747 qpair failed and we were unable to recover it.
00:37:29.747 [2024-09-29 16:45:30.120310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.747 [2024-09-29 16:45:30.120348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.747 qpair failed and we were unable to recover it.
00:37:29.747 [2024-09-29 16:45:30.120494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.747 [2024-09-29 16:45:30.120531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.747 qpair failed and we were unable to recover it.
00:37:29.747 [2024-09-29 16:45:30.120724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.747 [2024-09-29 16:45:30.120793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.747 qpair failed and we were unable to recover it.
00:37:29.747 [2024-09-29 16:45:30.120918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.747 [2024-09-29 16:45:30.120953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.747 qpair failed and we were unable to recover it.
00:37:29.747 [2024-09-29 16:45:30.121115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.747 [2024-09-29 16:45:30.121167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.747 qpair failed and we were unable to recover it.
00:37:29.747 [2024-09-29 16:45:30.121304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.747 [2024-09-29 16:45:30.121356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.747 qpair failed and we were unable to recover it.
00:37:29.747 [2024-09-29 16:45:30.121467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.747 [2024-09-29 16:45:30.121501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.747 qpair failed and we were unable to recover it.
00:37:29.747 [2024-09-29 16:45:30.121659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.747 [2024-09-29 16:45:30.121721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.747 qpair failed and we were unable to recover it.
00:37:29.747 [2024-09-29 16:45:30.121838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.747 [2024-09-29 16:45:30.121872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.747 qpair failed and we were unable to recover it.
00:37:29.747 [2024-09-29 16:45:30.122001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.747 [2024-09-29 16:45:30.122054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.747 qpair failed and we were unable to recover it.
00:37:29.747 [2024-09-29 16:45:30.122207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.747 [2024-09-29 16:45:30.122243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.747 qpair failed and we were unable to recover it.
00:37:29.747 [2024-09-29 16:45:30.122446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.747 [2024-09-29 16:45:30.122505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.747 qpair failed and we were unable to recover it.
00:37:29.747 [2024-09-29 16:45:30.122650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.747 [2024-09-29 16:45:30.122729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.747 qpair failed and we were unable to recover it.
00:37:29.747 [2024-09-29 16:45:30.122859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.747 [2024-09-29 16:45:30.122895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.747 qpair failed and we were unable to recover it.
00:37:29.747 [2024-09-29 16:45:30.123059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.747 [2024-09-29 16:45:30.123112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.747 qpair failed and we were unable to recover it.
00:37:29.747 [2024-09-29 16:45:30.123356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.747 [2024-09-29 16:45:30.123416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.747 qpair failed and we were unable to recover it.
00:37:29.747 [2024-09-29 16:45:30.123551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.747 [2024-09-29 16:45:30.123588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.747 qpair failed and we were unable to recover it.
00:37:29.747 [2024-09-29 16:45:30.123786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.747 [2024-09-29 16:45:30.123821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.747 qpair failed and we were unable to recover it.
00:37:29.747 [2024-09-29 16:45:30.123993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.747 [2024-09-29 16:45:30.124045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.747 qpair failed and we were unable to recover it. 00:37:29.747 [2024-09-29 16:45:30.124199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.747 [2024-09-29 16:45:30.124236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.747 qpair failed and we were unable to recover it. 00:37:29.747 [2024-09-29 16:45:30.124394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.747 [2024-09-29 16:45:30.124432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.747 qpair failed and we were unable to recover it. 00:37:29.747 [2024-09-29 16:45:30.124614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.747 [2024-09-29 16:45:30.124661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.747 qpair failed and we were unable to recover it. 00:37:29.747 [2024-09-29 16:45:30.124829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.747 [2024-09-29 16:45:30.124864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.747 qpair failed and we were unable to recover it. 
00:37:29.748 [2024-09-29 16:45:30.124986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.748 [2024-09-29 16:45:30.125020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.748 qpair failed and we were unable to recover it. 00:37:29.748 [2024-09-29 16:45:30.125187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.748 [2024-09-29 16:45:30.125258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.748 qpair failed and we were unable to recover it. 00:37:29.748 [2024-09-29 16:45:30.125487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.748 [2024-09-29 16:45:30.125545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.748 qpair failed and we were unable to recover it. 00:37:29.748 [2024-09-29 16:45:30.125709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.748 [2024-09-29 16:45:30.125743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.748 qpair failed and we were unable to recover it. 00:37:29.748 [2024-09-29 16:45:30.125858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.748 [2024-09-29 16:45:30.125891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.748 qpair failed and we were unable to recover it. 
00:37:29.748 [2024-09-29 16:45:30.126029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.748 [2024-09-29 16:45:30.126063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.748 qpair failed and we were unable to recover it. 00:37:29.748 [2024-09-29 16:45:30.126213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.748 [2024-09-29 16:45:30.126247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.748 qpair failed and we were unable to recover it. 00:37:29.748 [2024-09-29 16:45:30.126389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.748 [2024-09-29 16:45:30.126422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.748 qpair failed and we were unable to recover it. 00:37:29.748 [2024-09-29 16:45:30.126540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.748 [2024-09-29 16:45:30.126574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.748 qpair failed and we were unable to recover it. 00:37:29.748 [2024-09-29 16:45:30.126690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.748 [2024-09-29 16:45:30.126724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.748 qpair failed and we were unable to recover it. 
00:37:29.748 [2024-09-29 16:45:30.126877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.748 [2024-09-29 16:45:30.126925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.748 qpair failed and we were unable to recover it. 00:37:29.748 [2024-09-29 16:45:30.127068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.748 [2024-09-29 16:45:30.127109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.748 qpair failed and we were unable to recover it. 00:37:29.748 [2024-09-29 16:45:30.127240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.748 [2024-09-29 16:45:30.127280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.748 qpair failed and we were unable to recover it. 00:37:29.748 [2024-09-29 16:45:30.127442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.748 [2024-09-29 16:45:30.127480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.748 qpair failed and we were unable to recover it. 00:37:29.748 [2024-09-29 16:45:30.127613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.748 [2024-09-29 16:45:30.127652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.748 qpair failed and we were unable to recover it. 
00:37:29.748 [2024-09-29 16:45:30.127817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.748 [2024-09-29 16:45:30.127865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.748 qpair failed and we were unable to recover it. 00:37:29.748 [2024-09-29 16:45:30.128045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.748 [2024-09-29 16:45:30.128079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.748 qpair failed and we were unable to recover it. 00:37:29.748 [2024-09-29 16:45:30.128218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.748 [2024-09-29 16:45:30.128252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.748 qpair failed and we were unable to recover it. 00:37:29.748 [2024-09-29 16:45:30.128421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.748 [2024-09-29 16:45:30.128455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.748 qpair failed and we were unable to recover it. 00:37:29.748 [2024-09-29 16:45:30.128565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.748 [2024-09-29 16:45:30.128598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.748 qpair failed and we were unable to recover it. 
00:37:29.748 [2024-09-29 16:45:30.128709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.748 [2024-09-29 16:45:30.128742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.748 qpair failed and we were unable to recover it. 00:37:29.748 [2024-09-29 16:45:30.128892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.748 [2024-09-29 16:45:30.128927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.748 qpair failed and we were unable to recover it. 00:37:29.748 [2024-09-29 16:45:30.129043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.748 [2024-09-29 16:45:30.129077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.748 qpair failed and we were unable to recover it. 00:37:29.748 [2024-09-29 16:45:30.129223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.748 [2024-09-29 16:45:30.129256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.748 qpair failed and we were unable to recover it. 00:37:29.748 [2024-09-29 16:45:30.129399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.748 [2024-09-29 16:45:30.129433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.748 qpair failed and we were unable to recover it. 
00:37:29.748 [2024-09-29 16:45:30.129620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.748 [2024-09-29 16:45:30.129667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.748 qpair failed and we were unable to recover it. 00:37:29.748 [2024-09-29 16:45:30.129803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.748 [2024-09-29 16:45:30.129840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.748 qpair failed and we were unable to recover it. 00:37:29.748 [2024-09-29 16:45:30.129988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.748 [2024-09-29 16:45:30.130023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.748 qpair failed and we were unable to recover it. 00:37:29.748 [2024-09-29 16:45:30.130145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.748 [2024-09-29 16:45:30.130178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.748 qpair failed and we were unable to recover it. 00:37:29.748 [2024-09-29 16:45:30.130346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.748 [2024-09-29 16:45:30.130414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.748 qpair failed and we were unable to recover it. 
00:37:29.748 [2024-09-29 16:45:30.130653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.748 [2024-09-29 16:45:30.130701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.748 qpair failed and we were unable to recover it. 00:37:29.748 [2024-09-29 16:45:30.130884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.748 [2024-09-29 16:45:30.130932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.748 qpair failed and we were unable to recover it. 00:37:29.748 [2024-09-29 16:45:30.131077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.748 [2024-09-29 16:45:30.131129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.749 qpair failed and we were unable to recover it. 00:37:29.749 [2024-09-29 16:45:30.131359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.749 [2024-09-29 16:45:30.131438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.749 qpair failed and we were unable to recover it. 00:37:29.749 [2024-09-29 16:45:30.131613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.749 [2024-09-29 16:45:30.131647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.749 qpair failed and we were unable to recover it. 
00:37:29.749 [2024-09-29 16:45:30.131794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.749 [2024-09-29 16:45:30.131828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.749 qpair failed and we were unable to recover it. 00:37:29.749 [2024-09-29 16:45:30.131960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.749 [2024-09-29 16:45:30.132008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.749 qpair failed and we were unable to recover it. 00:37:29.749 [2024-09-29 16:45:30.132188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.749 [2024-09-29 16:45:30.132244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.749 qpair failed and we were unable to recover it. 00:37:29.749 [2024-09-29 16:45:30.132469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.749 [2024-09-29 16:45:30.132504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.749 qpair failed and we were unable to recover it. 00:37:29.749 [2024-09-29 16:45:30.132653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.749 [2024-09-29 16:45:30.132694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.749 qpair failed and we were unable to recover it. 
00:37:29.749 [2024-09-29 16:45:30.132839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.749 [2024-09-29 16:45:30.132873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.749 qpair failed and we were unable to recover it. 00:37:29.749 [2024-09-29 16:45:30.133005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.749 [2024-09-29 16:45:30.133056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.749 qpair failed and we were unable to recover it. 00:37:29.749 [2024-09-29 16:45:30.133170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.749 [2024-09-29 16:45:30.133204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.749 qpair failed and we were unable to recover it. 00:37:29.749 [2024-09-29 16:45:30.133317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.749 [2024-09-29 16:45:30.133352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.749 qpair failed and we were unable to recover it. 00:37:29.749 [2024-09-29 16:45:30.133473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.749 [2024-09-29 16:45:30.133507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.749 qpair failed and we were unable to recover it. 
00:37:29.749 [2024-09-29 16:45:30.133629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.749 [2024-09-29 16:45:30.133664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.749 qpair failed and we were unable to recover it. 00:37:29.749 [2024-09-29 16:45:30.133786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.749 [2024-09-29 16:45:30.133820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.749 qpair failed and we were unable to recover it. 00:37:29.749 [2024-09-29 16:45:30.134010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.749 [2024-09-29 16:45:30.134058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.749 qpair failed and we were unable to recover it. 00:37:29.749 [2024-09-29 16:45:30.134200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.749 [2024-09-29 16:45:30.134235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.749 qpair failed and we were unable to recover it. 00:37:29.749 [2024-09-29 16:45:30.134352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.749 [2024-09-29 16:45:30.134386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.749 qpair failed and we were unable to recover it. 
00:37:29.749 [2024-09-29 16:45:30.134505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.749 [2024-09-29 16:45:30.134538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.749 qpair failed and we were unable to recover it. 00:37:29.749 [2024-09-29 16:45:30.134702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.749 [2024-09-29 16:45:30.134737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.749 qpair failed and we were unable to recover it. 00:37:29.749 [2024-09-29 16:45:30.134845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.749 [2024-09-29 16:45:30.134878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.749 qpair failed and we were unable to recover it. 00:37:29.749 [2024-09-29 16:45:30.135041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.749 [2024-09-29 16:45:30.135076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.749 qpair failed and we were unable to recover it. 00:37:29.749 [2024-09-29 16:45:30.135209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.749 [2024-09-29 16:45:30.135267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.749 qpair failed and we were unable to recover it. 
00:37:29.749 [2024-09-29 16:45:30.135429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.749 [2024-09-29 16:45:30.135482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.749 qpair failed and we were unable to recover it. 00:37:29.749 [2024-09-29 16:45:30.135637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.749 [2024-09-29 16:45:30.135699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.749 qpair failed and we were unable to recover it. 00:37:29.749 [2024-09-29 16:45:30.135911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.749 [2024-09-29 16:45:30.135960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.749 qpair failed and we were unable to recover it. 00:37:29.749 [2024-09-29 16:45:30.136122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.749 [2024-09-29 16:45:30.136176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.749 qpair failed and we were unable to recover it. 00:37:29.749 [2024-09-29 16:45:30.136351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.749 [2024-09-29 16:45:30.136385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.749 qpair failed and we were unable to recover it. 
00:37:29.749 [2024-09-29 16:45:30.136498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.749 [2024-09-29 16:45:30.136531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.749 qpair failed and we were unable to recover it. 00:37:29.749 [2024-09-29 16:45:30.136680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.749 [2024-09-29 16:45:30.136716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.750 qpair failed and we were unable to recover it. 00:37:29.750 [2024-09-29 16:45:30.136883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.750 [2024-09-29 16:45:30.136916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.750 qpair failed and we were unable to recover it. 00:37:29.750 [2024-09-29 16:45:30.137041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.750 [2024-09-29 16:45:30.137077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.750 qpair failed and we were unable to recover it. 00:37:29.750 [2024-09-29 16:45:30.137245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.750 [2024-09-29 16:45:30.137279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.750 qpair failed and we were unable to recover it. 
00:37:29.750 [2024-09-29 16:45:30.137446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.750 [2024-09-29 16:45:30.137479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.750 qpair failed and we were unable to recover it. 00:37:29.750 [2024-09-29 16:45:30.137603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.750 [2024-09-29 16:45:30.137638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.750 qpair failed and we were unable to recover it. 00:37:29.750 [2024-09-29 16:45:30.137826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.750 [2024-09-29 16:45:30.137860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.750 qpair failed and we were unable to recover it. 00:37:29.750 [2024-09-29 16:45:30.138022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.750 [2024-09-29 16:45:30.138058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.750 qpair failed and we were unable to recover it. 00:37:29.750 [2024-09-29 16:45:30.138240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.750 [2024-09-29 16:45:30.138277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.750 qpair failed and we were unable to recover it. 
00:37:29.750 [2024-09-29 16:45:30.138432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.750 [2024-09-29 16:45:30.138468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.750 qpair failed and we were unable to recover it.
00:37:29.750 [2024-09-29 16:45:30.138612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.750 [2024-09-29 16:45:30.138662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.750 qpair failed and we were unable to recover it.
00:37:29.750 [2024-09-29 16:45:30.138838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.750 [2024-09-29 16:45:30.138904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.750 qpair failed and we were unable to recover it.
00:37:29.750 [2024-09-29 16:45:30.139102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.750 [2024-09-29 16:45:30.139154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.750 qpair failed and we were unable to recover it.
00:37:29.750 [2024-09-29 16:45:30.139311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.750 [2024-09-29 16:45:30.139363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.750 qpair failed and we were unable to recover it.
00:37:29.750 [2024-09-29 16:45:30.139531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.750 [2024-09-29 16:45:30.139565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.750 qpair failed and we were unable to recover it.
00:37:29.750 [2024-09-29 16:45:30.139690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.750 [2024-09-29 16:45:30.139757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.750 qpair failed and we were unable to recover it.
00:37:29.750 [2024-09-29 16:45:30.139921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.750 [2024-09-29 16:45:30.139959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.750 qpair failed and we were unable to recover it.
00:37:29.750 [2024-09-29 16:45:30.140155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.750 [2024-09-29 16:45:30.140206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.750 qpair failed and we were unable to recover it.
00:37:29.750 [2024-09-29 16:45:30.140349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.750 [2024-09-29 16:45:30.140384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.750 qpair failed and we were unable to recover it.
00:37:29.750 [2024-09-29 16:45:30.140579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.750 [2024-09-29 16:45:30.140616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.750 qpair failed and we were unable to recover it.
00:37:29.750 [2024-09-29 16:45:30.140772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.750 [2024-09-29 16:45:30.140824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.750 qpair failed and we were unable to recover it.
00:37:29.750 [2024-09-29 16:45:30.140943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.750 [2024-09-29 16:45:30.140978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.750 qpair failed and we were unable to recover it.
00:37:29.750 [2024-09-29 16:45:30.141110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.750 [2024-09-29 16:45:30.141147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.750 qpair failed and we were unable to recover it.
00:37:29.750 [2024-09-29 16:45:30.141292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.750 [2024-09-29 16:45:30.141328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.750 qpair failed and we were unable to recover it.
00:37:29.750 [2024-09-29 16:45:30.141481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.750 [2024-09-29 16:45:30.141517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.750 qpair failed and we were unable to recover it.
00:37:29.750 [2024-09-29 16:45:30.141691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.750 [2024-09-29 16:45:30.141756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.750 qpair failed and we were unable to recover it.
00:37:29.750 [2024-09-29 16:45:30.141898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.750 [2024-09-29 16:45:30.141945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.750 qpair failed and we were unable to recover it.
00:37:29.750 [2024-09-29 16:45:30.142142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.750 [2024-09-29 16:45:30.142180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.750 qpair failed and we were unable to recover it.
00:37:29.750 [2024-09-29 16:45:30.142360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.750 [2024-09-29 16:45:30.142397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.750 qpair failed and we were unable to recover it.
00:37:29.750 [2024-09-29 16:45:30.142551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.750 [2024-09-29 16:45:30.142586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.750 qpair failed and we were unable to recover it.
00:37:29.750 [2024-09-29 16:45:30.142747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.750 [2024-09-29 16:45:30.142782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.750 qpair failed and we were unable to recover it.
00:37:29.750 [2024-09-29 16:45:30.142970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.750 [2024-09-29 16:45:30.143006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.750 qpair failed and we were unable to recover it.
00:37:29.750 [2024-09-29 16:45:30.143115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.750 [2024-09-29 16:45:30.143151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.750 qpair failed and we were unable to recover it.
00:37:29.751 [2024-09-29 16:45:30.143292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.751 [2024-09-29 16:45:30.143349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.751 qpair failed and we were unable to recover it.
00:37:29.751 [2024-09-29 16:45:30.143501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.751 [2024-09-29 16:45:30.143537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.751 qpair failed and we were unable to recover it.
00:37:29.751 [2024-09-29 16:45:30.143690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.751 [2024-09-29 16:45:30.143741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.751 qpair failed and we were unable to recover it.
00:37:29.751 [2024-09-29 16:45:30.143886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.751 [2024-09-29 16:45:30.143933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.751 qpair failed and we were unable to recover it.
00:37:29.751 [2024-09-29 16:45:30.144078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.751 [2024-09-29 16:45:30.144136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.751 qpair failed and we were unable to recover it.
00:37:29.751 [2024-09-29 16:45:30.144293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.751 [2024-09-29 16:45:30.144344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.751 qpair failed and we were unable to recover it.
00:37:29.751 [2024-09-29 16:45:30.144505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.751 [2024-09-29 16:45:30.144558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.751 qpair failed and we were unable to recover it.
00:37:29.751 [2024-09-29 16:45:30.144733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.751 [2024-09-29 16:45:30.144799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.751 qpair failed and we were unable to recover it.
00:37:29.751 [2024-09-29 16:45:30.144953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.751 [2024-09-29 16:45:30.144990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.751 qpair failed and we were unable to recover it.
00:37:29.751 [2024-09-29 16:45:30.145111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.751 [2024-09-29 16:45:30.145147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.751 qpair failed and we were unable to recover it.
00:37:29.751 [2024-09-29 16:45:30.145306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.751 [2024-09-29 16:45:30.145343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.751 qpair failed and we were unable to recover it.
00:37:29.751 [2024-09-29 16:45:30.145495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.751 [2024-09-29 16:45:30.145531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.751 qpair failed and we were unable to recover it.
00:37:29.751 [2024-09-29 16:45:30.145699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.751 [2024-09-29 16:45:30.145733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.751 qpair failed and we were unable to recover it.
00:37:29.751 [2024-09-29 16:45:30.145871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.751 [2024-09-29 16:45:30.145904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.751 qpair failed and we were unable to recover it.
00:37:29.751 [2024-09-29 16:45:30.146027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.751 [2024-09-29 16:45:30.146079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.751 qpair failed and we were unable to recover it.
00:37:29.751 [2024-09-29 16:45:30.146224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.751 [2024-09-29 16:45:30.146259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.751 qpair failed and we were unable to recover it.
00:37:29.751 [2024-09-29 16:45:30.146407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.751 [2024-09-29 16:45:30.146443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.751 qpair failed and we were unable to recover it.
00:37:29.751 [2024-09-29 16:45:30.146625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.751 [2024-09-29 16:45:30.146684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.751 qpair failed and we were unable to recover it.
00:37:29.751 [2024-09-29 16:45:30.146823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.751 [2024-09-29 16:45:30.146860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.751 qpair failed and we were unable to recover it.
00:37:29.751 [2024-09-29 16:45:30.147000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.751 [2024-09-29 16:45:30.147037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.751 qpair failed and we were unable to recover it.
00:37:29.751 [2024-09-29 16:45:30.147187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.751 [2024-09-29 16:45:30.147222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.751 qpair failed and we were unable to recover it.
00:37:29.751 [2024-09-29 16:45:30.147370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.751 [2024-09-29 16:45:30.147406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.751 qpair failed and we were unable to recover it.
00:37:29.751 [2024-09-29 16:45:30.147570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.751 [2024-09-29 16:45:30.147606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.751 qpair failed and we were unable to recover it.
00:37:29.751 [2024-09-29 16:45:30.147757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.751 [2024-09-29 16:45:30.147792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.751 qpair failed and we were unable to recover it.
00:37:29.751 [2024-09-29 16:45:30.147931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.751 [2024-09-29 16:45:30.147982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.751 qpair failed and we were unable to recover it.
00:37:29.751 [2024-09-29 16:45:30.148133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.751 [2024-09-29 16:45:30.148168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.751 qpair failed and we were unable to recover it.
00:37:29.751 [2024-09-29 16:45:30.148325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.751 [2024-09-29 16:45:30.148361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.751 qpair failed and we were unable to recover it.
00:37:29.751 [2024-09-29 16:45:30.148539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.751 [2024-09-29 16:45:30.148590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.751 qpair failed and we were unable to recover it.
00:37:29.751 [2024-09-29 16:45:30.148780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.751 [2024-09-29 16:45:30.148829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.751 qpair failed and we were unable to recover it.
00:37:29.751 [2024-09-29 16:45:30.149006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.751 [2024-09-29 16:45:30.149045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.751 qpair failed and we were unable to recover it.
00:37:29.751 [2024-09-29 16:45:30.149164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.751 [2024-09-29 16:45:30.149201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.751 qpair failed and we were unable to recover it.
00:37:29.751 [2024-09-29 16:45:30.149351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.751 [2024-09-29 16:45:30.149387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.751 qpair failed and we were unable to recover it.
00:37:29.751 [2024-09-29 16:45:30.149578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.751 [2024-09-29 16:45:30.149626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.751 qpair failed and we were unable to recover it.
00:37:29.751 [2024-09-29 16:45:30.149800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.751 [2024-09-29 16:45:30.149837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.751 qpair failed and we were unable to recover it.
00:37:29.751 [2024-09-29 16:45:30.149952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.751 [2024-09-29 16:45:30.149987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.751 qpair failed and we were unable to recover it.
00:37:29.751 [2024-09-29 16:45:30.150148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.751 [2024-09-29 16:45:30.150200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.751 qpair failed and we were unable to recover it.
00:37:29.751 [2024-09-29 16:45:30.150333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.751 [2024-09-29 16:45:30.150386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.751 qpair failed and we were unable to recover it.
00:37:29.752 [2024-09-29 16:45:30.150555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.752 [2024-09-29 16:45:30.150603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.752 qpair failed and we were unable to recover it.
00:37:29.752 [2024-09-29 16:45:30.150762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.752 [2024-09-29 16:45:30.150815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.752 qpair failed and we were unable to recover it.
00:37:29.752 [2024-09-29 16:45:30.150965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.752 [2024-09-29 16:45:30.151015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.752 qpair failed and we were unable to recover it.
00:37:29.752 [2024-09-29 16:45:30.151149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.752 [2024-09-29 16:45:30.151192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.752 qpair failed and we were unable to recover it.
00:37:29.752 [2024-09-29 16:45:30.151374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.752 [2024-09-29 16:45:30.151410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.752 qpair failed and we were unable to recover it.
00:37:29.752 [2024-09-29 16:45:30.151536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.752 [2024-09-29 16:45:30.151572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.752 qpair failed and we were unable to recover it.
00:37:29.752 [2024-09-29 16:45:30.151725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.752 [2024-09-29 16:45:30.151761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.752 qpair failed and we were unable to recover it.
00:37:29.752 [2024-09-29 16:45:30.151932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.752 [2024-09-29 16:45:30.151985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.752 qpair failed and we were unable to recover it.
00:37:29.752 [2024-09-29 16:45:30.152183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.752 [2024-09-29 16:45:30.152222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.752 qpair failed and we were unable to recover it.
00:37:29.752 [2024-09-29 16:45:30.152369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.752 [2024-09-29 16:45:30.152405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.752 qpair failed and we were unable to recover it.
00:37:29.752 [2024-09-29 16:45:30.152572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.752 [2024-09-29 16:45:30.152606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.752 qpair failed and we were unable to recover it.
00:37:29.752 [2024-09-29 16:45:30.152735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.752 [2024-09-29 16:45:30.152770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.752 qpair failed and we were unable to recover it.
00:37:29.752 [2024-09-29 16:45:30.152938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.752 [2024-09-29 16:45:30.152994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.752 qpair failed and we were unable to recover it.
00:37:29.752 [2024-09-29 16:45:30.153113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.752 [2024-09-29 16:45:30.153149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.752 qpair failed and we were unable to recover it.
00:37:29.752 [2024-09-29 16:45:30.153287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.752 [2024-09-29 16:45:30.153337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.752 qpair failed and we were unable to recover it.
00:37:29.752 [2024-09-29 16:45:30.153549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.752 [2024-09-29 16:45:30.153602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.752 qpair failed and we were unable to recover it.
00:37:29.752 [2024-09-29 16:45:30.153805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.752 [2024-09-29 16:45:30.153853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.752 qpair failed and we were unable to recover it.
00:37:29.752 [2024-09-29 16:45:30.153988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.752 [2024-09-29 16:45:30.154024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.752 qpair failed and we were unable to recover it.
00:37:29.752 [2024-09-29 16:45:30.154163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.752 [2024-09-29 16:45:30.154198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.752 qpair failed and we were unable to recover it.
00:37:29.752 [2024-09-29 16:45:30.154388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.752 [2024-09-29 16:45:30.154424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.752 qpair failed and we were unable to recover it.
00:37:29.752 [2024-09-29 16:45:30.154605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.752 [2024-09-29 16:45:30.154642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.752 qpair failed and we were unable to recover it.
00:37:29.752 [2024-09-29 16:45:30.154833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.752 [2024-09-29 16:45:30.154881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.752 qpair failed and we were unable to recover it.
00:37:29.752 [2024-09-29 16:45:30.155080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.752 [2024-09-29 16:45:30.155119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.752 qpair failed and we were unable to recover it.
00:37:29.752 [2024-09-29 16:45:30.155340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.752 [2024-09-29 16:45:30.155377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.752 qpair failed and we were unable to recover it.
00:37:29.752 [2024-09-29 16:45:30.155530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.752 [2024-09-29 16:45:30.155565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.752 qpair failed and we were unable to recover it.
00:37:29.752 [2024-09-29 16:45:30.155687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.752 [2024-09-29 16:45:30.155739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.752 qpair failed and we were unable to recover it.
00:37:29.752 [2024-09-29 16:45:30.155883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.752 [2024-09-29 16:45:30.155930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.752 qpair failed and we were unable to recover it.
00:37:29.752 [2024-09-29 16:45:30.156097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.752 [2024-09-29 16:45:30.156151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.752 qpair failed and we were unable to recover it.
00:37:29.752 [2024-09-29 16:45:30.156309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.752 [2024-09-29 16:45:30.156365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.752 qpair failed and we were unable to recover it.
00:37:29.752 [2024-09-29 16:45:30.156496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.752 [2024-09-29 16:45:30.156549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.752 qpair failed and we were unable to recover it.
00:37:29.752 [2024-09-29 16:45:30.156685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.752 [2024-09-29 16:45:30.156720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.752 qpair failed and we were unable to recover it.
00:37:29.752 [2024-09-29 16:45:30.156869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.752 [2024-09-29 16:45:30.156909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.752 qpair failed and we were unable to recover it.
00:37:29.752 [2024-09-29 16:45:30.157035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.752 [2024-09-29 16:45:30.157073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.752 qpair failed and we were unable to recover it.
00:37:29.753 [2024-09-29 16:45:30.157200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.753 [2024-09-29 16:45:30.157236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.753 qpair failed and we were unable to recover it.
00:37:29.753 [2024-09-29 16:45:30.157387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.753 [2024-09-29 16:45:30.157440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.753 qpair failed and we were unable to recover it.
00:37:29.753 [2024-09-29 16:45:30.157582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.753 [2024-09-29 16:45:30.157635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.753 qpair failed and we were unable to recover it.
00:37:29.753 [2024-09-29 16:45:30.157803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.753 [2024-09-29 16:45:30.157851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.753 qpair failed and we were unable to recover it.
00:37:29.753 [2024-09-29 16:45:30.158037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.753 [2024-09-29 16:45:30.158091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.753 qpair failed and we were unable to recover it.
00:37:29.753 [2024-09-29 16:45:30.158251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.753 [2024-09-29 16:45:30.158288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.753 qpair failed and we were unable to recover it.
00:37:29.753 [2024-09-29 16:45:30.158464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.753 [2024-09-29 16:45:30.158500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.753 qpair failed and we were unable to recover it.
00:37:29.753 [2024-09-29 16:45:30.158664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.753 [2024-09-29 16:45:30.158704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.753 qpair failed and we were unable to recover it.
00:37:29.753 [2024-09-29 16:45:30.158877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.753 [2024-09-29 16:45:30.158911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.753 qpair failed and we were unable to recover it.
00:37:29.753 [2024-09-29 16:45:30.159059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.753 [2024-09-29 16:45:30.159095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.753 qpair failed and we were unable to recover it.
00:37:29.753 [2024-09-29 16:45:30.159294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.753 [2024-09-29 16:45:30.159336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.753 qpair failed and we were unable to recover it.
00:37:29.753 [2024-09-29 16:45:30.159495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.753 [2024-09-29 16:45:30.159532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.753 qpair failed and we were unable to recover it.
00:37:29.753 [2024-09-29 16:45:30.159702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.753 [2024-09-29 16:45:30.159768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.753 qpair failed and we were unable to recover it.
00:37:29.753 [2024-09-29 16:45:30.159916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.753 [2024-09-29 16:45:30.159964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.753 qpair failed and we were unable to recover it.
00:37:29.753 [2024-09-29 16:45:30.160138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.753 [2024-09-29 16:45:30.160192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.753 qpair failed and we were unable to recover it.
00:37:29.753 [2024-09-29 16:45:30.160349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.753 [2024-09-29 16:45:30.160401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.753 qpair failed and we were unable to recover it.
00:37:29.753 [2024-09-29 16:45:30.160521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.753 [2024-09-29 16:45:30.160554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.753 qpair failed and we were unable to recover it.
00:37:29.753 [2024-09-29 16:45:30.160696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.753 [2024-09-29 16:45:30.160730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.753 qpair failed and we were unable to recover it.
00:37:29.753 [2024-09-29 16:45:30.160873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.753 [2024-09-29 16:45:30.160907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.753 qpair failed and we were unable to recover it.
00:37:29.753 [2024-09-29 16:45:30.161047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.753 [2024-09-29 16:45:30.161082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.753 qpair failed and we were unable to recover it.
00:37:29.753 [2024-09-29 16:45:30.161254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.753 [2024-09-29 16:45:30.161305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.753 qpair failed and we were unable to recover it.
00:37:29.753 [2024-09-29 16:45:30.161452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.753 [2024-09-29 16:45:30.161488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.753 qpair failed and we were unable to recover it. 00:37:29.753 [2024-09-29 16:45:30.161658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.753 [2024-09-29 16:45:30.161720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.753 qpair failed and we were unable to recover it. 00:37:29.753 [2024-09-29 16:45:30.161846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.753 [2024-09-29 16:45:30.161882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.753 qpair failed and we were unable to recover it. 00:37:29.753 [2024-09-29 16:45:30.162054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.753 [2024-09-29 16:45:30.162090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.753 qpair failed and we were unable to recover it. 00:37:29.753 [2024-09-29 16:45:30.162244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.753 [2024-09-29 16:45:30.162280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.753 qpair failed and we were unable to recover it. 
00:37:29.753 [2024-09-29 16:45:30.162396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.753 [2024-09-29 16:45:30.162431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.753 qpair failed and we were unable to recover it. 00:37:29.753 [2024-09-29 16:45:30.162598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.753 [2024-09-29 16:45:30.162633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.753 qpair failed and we were unable to recover it. 00:37:29.753 [2024-09-29 16:45:30.162795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.753 [2024-09-29 16:45:30.162846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.753 qpair failed and we were unable to recover it. 00:37:29.753 [2024-09-29 16:45:30.163021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.753 [2024-09-29 16:45:30.163075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.753 qpair failed and we were unable to recover it. 00:37:29.753 [2024-09-29 16:45:30.163201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.753 [2024-09-29 16:45:30.163237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.753 qpair failed and we were unable to recover it. 
00:37:29.754 [2024-09-29 16:45:30.163352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.754 [2024-09-29 16:45:30.163387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.754 qpair failed and we were unable to recover it. 00:37:29.754 [2024-09-29 16:45:30.163533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.754 [2024-09-29 16:45:30.163583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.754 qpair failed and we were unable to recover it. 00:37:29.754 [2024-09-29 16:45:30.163758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.754 [2024-09-29 16:45:30.163794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.754 qpair failed and we were unable to recover it. 00:37:29.754 [2024-09-29 16:45:30.163921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.754 [2024-09-29 16:45:30.163957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.754 qpair failed and we were unable to recover it. 00:37:29.754 [2024-09-29 16:45:30.164111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.754 [2024-09-29 16:45:30.164147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.754 qpair failed and we were unable to recover it. 
00:37:29.754 [2024-09-29 16:45:30.164261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.754 [2024-09-29 16:45:30.164297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.754 qpair failed and we were unable to recover it. 00:37:29.754 [2024-09-29 16:45:30.164502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.754 [2024-09-29 16:45:30.164537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.754 qpair failed and we were unable to recover it. 00:37:29.754 [2024-09-29 16:45:30.164702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.754 [2024-09-29 16:45:30.164755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.754 qpair failed and we were unable to recover it. 00:37:29.754 [2024-09-29 16:45:30.164897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.754 [2024-09-29 16:45:30.164934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.754 qpair failed and we were unable to recover it. 00:37:29.754 [2024-09-29 16:45:30.165098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.754 [2024-09-29 16:45:30.165134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.754 qpair failed and we were unable to recover it. 
00:37:29.754 [2024-09-29 16:45:30.165315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.754 [2024-09-29 16:45:30.165350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.754 qpair failed and we were unable to recover it. 00:37:29.754 [2024-09-29 16:45:30.165503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.754 [2024-09-29 16:45:30.165538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.754 qpair failed and we were unable to recover it. 00:37:29.754 [2024-09-29 16:45:30.165693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.754 [2024-09-29 16:45:30.165727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.754 qpair failed and we were unable to recover it. 00:37:29.754 [2024-09-29 16:45:30.165879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.754 [2024-09-29 16:45:30.165914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.754 qpair failed and we were unable to recover it. 00:37:29.754 [2024-09-29 16:45:30.166052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.754 [2024-09-29 16:45:30.166086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.754 qpair failed and we were unable to recover it. 
00:37:29.754 [2024-09-29 16:45:30.166244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.754 [2024-09-29 16:45:30.166291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.754 qpair failed and we were unable to recover it. 00:37:29.754 [2024-09-29 16:45:30.166443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.754 [2024-09-29 16:45:30.166476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.754 qpair failed and we were unable to recover it. 00:37:29.754 [2024-09-29 16:45:30.166645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.754 [2024-09-29 16:45:30.166701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.754 qpair failed and we were unable to recover it. 00:37:29.754 [2024-09-29 16:45:30.166844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.754 [2024-09-29 16:45:30.166881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.754 qpair failed and we were unable to recover it. 00:37:29.754 [2024-09-29 16:45:30.167017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.754 [2024-09-29 16:45:30.167062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.754 qpair failed and we were unable to recover it. 
00:37:29.754 [2024-09-29 16:45:30.167234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.754 [2024-09-29 16:45:30.167292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.754 qpair failed and we were unable to recover it. 00:37:29.754 [2024-09-29 16:45:30.167517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.754 [2024-09-29 16:45:30.167577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.754 qpair failed and we were unable to recover it. 00:37:29.754 [2024-09-29 16:45:30.167704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.754 [2024-09-29 16:45:30.167742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.754 qpair failed and we were unable to recover it. 00:37:29.754 [2024-09-29 16:45:30.167922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.754 [2024-09-29 16:45:30.167975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.754 qpair failed and we were unable to recover it. 00:37:29.754 [2024-09-29 16:45:30.168213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.754 [2024-09-29 16:45:30.168265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.754 qpair failed and we were unable to recover it. 
00:37:29.754 [2024-09-29 16:45:30.168482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.754 [2024-09-29 16:45:30.168550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.754 qpair failed and we were unable to recover it. 00:37:29.754 [2024-09-29 16:45:30.168729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.754 [2024-09-29 16:45:30.168763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.754 qpair failed and we were unable to recover it. 00:37:29.754 [2024-09-29 16:45:30.168896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.754 [2024-09-29 16:45:30.168933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.754 qpair failed and we were unable to recover it. 00:37:29.754 [2024-09-29 16:45:30.169151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.754 [2024-09-29 16:45:30.169211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.755 qpair failed and we were unable to recover it. 00:37:29.755 [2024-09-29 16:45:30.169461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.755 [2024-09-29 16:45:30.169558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.755 qpair failed and we were unable to recover it. 
00:37:29.755 [2024-09-29 16:45:30.169751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.755 [2024-09-29 16:45:30.169785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.755 qpair failed and we were unable to recover it. 00:37:29.755 [2024-09-29 16:45:30.169938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.755 [2024-09-29 16:45:30.169995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.755 qpair failed and we were unable to recover it. 00:37:29.755 [2024-09-29 16:45:30.170152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.755 [2024-09-29 16:45:30.170214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.755 qpair failed and we were unable to recover it. 00:37:29.755 [2024-09-29 16:45:30.170388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.755 [2024-09-29 16:45:30.170439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.755 qpair failed and we were unable to recover it. 00:37:29.755 [2024-09-29 16:45:30.170604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.755 [2024-09-29 16:45:30.170638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.755 qpair failed and we were unable to recover it. 
00:37:29.755 [2024-09-29 16:45:30.170801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.755 [2024-09-29 16:45:30.170849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.755 qpair failed and we were unable to recover it. 00:37:29.755 [2024-09-29 16:45:30.170997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.755 [2024-09-29 16:45:30.171033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.755 qpair failed and we were unable to recover it. 00:37:29.755 [2024-09-29 16:45:30.171180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.755 [2024-09-29 16:45:30.171215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.755 qpair failed and we were unable to recover it. 00:37:29.755 [2024-09-29 16:45:30.171351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.755 [2024-09-29 16:45:30.171385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.755 qpair failed and we were unable to recover it. 00:37:29.755 [2024-09-29 16:45:30.171543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.755 [2024-09-29 16:45:30.171591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.755 qpair failed and we were unable to recover it. 
00:37:29.755 [2024-09-29 16:45:30.171741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.755 [2024-09-29 16:45:30.171778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.755 qpair failed and we were unable to recover it. 00:37:29.755 [2024-09-29 16:45:30.171916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.755 [2024-09-29 16:45:30.171976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.755 qpair failed and we were unable to recover it. 00:37:29.755 [2024-09-29 16:45:30.172159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.755 [2024-09-29 16:45:30.172212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.755 qpair failed and we were unable to recover it. 00:37:29.755 [2024-09-29 16:45:30.172402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.755 [2024-09-29 16:45:30.172437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.755 qpair failed and we were unable to recover it. 00:37:29.755 [2024-09-29 16:45:30.172586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.755 [2024-09-29 16:45:30.172620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.755 qpair failed and we were unable to recover it. 
00:37:29.755 [2024-09-29 16:45:30.172785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.755 [2024-09-29 16:45:30.172836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.755 qpair failed and we were unable to recover it. 00:37:29.755 [2024-09-29 16:45:30.172961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.755 [2024-09-29 16:45:30.172997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.755 qpair failed and we were unable to recover it. 00:37:29.755 [2024-09-29 16:45:30.173170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.755 [2024-09-29 16:45:30.173204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.755 qpair failed and we were unable to recover it. 00:37:29.755 [2024-09-29 16:45:30.173344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.755 [2024-09-29 16:45:30.173381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.755 qpair failed and we were unable to recover it. 00:37:29.755 [2024-09-29 16:45:30.173508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.755 [2024-09-29 16:45:30.173545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.755 qpair failed and we were unable to recover it. 
00:37:29.755 [2024-09-29 16:45:30.173720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.755 [2024-09-29 16:45:30.173754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.755 qpair failed and we were unable to recover it. 00:37:29.755 [2024-09-29 16:45:30.173897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.755 [2024-09-29 16:45:30.173934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.755 qpair failed and we were unable to recover it. 00:37:29.755 [2024-09-29 16:45:30.174068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.755 [2024-09-29 16:45:30.174106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.755 qpair failed and we were unable to recover it. 00:37:29.755 [2024-09-29 16:45:30.174265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.755 [2024-09-29 16:45:30.174303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.755 qpair failed and we were unable to recover it. 00:37:29.755 [2024-09-29 16:45:30.174441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.755 [2024-09-29 16:45:30.174476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.755 qpair failed and we were unable to recover it. 
00:37:29.755 [2024-09-29 16:45:30.174621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.755 [2024-09-29 16:45:30.174656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.755 qpair failed and we were unable to recover it. 00:37:29.755 [2024-09-29 16:45:30.174817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.755 [2024-09-29 16:45:30.174870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.755 qpair failed and we were unable to recover it. 00:37:29.755 [2024-09-29 16:45:30.175036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.755 [2024-09-29 16:45:30.175073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.755 qpair failed and we were unable to recover it. 00:37:29.755 [2024-09-29 16:45:30.175318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.755 [2024-09-29 16:45:30.175377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.755 qpair failed and we were unable to recover it. 00:37:29.755 [2024-09-29 16:45:30.175496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.755 [2024-09-29 16:45:30.175538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.755 qpair failed and we were unable to recover it. 
00:37:29.755 [2024-09-29 16:45:30.175652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.755 [2024-09-29 16:45:30.175697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.755 qpair failed and we were unable to recover it. 00:37:29.755 [2024-09-29 16:45:30.175833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.755 [2024-09-29 16:45:30.175866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.755 qpair failed and we were unable to recover it. 00:37:29.755 [2024-09-29 16:45:30.176008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.756 [2024-09-29 16:45:30.176063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.756 qpair failed and we were unable to recover it. 00:37:29.756 [2024-09-29 16:45:30.176223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.756 [2024-09-29 16:45:30.176275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.756 qpair failed and we were unable to recover it. 00:37:29.756 [2024-09-29 16:45:30.176457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.756 [2024-09-29 16:45:30.176518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.756 qpair failed and we were unable to recover it. 
00:37:29.756 [2024-09-29 16:45:30.176662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.756 [2024-09-29 16:45:30.176727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.756 qpair failed and we were unable to recover it.
00:37:29.756 [... the posix_sock_create / nvme_tcp_qpair_connect_sock error pair above repeats 114 more times between 16:45:30.176 and 16:45:30.200, cycling over tqpair handles 0x6150001f2f00, 0x6150001ffe80, 0x615000210000, and 0x61500021ff00, all targeting addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:37:29.759 [2024-09-29 16:45:30.201003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.759 [2024-09-29 16:45:30.201055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.759 qpair failed and we were unable to recover it. 00:37:29.759 [2024-09-29 16:45:30.201294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.759 [2024-09-29 16:45:30.201344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.759 qpair failed and we were unable to recover it. 00:37:29.759 [2024-09-29 16:45:30.201580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.759 [2024-09-29 16:45:30.201616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.759 qpair failed and we were unable to recover it. 00:37:29.759 [2024-09-29 16:45:30.201759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.759 [2024-09-29 16:45:30.201793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.759 qpair failed and we were unable to recover it. 00:37:29.759 [2024-09-29 16:45:30.201905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.759 [2024-09-29 16:45:30.201938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.759 qpair failed and we were unable to recover it. 
00:37:29.759 [2024-09-29 16:45:30.202092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.759 [2024-09-29 16:45:30.202157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.759 qpair failed and we were unable to recover it. 00:37:29.759 [2024-09-29 16:45:30.202360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.759 [2024-09-29 16:45:30.202414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.759 qpair failed and we were unable to recover it. 00:37:29.759 [2024-09-29 16:45:30.202542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.759 [2024-09-29 16:45:30.202590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.759 qpair failed and we were unable to recover it. 00:37:29.759 [2024-09-29 16:45:30.202713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.759 [2024-09-29 16:45:30.202749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.759 qpair failed and we were unable to recover it. 00:37:29.759 [2024-09-29 16:45:30.202876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.759 [2024-09-29 16:45:30.202924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.759 qpair failed and we were unable to recover it. 
00:37:29.759 [2024-09-29 16:45:30.203108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.759 [2024-09-29 16:45:30.203148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.759 qpair failed and we were unable to recover it. 00:37:29.759 [2024-09-29 16:45:30.203426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.759 [2024-09-29 16:45:30.203465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.759 qpair failed and we were unable to recover it. 00:37:29.759 [2024-09-29 16:45:30.203618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.759 [2024-09-29 16:45:30.203655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.759 qpair failed and we were unable to recover it. 00:37:29.759 [2024-09-29 16:45:30.203810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.759 [2024-09-29 16:45:30.203844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.759 qpair failed and we were unable to recover it. 00:37:29.759 [2024-09-29 16:45:30.203977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.759 [2024-09-29 16:45:30.204017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.759 qpair failed and we were unable to recover it. 
00:37:29.759 [2024-09-29 16:45:30.204141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.759 [2024-09-29 16:45:30.204180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.759 qpair failed and we were unable to recover it. 00:37:29.759 [2024-09-29 16:45:30.204398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.759 [2024-09-29 16:45:30.204462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.759 qpair failed and we were unable to recover it. 00:37:29.759 [2024-09-29 16:45:30.204610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.759 [2024-09-29 16:45:30.204648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.759 qpair failed and we were unable to recover it. 00:37:29.759 [2024-09-29 16:45:30.204813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.759 [2024-09-29 16:45:30.204860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.759 qpair failed and we were unable to recover it. 00:37:29.759 [2024-09-29 16:45:30.205028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.759 [2024-09-29 16:45:30.205083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.759 qpair failed and we were unable to recover it. 
00:37:29.759 [2024-09-29 16:45:30.205189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.759 [2024-09-29 16:45:30.205222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.759 qpair failed and we were unable to recover it. 00:37:29.759 [2024-09-29 16:45:30.205357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.759 [2024-09-29 16:45:30.205408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.759 qpair failed and we were unable to recover it. 00:37:29.759 [2024-09-29 16:45:30.205599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.759 [2024-09-29 16:45:30.205633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.759 qpair failed and we were unable to recover it. 00:37:29.760 [2024-09-29 16:45:30.205824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.760 [2024-09-29 16:45:30.205877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.760 qpair failed and we were unable to recover it. 00:37:29.760 [2024-09-29 16:45:30.206043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.760 [2024-09-29 16:45:30.206083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.760 qpair failed and we were unable to recover it. 
00:37:29.760 [2024-09-29 16:45:30.206324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.760 [2024-09-29 16:45:30.206381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.760 qpair failed and we were unable to recover it. 00:37:29.760 [2024-09-29 16:45:30.206574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.760 [2024-09-29 16:45:30.206609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.760 qpair failed and we were unable to recover it. 00:37:29.760 [2024-09-29 16:45:30.206753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.760 [2024-09-29 16:45:30.206789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.760 qpair failed and we were unable to recover it. 00:37:29.760 [2024-09-29 16:45:30.206959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.760 [2024-09-29 16:45:30.207012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.760 qpair failed and we were unable to recover it. 00:37:29.760 [2024-09-29 16:45:30.207143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.760 [2024-09-29 16:45:30.207181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.760 qpair failed and we were unable to recover it. 
00:37:29.760 [2024-09-29 16:45:30.207359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.760 [2024-09-29 16:45:30.207408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.760 qpair failed and we were unable to recover it. 00:37:29.760 [2024-09-29 16:45:30.207519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.760 [2024-09-29 16:45:30.207554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.760 qpair failed and we were unable to recover it. 00:37:29.760 [2024-09-29 16:45:30.207718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.760 [2024-09-29 16:45:30.207752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.760 qpair failed and we were unable to recover it. 00:37:29.760 [2024-09-29 16:45:30.207865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.760 [2024-09-29 16:45:30.207899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.760 qpair failed and we were unable to recover it. 00:37:29.760 [2024-09-29 16:45:30.208090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.760 [2024-09-29 16:45:30.208139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.760 qpair failed and we were unable to recover it. 
00:37:29.760 [2024-09-29 16:45:30.208295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.760 [2024-09-29 16:45:30.208330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.760 qpair failed and we were unable to recover it. 00:37:29.760 [2024-09-29 16:45:30.208469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.760 [2024-09-29 16:45:30.208503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.760 qpair failed and we were unable to recover it. 00:37:29.760 [2024-09-29 16:45:30.208619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.760 [2024-09-29 16:45:30.208652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.760 qpair failed and we were unable to recover it. 00:37:29.760 [2024-09-29 16:45:30.208835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.760 [2024-09-29 16:45:30.208873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.760 qpair failed and we were unable to recover it. 00:37:29.760 [2024-09-29 16:45:30.209028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.760 [2024-09-29 16:45:30.209071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.760 qpair failed and we were unable to recover it. 
00:37:29.760 [2024-09-29 16:45:30.209204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.760 [2024-09-29 16:45:30.209243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.760 qpair failed and we were unable to recover it. 00:37:29.760 [2024-09-29 16:45:30.209398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.760 [2024-09-29 16:45:30.209437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.760 qpair failed and we were unable to recover it. 00:37:29.760 [2024-09-29 16:45:30.209592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.760 [2024-09-29 16:45:30.209630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.760 qpair failed and we were unable to recover it. 00:37:29.760 [2024-09-29 16:45:30.209807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.760 [2024-09-29 16:45:30.209843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.760 qpair failed and we were unable to recover it. 00:37:29.760 [2024-09-29 16:45:30.210000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.760 [2024-09-29 16:45:30.210053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.760 qpair failed and we were unable to recover it. 
00:37:29.760 [2024-09-29 16:45:30.210194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.760 [2024-09-29 16:45:30.210234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.760 qpair failed and we were unable to recover it. 00:37:29.760 [2024-09-29 16:45:30.210387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.760 [2024-09-29 16:45:30.210425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.760 qpair failed and we were unable to recover it. 00:37:29.760 [2024-09-29 16:45:30.210583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.760 [2024-09-29 16:45:30.210617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.760 qpair failed and we were unable to recover it. 00:37:29.760 [2024-09-29 16:45:30.210792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.760 [2024-09-29 16:45:30.210839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.760 qpair failed and we were unable to recover it. 00:37:29.760 [2024-09-29 16:45:30.210983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.760 [2024-09-29 16:45:30.211018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.760 qpair failed and we were unable to recover it. 
00:37:29.760 [2024-09-29 16:45:30.211283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.760 [2024-09-29 16:45:30.211322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.760 qpair failed and we were unable to recover it. 00:37:29.760 [2024-09-29 16:45:30.211451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.760 [2024-09-29 16:45:30.211490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.760 qpair failed and we were unable to recover it. 00:37:29.760 [2024-09-29 16:45:30.211638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.760 [2024-09-29 16:45:30.211683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.760 qpair failed and we were unable to recover it. 00:37:29.760 [2024-09-29 16:45:30.211823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.760 [2024-09-29 16:45:30.211859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.760 qpair failed and we were unable to recover it. 00:37:29.760 [2024-09-29 16:45:30.212070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.760 [2024-09-29 16:45:30.212110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.760 qpair failed and we were unable to recover it. 
00:37:29.760 [2024-09-29 16:45:30.212241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.760 [2024-09-29 16:45:30.212279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.760 qpair failed and we were unable to recover it. 00:37:29.760 [2024-09-29 16:45:30.212460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.760 [2024-09-29 16:45:30.212498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.760 qpair failed and we were unable to recover it. 00:37:29.760 [2024-09-29 16:45:30.212638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.760 [2024-09-29 16:45:30.212683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.760 qpair failed and we were unable to recover it. 00:37:29.760 [2024-09-29 16:45:30.212830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.760 [2024-09-29 16:45:30.212863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.760 qpair failed and we were unable to recover it. 00:37:29.760 [2024-09-29 16:45:30.212988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.760 [2024-09-29 16:45:30.213025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.760 qpair failed and we were unable to recover it. 
00:37:29.760 [2024-09-29 16:45:30.213173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.760 [2024-09-29 16:45:30.213210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.760 qpair failed and we were unable to recover it. 00:37:29.761 [2024-09-29 16:45:30.213367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.761 [2024-09-29 16:45:30.213404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.761 qpair failed and we were unable to recover it. 00:37:29.761 [2024-09-29 16:45:30.213561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.761 [2024-09-29 16:45:30.213600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.761 qpair failed and we were unable to recover it. 00:37:29.761 [2024-09-29 16:45:30.213770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.761 [2024-09-29 16:45:30.213805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.761 qpair failed and we were unable to recover it. 00:37:29.761 [2024-09-29 16:45:30.213960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.761 [2024-09-29 16:45:30.214013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.761 qpair failed and we were unable to recover it. 
00:37:29.761 [2024-09-29 16:45:30.214200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.761 [2024-09-29 16:45:30.214259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.761 qpair failed and we were unable to recover it. 00:37:29.761 [2024-09-29 16:45:30.214461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.761 [2024-09-29 16:45:30.214523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.761 qpair failed and we were unable to recover it. 00:37:29.761 [2024-09-29 16:45:30.214648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.761 [2024-09-29 16:45:30.214688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.761 qpair failed and we were unable to recover it. 00:37:29.761 [2024-09-29 16:45:30.214810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.761 [2024-09-29 16:45:30.214845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.761 qpair failed and we were unable to recover it. 00:37:29.761 [2024-09-29 16:45:30.215000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.761 [2024-09-29 16:45:30.215034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.761 qpair failed and we were unable to recover it. 
00:37:29.761 [2024-09-29 16:45:30.215199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.761 [2024-09-29 16:45:30.215237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.761 qpair failed and we were unable to recover it. 00:37:29.761 [2024-09-29 16:45:30.215393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.761 [2024-09-29 16:45:30.215431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.761 qpair failed and we were unable to recover it. 00:37:29.761 [2024-09-29 16:45:30.215547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.761 [2024-09-29 16:45:30.215585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.761 qpair failed and we were unable to recover it. 00:37:29.761 [2024-09-29 16:45:30.215725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.761 [2024-09-29 16:45:30.215761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.761 qpair failed and we were unable to recover it. 00:37:29.761 [2024-09-29 16:45:30.215901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.761 [2024-09-29 16:45:30.215934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.761 qpair failed and we were unable to recover it. 
00:37:29.761 [2024-09-29 16:45:30.216086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.761 [2024-09-29 16:45:30.216138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.761 qpair failed and we were unable to recover it. 00:37:29.761 [2024-09-29 16:45:30.216304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.761 [2024-09-29 16:45:30.216360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.761 qpair failed and we were unable to recover it. 00:37:29.761 [2024-09-29 16:45:30.216542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.761 [2024-09-29 16:45:30.216595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.761 qpair failed and we were unable to recover it. 00:37:29.761 [2024-09-29 16:45:30.216787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.761 [2024-09-29 16:45:30.216823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.761 qpair failed and we were unable to recover it. 00:37:29.761 [2024-09-29 16:45:30.216976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.761 [2024-09-29 16:45:30.217020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.761 qpair failed and we were unable to recover it. 
00:37:29.761 [2024-09-29 16:45:30.217161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.761 [2024-09-29 16:45:30.217215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.761 qpair failed and we were unable to recover it. 00:37:29.761 [2024-09-29 16:45:30.217445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.761 [2024-09-29 16:45:30.217504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.761 qpair failed and we were unable to recover it. 00:37:29.761 [2024-09-29 16:45:30.217633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.761 [2024-09-29 16:45:30.217670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.761 qpair failed and we were unable to recover it. 00:37:29.761 [2024-09-29 16:45:30.217839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.761 [2024-09-29 16:45:30.217873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.761 qpair failed and we were unable to recover it. 00:37:29.761 [2024-09-29 16:45:30.218055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.761 [2024-09-29 16:45:30.218108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:29.761 qpair failed and we were unable to recover it. 
00:37:29.761 [2024-09-29 16:45:30.218243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.761 [2024-09-29 16:45:30.218282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.761 qpair failed and we were unable to recover it.
00:37:29.761 [2024-09-29 16:45:30.218414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.761 [2024-09-29 16:45:30.218451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.761 qpair failed and we were unable to recover it.
00:37:29.761 [2024-09-29 16:45:30.218603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.761 [2024-09-29 16:45:30.218640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.761 qpair failed and we were unable to recover it.
00:37:29.761 [2024-09-29 16:45:30.218799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.761 [2024-09-29 16:45:30.218847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.761 qpair failed and we were unable to recover it.
00:37:29.761 [2024-09-29 16:45:30.218985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.761 [2024-09-29 16:45:30.219032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.761 qpair failed and we were unable to recover it.
00:37:29.761 [2024-09-29 16:45:30.219204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.761 [2024-09-29 16:45:30.219259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.761 qpair failed and we were unable to recover it.
00:37:29.761 [2024-09-29 16:45:30.219537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.761 [2024-09-29 16:45:30.219594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.761 qpair failed and we were unable to recover it.
00:37:29.761 [2024-09-29 16:45:30.219764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.761 [2024-09-29 16:45:30.219799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.761 qpair failed and we were unable to recover it.
00:37:29.761 [2024-09-29 16:45:30.220001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.761 [2024-09-29 16:45:30.220054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.761 qpair failed and we were unable to recover it.
00:37:29.761 [2024-09-29 16:45:30.220318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.761 [2024-09-29 16:45:30.220377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.761 qpair failed and we were unable to recover it.
00:37:29.761 [2024-09-29 16:45:30.220590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.761 [2024-09-29 16:45:30.220629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.761 qpair failed and we were unable to recover it.
00:37:29.761 [2024-09-29 16:45:30.220777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.761 [2024-09-29 16:45:30.220811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.761 qpair failed and we were unable to recover it.
00:37:29.761 [2024-09-29 16:45:30.220951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.762 [2024-09-29 16:45:30.220984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.762 qpair failed and we were unable to recover it.
00:37:29.762 [2024-09-29 16:45:30.221128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.762 [2024-09-29 16:45:30.221162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.762 qpair failed and we were unable to recover it.
00:37:29.762 [2024-09-29 16:45:30.221375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.762 [2024-09-29 16:45:30.221408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.762 qpair failed and we were unable to recover it.
00:37:29.762 [2024-09-29 16:45:30.221582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.762 [2024-09-29 16:45:30.221619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.762 qpair failed and we were unable to recover it.
00:37:29.762 [2024-09-29 16:45:30.221772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.762 [2024-09-29 16:45:30.221805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.762 qpair failed and we were unable to recover it.
00:37:29.762 [2024-09-29 16:45:30.221911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.762 [2024-09-29 16:45:30.221944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.762 qpair failed and we were unable to recover it.
00:37:29.762 [2024-09-29 16:45:30.222059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.762 [2024-09-29 16:45:30.222093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.762 qpair failed and we were unable to recover it.
00:37:29.762 [2024-09-29 16:45:30.222275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.762 [2024-09-29 16:45:30.222311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.762 qpair failed and we were unable to recover it.
00:37:29.762 [2024-09-29 16:45:30.222475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.762 [2024-09-29 16:45:30.222513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.762 qpair failed and we were unable to recover it.
00:37:29.762 [2024-09-29 16:45:30.222685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.762 [2024-09-29 16:45:30.222737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.762 qpair failed and we were unable to recover it.
00:37:29.762 [2024-09-29 16:45:30.222874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.762 [2024-09-29 16:45:30.222907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.762 qpair failed and we were unable to recover it.
00:37:29.762 [2024-09-29 16:45:30.223051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.762 [2024-09-29 16:45:30.223102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.762 qpair failed and we were unable to recover it.
00:37:29.762 [2024-09-29 16:45:30.223257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.762 [2024-09-29 16:45:30.223294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.762 qpair failed and we were unable to recover it.
00:37:29.762 [2024-09-29 16:45:30.223454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.762 [2024-09-29 16:45:30.223490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.762 qpair failed and we were unable to recover it.
00:37:29.762 [2024-09-29 16:45:30.223650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.762 [2024-09-29 16:45:30.223692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.762 qpair failed and we were unable to recover it.
00:37:29.762 [2024-09-29 16:45:30.223825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.762 [2024-09-29 16:45:30.223872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.762 qpair failed and we were unable to recover it.
00:37:29.762 [2024-09-29 16:45:30.224069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.762 [2024-09-29 16:45:30.224122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.762 qpair failed and we were unable to recover it.
00:37:29.762 [2024-09-29 16:45:30.224318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.762 [2024-09-29 16:45:30.224359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.762 qpair failed and we were unable to recover it.
00:37:29.762 [2024-09-29 16:45:30.224502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.762 [2024-09-29 16:45:30.224540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.762 qpair failed and we were unable to recover it.
00:37:29.762 [2024-09-29 16:45:30.224736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.762 [2024-09-29 16:45:30.224769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.762 qpair failed and we were unable to recover it.
00:37:29.762 [2024-09-29 16:45:30.224892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.762 [2024-09-29 16:45:30.224925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.762 qpair failed and we were unable to recover it.
00:37:29.762 [2024-09-29 16:45:30.225085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.762 [2024-09-29 16:45:30.225121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.762 qpair failed and we were unable to recover it.
00:37:29.762 [2024-09-29 16:45:30.225267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.762 [2024-09-29 16:45:30.225304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.762 qpair failed and we were unable to recover it.
00:37:29.762 [2024-09-29 16:45:30.225470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.762 [2024-09-29 16:45:30.225507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.762 qpair failed and we were unable to recover it.
00:37:29.762 [2024-09-29 16:45:30.225659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.762 [2024-09-29 16:45:30.225722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.762 qpair failed and we were unable to recover it.
00:37:29.762 [2024-09-29 16:45:30.225856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.762 [2024-09-29 16:45:30.225903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.762 qpair failed and we were unable to recover it.
00:37:29.762 [2024-09-29 16:45:30.226070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.762 [2024-09-29 16:45:30.226109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.762 qpair failed and we were unable to recover it.
00:37:29.762 [2024-09-29 16:45:30.226235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.762 [2024-09-29 16:45:30.226272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.762 qpair failed and we were unable to recover it.
00:37:29.762 [2024-09-29 16:45:30.226403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.762 [2024-09-29 16:45:30.226440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.762 qpair failed and we were unable to recover it.
00:37:29.762 [2024-09-29 16:45:30.226605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.762 [2024-09-29 16:45:30.226638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.762 qpair failed and we were unable to recover it.
00:37:29.762 [2024-09-29 16:45:30.226783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.762 [2024-09-29 16:45:30.226818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.762 qpair failed and we were unable to recover it.
00:37:29.762 [2024-09-29 16:45:30.226979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.762 [2024-09-29 16:45:30.227016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.762 qpair failed and we were unable to recover it.
00:37:29.762 [2024-09-29 16:45:30.227226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.762 [2024-09-29 16:45:30.227263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.762 qpair failed and we were unable to recover it.
00:37:29.762 [2024-09-29 16:45:30.227384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.762 [2024-09-29 16:45:30.227421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.762 qpair failed and we were unable to recover it.
00:37:29.762 [2024-09-29 16:45:30.227567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.762 [2024-09-29 16:45:30.227602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.762 qpair failed and we were unable to recover it.
00:37:29.762 [2024-09-29 16:45:30.227806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.762 [2024-09-29 16:45:30.227866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.762 qpair failed and we were unable to recover it.
00:37:29.762 [2024-09-29 16:45:30.227998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.762 [2024-09-29 16:45:30.228035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.762 qpair failed and we were unable to recover it.
00:37:29.762 [2024-09-29 16:45:30.228193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.762 [2024-09-29 16:45:30.228246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.762 qpair failed and we were unable to recover it.
00:37:29.762 [2024-09-29 16:45:30.228451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.762 [2024-09-29 16:45:30.228508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.762 qpair failed and we were unable to recover it.
00:37:29.762 [2024-09-29 16:45:30.228682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.763 [2024-09-29 16:45:30.228735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.763 qpair failed and we were unable to recover it.
00:37:29.763 [2024-09-29 16:45:30.228878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.763 [2024-09-29 16:45:30.228912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.763 qpair failed and we were unable to recover it.
00:37:29.763 [2024-09-29 16:45:30.229045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.763 [2024-09-29 16:45:30.229084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.763 qpair failed and we were unable to recover it.
00:37:29.763 [2024-09-29 16:45:30.229362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.763 [2024-09-29 16:45:30.229419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.763 qpair failed and we were unable to recover it.
00:37:29.763 [2024-09-29 16:45:30.229564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.763 [2024-09-29 16:45:30.229599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.763 qpair failed and we were unable to recover it.
00:37:29.763 [2024-09-29 16:45:30.229750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.763 [2024-09-29 16:45:30.229791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.763 qpair failed and we were unable to recover it.
00:37:29.763 [2024-09-29 16:45:30.229966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.763 [2024-09-29 16:45:30.230019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.763 qpair failed and we were unable to recover it.
00:37:29.763 [2024-09-29 16:45:30.230184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.763 [2024-09-29 16:45:30.230256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.763 qpair failed and we were unable to recover it.
00:37:29.763 [2024-09-29 16:45:30.230483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.763 [2024-09-29 16:45:30.230540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.763 qpair failed and we were unable to recover it.
00:37:29.763 [2024-09-29 16:45:30.230724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.763 [2024-09-29 16:45:30.230760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.763 qpair failed and we were unable to recover it.
00:37:29.763 [2024-09-29 16:45:30.230904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.763 [2024-09-29 16:45:30.230963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.763 qpair failed and we were unable to recover it.
00:37:29.763 [2024-09-29 16:45:30.231174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.763 [2024-09-29 16:45:30.231238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.763 qpair failed and we were unable to recover it.
00:37:29.763 [2024-09-29 16:45:30.231427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.763 [2024-09-29 16:45:30.231499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.763 qpair failed and we were unable to recover it.
00:37:29.763 [2024-09-29 16:45:30.231637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.763 [2024-09-29 16:45:30.231677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.763 qpair failed and we were unable to recover it.
00:37:29.763 [2024-09-29 16:45:30.231814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.763 [2024-09-29 16:45:30.231848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.763 qpair failed and we were unable to recover it.
00:37:29.763 [2024-09-29 16:45:30.231978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.763 [2024-09-29 16:45:30.232030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.763 qpair failed and we were unable to recover it.
00:37:29.763 [2024-09-29 16:45:30.232145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.763 [2024-09-29 16:45:30.232179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.763 qpair failed and we were unable to recover it.
00:37:29.763 [2024-09-29 16:45:30.232294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.763 [2024-09-29 16:45:30.232328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.763 qpair failed and we were unable to recover it.
00:37:29.763 [2024-09-29 16:45:30.232470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.763 [2024-09-29 16:45:30.232505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.763 qpair failed and we were unable to recover it.
00:37:29.763 [2024-09-29 16:45:30.232662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.763 [2024-09-29 16:45:30.232717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.763 qpair failed and we were unable to recover it.
00:37:29.763 [2024-09-29 16:45:30.232863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.763 [2024-09-29 16:45:30.232911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.763 qpair failed and we were unable to recover it.
00:37:29.763 [2024-09-29 16:45:30.233056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.763 [2024-09-29 16:45:30.233091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.763 qpair failed and we were unable to recover it.
00:37:29.763 [2024-09-29 16:45:30.233260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.763 [2024-09-29 16:45:30.233294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.763 qpair failed and we were unable to recover it.
00:37:29.763 [2024-09-29 16:45:30.233427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.763 [2024-09-29 16:45:30.233465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.763 qpair failed and we were unable to recover it.
00:37:29.763 [2024-09-29 16:45:30.233611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.763 [2024-09-29 16:45:30.233663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.763 qpair failed and we were unable to recover it.
00:37:29.763 [2024-09-29 16:45:30.233811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.763 [2024-09-29 16:45:30.233846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:29.763 qpair failed and we were unable to recover it.
00:37:29.763 [2024-09-29 16:45:30.234000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.763 [2024-09-29 16:45:30.234053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.763 qpair failed and we were unable to recover it.
00:37:29.763 [2024-09-29 16:45:30.234299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.763 [2024-09-29 16:45:30.234358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.763 qpair failed and we were unable to recover it.
00:37:29.763 [2024-09-29 16:45:30.234598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.763 [2024-09-29 16:45:30.234656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.763 qpair failed and we were unable to recover it.
00:37:29.763 [2024-09-29 16:45:30.234826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.763 [2024-09-29 16:45:30.234860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.763 qpair failed and we were unable to recover it.
00:37:29.763 [2024-09-29 16:45:30.235029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.763 [2024-09-29 16:45:30.235077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.763 qpair failed and we were unable to recover it.
00:37:29.763 [2024-09-29 16:45:30.235330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.763 [2024-09-29 16:45:30.235390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.763 qpair failed and we were unable to recover it.
00:37:29.763 [2024-09-29 16:45:30.235536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.763 [2024-09-29 16:45:30.235570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.763 qpair failed and we were unable to recover it.
00:37:29.763 [2024-09-29 16:45:30.235692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.763 [2024-09-29 16:45:30.235726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:29.763 qpair failed and we were unable to recover it.
00:37:29.763 [2024-09-29 16:45:30.235882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.763 [2024-09-29 16:45:30.235930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.763 qpair failed and we were unable to recover it.
00:37:29.763 [2024-09-29 16:45:30.236119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.763 [2024-09-29 16:45:30.236175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.763 qpair failed and we were unable to recover it.
00:37:29.763 [2024-09-29 16:45:30.236428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.763 [2024-09-29 16:45:30.236486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.763 qpair failed and we were unable to recover it.
00:37:29.763 [2024-09-29 16:45:30.236654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.763 [2024-09-29 16:45:30.236700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.763 qpair failed and we were unable to recover it.
00:37:29.763 [2024-09-29 16:45:30.236866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.763 [2024-09-29 16:45:30.236901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.763 qpair failed and we were unable to recover it.
00:37:29.763 [2024-09-29 16:45:30.237061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.763 [2024-09-29 16:45:30.237099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.763 qpair failed and we were unable to recover it.
00:37:29.763 [2024-09-29 16:45:30.237295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.763 [2024-09-29 16:45:30.237349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.763 qpair failed and we were unable to recover it.
00:37:29.763 [2024-09-29 16:45:30.237526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.763 [2024-09-29 16:45:30.237573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:29.763 qpair failed and we were unable to recover it.
00:37:29.763 [2024-09-29 16:45:30.237752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.763 [2024-09-29 16:45:30.237799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.763 qpair failed and we were unable to recover it.
00:37:29.763 [2024-09-29 16:45:30.237952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:29.763 [2024-09-29 16:45:30.237988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:29.763 qpair failed and we were unable to recover it.
00:37:29.763 [2024-09-29 16:45:30.238140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.764 [2024-09-29 16:45:30.238178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.764 qpair failed and we were unable to recover it. 00:37:29.764 [2024-09-29 16:45:30.238317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.764 [2024-09-29 16:45:30.238355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.764 qpair failed and we were unable to recover it. 00:37:29.764 [2024-09-29 16:45:30.238509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.764 [2024-09-29 16:45:30.238547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.764 qpair failed and we were unable to recover it. 00:37:29.764 [2024-09-29 16:45:30.238724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.764 [2024-09-29 16:45:30.238772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:29.764 qpair failed and we were unable to recover it. 00:37:29.764 [2024-09-29 16:45:30.238897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.764 [2024-09-29 16:45:30.238934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.764 qpair failed and we were unable to recover it. 
00:37:29.764 [2024-09-29 16:45:30.239076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.764 [2024-09-29 16:45:30.239129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.764 qpair failed and we were unable to recover it. 00:37:29.764 [2024-09-29 16:45:30.239315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.764 [2024-09-29 16:45:30.239358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.764 qpair failed and we were unable to recover it. 00:37:29.764 [2024-09-29 16:45:30.239512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.764 [2024-09-29 16:45:30.239567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.764 qpair failed and we were unable to recover it. 00:37:29.764 [2024-09-29 16:45:30.239757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.764 [2024-09-29 16:45:30.239806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.764 qpair failed and we were unable to recover it. 00:37:29.764 [2024-09-29 16:45:30.239965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.764 [2024-09-29 16:45:30.240003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.764 qpair failed and we were unable to recover it. 
00:37:29.764 [2024-09-29 16:45:30.240156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.764 [2024-09-29 16:45:30.240195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.764 qpair failed and we were unable to recover it. 00:37:29.764 [2024-09-29 16:45:30.240383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.764 [2024-09-29 16:45:30.240421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.764 qpair failed and we were unable to recover it. 00:37:29.764 [2024-09-29 16:45:30.240555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.764 [2024-09-29 16:45:30.240589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.764 qpair failed and we were unable to recover it. 00:37:29.764 [2024-09-29 16:45:30.240715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.764 [2024-09-29 16:45:30.240750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.764 qpair failed and we were unable to recover it. 00:37:29.764 [2024-09-29 16:45:30.240895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.764 [2024-09-29 16:45:30.240929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.764 qpair failed and we were unable to recover it. 
00:37:29.764 [2024-09-29 16:45:30.241067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.764 [2024-09-29 16:45:30.241101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.764 qpair failed and we were unable to recover it. 00:37:29.764 [2024-09-29 16:45:30.241259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.764 [2024-09-29 16:45:30.241296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.764 qpair failed and we were unable to recover it. 00:37:29.764 [2024-09-29 16:45:30.241448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.764 [2024-09-29 16:45:30.241485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:29.764 qpair failed and we were unable to recover it. 00:37:29.764 [2024-09-29 16:45:30.241617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:29.764 [2024-09-29 16:45:30.241682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:29.764 qpair failed and we were unable to recover it. 00:37:29.764 [2024-09-29 16:45:30.241864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.075 [2024-09-29 16:45:30.241911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.075 qpair failed and we were unable to recover it. 
00:37:30.075 [2024-09-29 16:45:30.242071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.075 [2024-09-29 16:45:30.242127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.075 qpair failed and we were unable to recover it. 00:37:30.075 [2024-09-29 16:45:30.242273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.075 [2024-09-29 16:45:30.242330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.075 qpair failed and we were unable to recover it. 00:37:30.075 [2024-09-29 16:45:30.242474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.075 [2024-09-29 16:45:30.242511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.075 qpair failed and we were unable to recover it. 00:37:30.075 [2024-09-29 16:45:30.242630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.075 [2024-09-29 16:45:30.242665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.075 qpair failed and we were unable to recover it. 00:37:30.075 [2024-09-29 16:45:30.242790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.075 [2024-09-29 16:45:30.242824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.075 qpair failed and we were unable to recover it. 
00:37:30.075 [2024-09-29 16:45:30.242978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.075 [2024-09-29 16:45:30.243025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.075 qpair failed and we were unable to recover it. 00:37:30.075 [2024-09-29 16:45:30.243201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.075 [2024-09-29 16:45:30.243249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.075 qpair failed and we were unable to recover it. 00:37:30.075 [2024-09-29 16:45:30.243386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.075 [2024-09-29 16:45:30.243422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.075 qpair failed and we were unable to recover it. 00:37:30.075 [2024-09-29 16:45:30.243540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.075 [2024-09-29 16:45:30.243576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.075 qpair failed and we were unable to recover it. 00:37:30.075 [2024-09-29 16:45:30.243697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.075 [2024-09-29 16:45:30.243732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.075 qpair failed and we were unable to recover it. 
00:37:30.075 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3329855 Killed "${NVMF_APP[@]}" "$@" 00:37:30.075 [2024-09-29 16:45:30.243850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.075 [2024-09-29 16:45:30.243884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.075 qpair failed and we were unable to recover it. 00:37:30.075 [2024-09-29 16:45:30.244040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.076 [2024-09-29 16:45:30.244077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.076 qpair failed and we were unable to recover it. 00:37:30.076 16:45:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:37:30.076 [2024-09-29 16:45:30.244241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.076 [2024-09-29 16:45:30.244279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.076 qpair failed and we were unable to recover it. 00:37:30.076 16:45:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:37:30.076 [2024-09-29 16:45:30.244403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.076 [2024-09-29 16:45:30.244440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.076 qpair failed and we were unable to recover it. 
00:37:30.076 16:45:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:37:30.076 [2024-09-29 16:45:30.244590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.076 [2024-09-29 16:45:30.244624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.076 qpair failed and we were unable to recover it. 00:37:30.076 16:45:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:30.076 [2024-09-29 16:45:30.244835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.076 16:45:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:30.076 [2024-09-29 16:45:30.244902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.076 qpair failed and we were unable to recover it. 00:37:30.076 [2024-09-29 16:45:30.245047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.076 [2024-09-29 16:45:30.245103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.076 qpair failed and we were unable to recover it. 00:37:30.076 [2024-09-29 16:45:30.245234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.076 [2024-09-29 16:45:30.245273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.076 qpair failed and we were unable to recover it. 
00:37:30.076 [2024-09-29 16:45:30.245438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.076 [2024-09-29 16:45:30.245472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.076 qpair failed and we were unable to recover it. 00:37:30.076 [2024-09-29 16:45:30.245613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.076 [2024-09-29 16:45:30.245648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.076 qpair failed and we were unable to recover it. 00:37:30.076 [2024-09-29 16:45:30.245789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.076 [2024-09-29 16:45:30.245847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.076 qpair failed and we were unable to recover it. 00:37:30.076 [2024-09-29 16:45:30.245965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.076 [2024-09-29 16:45:30.246000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.076 qpair failed and we were unable to recover it. 00:37:30.076 [2024-09-29 16:45:30.246114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.076 [2024-09-29 16:45:30.246150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.076 qpair failed and we were unable to recover it. 
00:37:30.076 [2024-09-29 16:45:30.246296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.076 [2024-09-29 16:45:30.246332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.076 qpair failed and we were unable to recover it. 00:37:30.076 [2024-09-29 16:45:30.246458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.076 [2024-09-29 16:45:30.246493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.076 qpair failed and we were unable to recover it. 00:37:30.076 [2024-09-29 16:45:30.246639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.076 [2024-09-29 16:45:30.246687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.076 qpair failed and we were unable to recover it. 00:37:30.076 [2024-09-29 16:45:30.246799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.076 [2024-09-29 16:45:30.246834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.076 qpair failed and we were unable to recover it. 00:37:30.076 [2024-09-29 16:45:30.246944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.076 [2024-09-29 16:45:30.246979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.076 qpair failed and we were unable to recover it. 
00:37:30.076 [2024-09-29 16:45:30.247117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.076 [2024-09-29 16:45:30.247153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.076 qpair failed and we were unable to recover it. 00:37:30.076 [2024-09-29 16:45:30.247330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.076 [2024-09-29 16:45:30.247365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.076 qpair failed and we were unable to recover it. 00:37:30.076 [2024-09-29 16:45:30.247507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.076 [2024-09-29 16:45:30.247555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.076 qpair failed and we were unable to recover it. 00:37:30.076 [2024-09-29 16:45:30.247691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.076 [2024-09-29 16:45:30.247741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.076 qpair failed and we were unable to recover it. 00:37:30.076 [2024-09-29 16:45:30.247886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.076 [2024-09-29 16:45:30.247924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.076 qpair failed and we were unable to recover it. 
00:37:30.076 [2024-09-29 16:45:30.248063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.076 [2024-09-29 16:45:30.248100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.076 qpair failed and we were unable to recover it. 00:37:30.076 [2024-09-29 16:45:30.248285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.076 [2024-09-29 16:45:30.248323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.076 qpair failed and we were unable to recover it. 00:37:30.076 [2024-09-29 16:45:30.248452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.076 [2024-09-29 16:45:30.248491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.076 qpair failed and we were unable to recover it. 00:37:30.076 [2024-09-29 16:45:30.248621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.076 [2024-09-29 16:45:30.248655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.076 qpair failed and we were unable to recover it. 00:37:30.076 [2024-09-29 16:45:30.248782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.076 [2024-09-29 16:45:30.248817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.076 16:45:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # nvmfpid=3330422 00:37:30.076 qpair failed and we were unable to recover it. 
00:37:30.076 16:45:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:37:30.076 16:45:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # waitforlisten 3330422 00:37:30.076 [2024-09-29 16:45:30.248991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.076 [2024-09-29 16:45:30.249040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.076 qpair failed and we were unable to recover it. 00:37:30.077 16:45:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3330422 ']' 00:37:30.077 [2024-09-29 16:45:30.249224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.077 [2024-09-29 16:45:30.249277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.077 qpair failed and we were unable to recover it. 00:37:30.077 16:45:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:30.077 [2024-09-29 16:45:30.249440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.077 16:45:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:30.077 [2024-09-29 16:45:30.249478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.077 qpair failed and we were unable to recover it. 
00:37:30.077 16:45:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:30.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:30.077 [2024-09-29 16:45:30.249612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.077 [2024-09-29 16:45:30.249651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.077 qpair failed and we were unable to recover it. 00:37:30.077 16:45:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:30.077 [2024-09-29 16:45:30.249795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.077 [2024-09-29 16:45:30.249829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.077 qpair failed and we were unable to recover it. 00:37:30.077 16:45:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:30.077 [2024-09-29 16:45:30.249974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.077 [2024-09-29 16:45:30.250007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.077 qpair failed and we were unable to recover it. 00:37:30.077 [2024-09-29 16:45:30.250123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.077 [2024-09-29 16:45:30.250157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.077 qpair failed and we were unable to recover it. 
00:37:30.077 [2024-09-29 16:45:30.250337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.077 [2024-09-29 16:45:30.250375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.077 qpair failed and we were unable to recover it. 00:37:30.077 [2024-09-29 16:45:30.250540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.077 [2024-09-29 16:45:30.250577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.077 qpair failed and we were unable to recover it. 00:37:30.077 [2024-09-29 16:45:30.250745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.077 [2024-09-29 16:45:30.250780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.077 qpair failed and we were unable to recover it. 00:37:30.077 [2024-09-29 16:45:30.250900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.077 [2024-09-29 16:45:30.250936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.077 qpair failed and we were unable to recover it. 00:37:30.077 [2024-09-29 16:45:30.251105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.077 [2024-09-29 16:45:30.251142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.077 qpair failed and we were unable to recover it. 
00:37:30.077 [2024-09-29 16:45:30.251326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.077 [2024-09-29 16:45:30.251364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.077 qpair failed and we were unable to recover it. 00:37:30.077 [2024-09-29 16:45:30.251549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.077 [2024-09-29 16:45:30.251589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.077 qpair failed and we were unable to recover it. 00:37:30.077 [2024-09-29 16:45:30.251765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.077 [2024-09-29 16:45:30.251800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.077 qpair failed and we were unable to recover it. 00:37:30.077 [2024-09-29 16:45:30.251926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.077 [2024-09-29 16:45:30.251980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.077 qpair failed and we were unable to recover it. 00:37:30.077 [2024-09-29 16:45:30.252121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.077 [2024-09-29 16:45:30.252155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.077 qpair failed and we were unable to recover it. 
00:37:30.077 [2024-09-29 16:45:30.252301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.077 [2024-09-29 16:45:30.252354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.077 qpair failed and we were unable to recover it. 00:37:30.077 [2024-09-29 16:45:30.252510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.077 [2024-09-29 16:45:30.252547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.077 qpair failed and we were unable to recover it. 00:37:30.077 [2024-09-29 16:45:30.252696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.077 [2024-09-29 16:45:30.252731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.077 qpair failed and we were unable to recover it. 00:37:30.077 [2024-09-29 16:45:30.252854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.077 [2024-09-29 16:45:30.252887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.077 qpair failed and we were unable to recover it. 00:37:30.077 [2024-09-29 16:45:30.253042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.077 [2024-09-29 16:45:30.253076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.077 qpair failed and we were unable to recover it. 
00:37:30.077 [2024-09-29 16:45:30.253232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.077 [2024-09-29 16:45:30.253285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.077 qpair failed and we were unable to recover it. 00:37:30.077 [2024-09-29 16:45:30.253437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.077 [2024-09-29 16:45:30.253475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.077 qpair failed and we were unable to recover it. 00:37:30.077 [2024-09-29 16:45:30.253632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.077 [2024-09-29 16:45:30.253669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.077 qpair failed and we were unable to recover it. 00:37:30.077 [2024-09-29 16:45:30.253819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.077 [2024-09-29 16:45:30.253852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.077 qpair failed and we were unable to recover it. 00:37:30.077 [2024-09-29 16:45:30.253968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.077 [2024-09-29 16:45:30.254001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.077 qpair failed and we were unable to recover it. 
00:37:30.077 [2024-09-29 16:45:30.254148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.077 [2024-09-29 16:45:30.254187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.077 qpair failed and we were unable to recover it. 00:37:30.077 [2024-09-29 16:45:30.254389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.077 [2024-09-29 16:45:30.254426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.078 qpair failed and we were unable to recover it. 00:37:30.078 [2024-09-29 16:45:30.254553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.078 [2024-09-29 16:45:30.254590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.078 qpair failed and we were unable to recover it. 00:37:30.078 [2024-09-29 16:45:30.254746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.078 [2024-09-29 16:45:30.254781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.078 qpair failed and we were unable to recover it. 00:37:30.078 [2024-09-29 16:45:30.254925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.078 [2024-09-29 16:45:30.254959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.078 qpair failed and we were unable to recover it. 
00:37:30.078 [2024-09-29 16:45:30.255068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.078 [2024-09-29 16:45:30.255120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.078 qpair failed and we were unable to recover it. 00:37:30.078 [2024-09-29 16:45:30.255263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.078 [2024-09-29 16:45:30.255301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.078 qpair failed and we were unable to recover it. 00:37:30.078 [2024-09-29 16:45:30.255508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.078 [2024-09-29 16:45:30.255550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.078 qpair failed and we were unable to recover it. 00:37:30.078 [2024-09-29 16:45:30.255702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.078 [2024-09-29 16:45:30.255756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.078 qpair failed and we were unable to recover it. 00:37:30.078 [2024-09-29 16:45:30.255875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.078 [2024-09-29 16:45:30.255909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.078 qpair failed and we were unable to recover it. 
00:37:30.078 [2024-09-29 16:45:30.256029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.078 [2024-09-29 16:45:30.256081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.078 qpair failed and we were unable to recover it. 00:37:30.078 [2024-09-29 16:45:30.256236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.078 [2024-09-29 16:45:30.256273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.078 qpair failed and we were unable to recover it. 00:37:30.078 [2024-09-29 16:45:30.256408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.078 [2024-09-29 16:45:30.256447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.078 qpair failed and we were unable to recover it. 00:37:30.078 [2024-09-29 16:45:30.256578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.078 [2024-09-29 16:45:30.256611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.078 qpair failed and we were unable to recover it. 00:37:30.078 [2024-09-29 16:45:30.256725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.078 [2024-09-29 16:45:30.256760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.078 qpair failed and we were unable to recover it. 
00:37:30.078 [2024-09-29 16:45:30.256902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.078 [2024-09-29 16:45:30.256946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.078 qpair failed and we were unable to recover it. 00:37:30.078 [2024-09-29 16:45:30.257130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.078 [2024-09-29 16:45:30.257173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.078 qpair failed and we were unable to recover it. 00:37:30.078 [2024-09-29 16:45:30.257337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.078 [2024-09-29 16:45:30.257379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.078 qpair failed and we were unable to recover it. 00:37:30.078 [2024-09-29 16:45:30.257504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.078 [2024-09-29 16:45:30.257542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.078 qpair failed and we were unable to recover it. 00:37:30.078 [2024-09-29 16:45:30.257697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.078 [2024-09-29 16:45:30.257745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.078 qpair failed and we were unable to recover it. 
00:37:30.078 [2024-09-29 16:45:30.257869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.078 [2024-09-29 16:45:30.257905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.078 qpair failed and we were unable to recover it. 00:37:30.078 [2024-09-29 16:45:30.258047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.078 [2024-09-29 16:45:30.258101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.078 qpair failed and we were unable to recover it. 00:37:30.078 [2024-09-29 16:45:30.258269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.078 [2024-09-29 16:45:30.258319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.078 qpair failed and we were unable to recover it. 00:37:30.078 [2024-09-29 16:45:30.258433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.078 [2024-09-29 16:45:30.258467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.078 qpair failed and we were unable to recover it. 00:37:30.078 [2024-09-29 16:45:30.258611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.078 [2024-09-29 16:45:30.258646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.078 qpair failed and we were unable to recover it. 
00:37:30.078 [2024-09-29 16:45:30.258788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.078 [2024-09-29 16:45:30.258844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.078 qpair failed and we were unable to recover it. 00:37:30.078 [2024-09-29 16:45:30.259049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.078 [2024-09-29 16:45:30.259096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.078 qpair failed and we were unable to recover it. 00:37:30.078 [2024-09-29 16:45:30.259273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.078 [2024-09-29 16:45:30.259308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.078 qpair failed and we were unable to recover it. 00:37:30.078 [2024-09-29 16:45:30.259437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.078 [2024-09-29 16:45:30.259471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.078 qpair failed and we were unable to recover it. 00:37:30.078 [2024-09-29 16:45:30.259619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.078 [2024-09-29 16:45:30.259653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.078 qpair failed and we were unable to recover it. 
00:37:30.078 [2024-09-29 16:45:30.259796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.078 [2024-09-29 16:45:30.259844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.078 qpair failed and we were unable to recover it. 00:37:30.078 [2024-09-29 16:45:30.259988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.078 [2024-09-29 16:45:30.260027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.078 qpair failed and we were unable to recover it. 00:37:30.079 [2024-09-29 16:45:30.260157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.079 [2024-09-29 16:45:30.260195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.079 qpair failed and we were unable to recover it. 00:37:30.079 [2024-09-29 16:45:30.260428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.079 [2024-09-29 16:45:30.260486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.079 qpair failed and we were unable to recover it. 00:37:30.079 [2024-09-29 16:45:30.260633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.079 [2024-09-29 16:45:30.260667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.079 qpair failed and we were unable to recover it. 
00:37:30.079 [2024-09-29 16:45:30.260808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.079 [2024-09-29 16:45:30.260841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.079 qpair failed and we were unable to recover it. 00:37:30.079 [2024-09-29 16:45:30.261025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.079 [2024-09-29 16:45:30.261062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.079 qpair failed and we were unable to recover it. 00:37:30.079 [2024-09-29 16:45:30.261201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.079 [2024-09-29 16:45:30.261254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.079 qpair failed and we were unable to recover it. 00:37:30.079 [2024-09-29 16:45:30.261404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.079 [2024-09-29 16:45:30.261457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.079 qpair failed and we were unable to recover it. 00:37:30.079 [2024-09-29 16:45:30.261600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.079 [2024-09-29 16:45:30.261639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.079 qpair failed and we were unable to recover it. 
00:37:30.079 [2024-09-29 16:45:30.261811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.079 [2024-09-29 16:45:30.261860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.079 qpair failed and we were unable to recover it. 00:37:30.079 [2024-09-29 16:45:30.262017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.079 [2024-09-29 16:45:30.262053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.079 qpair failed and we were unable to recover it. 00:37:30.079 [2024-09-29 16:45:30.262244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.079 [2024-09-29 16:45:30.262296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.079 qpair failed and we were unable to recover it. 00:37:30.079 [2024-09-29 16:45:30.262426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.079 [2024-09-29 16:45:30.262479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.079 qpair failed and we were unable to recover it. 00:37:30.079 [2024-09-29 16:45:30.262605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.079 [2024-09-29 16:45:30.262640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.079 qpair failed and we were unable to recover it. 
00:37:30.079 [2024-09-29 16:45:30.262770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.079 [2024-09-29 16:45:30.262805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.079 qpair failed and we were unable to recover it. 00:37:30.079 [2024-09-29 16:45:30.262948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.079 [2024-09-29 16:45:30.262983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.079 qpair failed and we were unable to recover it. 00:37:30.079 [2024-09-29 16:45:30.263123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.079 [2024-09-29 16:45:30.263163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.079 qpair failed and we were unable to recover it. 00:37:30.079 [2024-09-29 16:45:30.263317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.079 [2024-09-29 16:45:30.263353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.079 qpair failed and we were unable to recover it. 00:37:30.079 [2024-09-29 16:45:30.263524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.079 [2024-09-29 16:45:30.263558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.079 qpair failed and we were unable to recover it. 
00:37:30.079 [2024-09-29 16:45:30.263709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.079 [2024-09-29 16:45:30.263744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.079 qpair failed and we were unable to recover it. 00:37:30.079 [2024-09-29 16:45:30.263857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.079 [2024-09-29 16:45:30.263891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.079 qpair failed and we were unable to recover it. 00:37:30.079 [2024-09-29 16:45:30.263999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.079 [2024-09-29 16:45:30.264051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.079 qpair failed and we were unable to recover it. 00:37:30.079 [2024-09-29 16:45:30.264179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.079 [2024-09-29 16:45:30.264216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.079 qpair failed and we were unable to recover it. 00:37:30.079 [2024-09-29 16:45:30.264343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.079 [2024-09-29 16:45:30.264381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.079 qpair failed and we were unable to recover it. 
00:37:30.079 [2024-09-29 16:45:30.264560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.079 [2024-09-29 16:45:30.264613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.079 qpair failed and we were unable to recover it. 00:37:30.079 [2024-09-29 16:45:30.264778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.079 [2024-09-29 16:45:30.264813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.079 qpair failed and we were unable to recover it. 00:37:30.079 [2024-09-29 16:45:30.264974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.079 [2024-09-29 16:45:30.265010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.079 qpair failed and we were unable to recover it. 00:37:30.079 [2024-09-29 16:45:30.265163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.079 [2024-09-29 16:45:30.265200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.079 qpair failed and we were unable to recover it. 00:37:30.079 [2024-09-29 16:45:30.265352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.079 [2024-09-29 16:45:30.265388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.079 qpair failed and we were unable to recover it. 
00:37:30.079 [2024-09-29 16:45:30.265555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.079 [2024-09-29 16:45:30.265591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.079 qpair failed and we were unable to recover it. 00:37:30.079 [2024-09-29 16:45:30.265756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.079 [2024-09-29 16:45:30.265794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.079 qpair failed and we were unable to recover it. 00:37:30.079 [2024-09-29 16:45:30.265950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.079 [2024-09-29 16:45:30.265987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.079 qpair failed and we were unable to recover it. 00:37:30.079 [2024-09-29 16:45:30.266171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.079 [2024-09-29 16:45:30.266207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.079 qpair failed and we were unable to recover it. 00:37:30.079 [2024-09-29 16:45:30.266325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.079 [2024-09-29 16:45:30.266361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.079 qpair failed and we were unable to recover it. 
00:37:30.079 [2024-09-29 16:45:30.266536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.079 [2024-09-29 16:45:30.266572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.079 qpair failed and we were unable to recover it. 00:37:30.079 [2024-09-29 16:45:30.266718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.080 [2024-09-29 16:45:30.266754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.080 qpair failed and we were unable to recover it. 00:37:30.080 [2024-09-29 16:45:30.266874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.080 [2024-09-29 16:45:30.266909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.080 qpair failed and we were unable to recover it. 00:37:30.080 [2024-09-29 16:45:30.267099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.080 [2024-09-29 16:45:30.267151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.080 qpair failed and we were unable to recover it. 00:37:30.080 [2024-09-29 16:45:30.267289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.080 [2024-09-29 16:45:30.267339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.080 qpair failed and we were unable to recover it. 
00:37:30.080 [2024-09-29 16:45:30.267505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.080 [2024-09-29 16:45:30.267539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.080 qpair failed and we were unable to recover it. 00:37:30.080 [2024-09-29 16:45:30.267687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.080 [2024-09-29 16:45:30.267723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.080 qpair failed and we were unable to recover it. 00:37:30.080 [2024-09-29 16:45:30.267848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.080 [2024-09-29 16:45:30.267898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.080 qpair failed and we were unable to recover it. 00:37:30.080 [2024-09-29 16:45:30.268013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.080 [2024-09-29 16:45:30.268047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.080 qpair failed and we were unable to recover it. 00:37:30.080 [2024-09-29 16:45:30.268194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.080 [2024-09-29 16:45:30.268229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.080 qpair failed and we were unable to recover it. 
00:37:30.080 [2024-09-29 16:45:30.268351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.080 [2024-09-29 16:45:30.268385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.080 qpair failed and we were unable to recover it. 00:37:30.080 [2024-09-29 16:45:30.268530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.080 [2024-09-29 16:45:30.268565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.080 qpair failed and we were unable to recover it. 00:37:30.080 [2024-09-29 16:45:30.268685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.080 [2024-09-29 16:45:30.268732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.080 qpair failed and we were unable to recover it. 00:37:30.080 [2024-09-29 16:45:30.268856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.080 [2024-09-29 16:45:30.268891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.080 qpair failed and we were unable to recover it. 00:37:30.080 [2024-09-29 16:45:30.269062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.080 [2024-09-29 16:45:30.269095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.080 qpair failed and we were unable to recover it. 
00:37:30.080 [2024-09-29 16:45:30.269214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.080 [2024-09-29 16:45:30.269250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.080 qpair failed and we were unable to recover it. 00:37:30.080 [2024-09-29 16:45:30.269444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.080 [2024-09-29 16:45:30.269497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.080 qpair failed and we were unable to recover it. 00:37:30.080 [2024-09-29 16:45:30.269626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.080 [2024-09-29 16:45:30.269661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.080 qpair failed and we were unable to recover it. 00:37:30.080 [2024-09-29 16:45:30.269847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.080 [2024-09-29 16:45:30.269898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.080 qpair failed and we were unable to recover it. 00:37:30.080 [2024-09-29 16:45:30.270085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.080 [2024-09-29 16:45:30.270150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.080 qpair failed and we were unable to recover it. 
00:37:30.080 [2024-09-29 16:45:30.270286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.080 [2024-09-29 16:45:30.270337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.080 qpair failed and we were unable to recover it. 00:37:30.080 [2024-09-29 16:45:30.270478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.080 [2024-09-29 16:45:30.270512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.080 qpair failed and we were unable to recover it. 00:37:30.080 [2024-09-29 16:45:30.270654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.080 [2024-09-29 16:45:30.270702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.080 qpair failed and we were unable to recover it. 00:37:30.080 [2024-09-29 16:45:30.270869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.080 [2024-09-29 16:45:30.270916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.080 qpair failed and we were unable to recover it. 00:37:30.080 [2024-09-29 16:45:30.271120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.080 [2024-09-29 16:45:30.271156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.080 qpair failed and we were unable to recover it. 
00:37:30.080 [2024-09-29 16:45:30.271411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.080 [2024-09-29 16:45:30.271446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.080 qpair failed and we were unable to recover it.
00:37:30.080 [2024-09-29 16:45:30.272514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.080 [2024-09-29 16:45:30.272562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.080 qpair failed and we were unable to recover it.
00:37:30.081 [2024-09-29 16:45:30.273830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.081 [2024-09-29 16:45:30.273878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.081 qpair failed and we were unable to recover it.
00:37:30.083 [2024-09-29 16:45:30.292397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.083 [2024-09-29 16:45:30.292431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.083 qpair failed and we were unable to recover it. 00:37:30.083 [2024-09-29 16:45:30.292582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.083 [2024-09-29 16:45:30.292615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.083 qpair failed and we were unable to recover it. 00:37:30.083 [2024-09-29 16:45:30.292753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.083 [2024-09-29 16:45:30.292802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.083 qpair failed and we were unable to recover it. 00:37:30.083 [2024-09-29 16:45:30.292963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.083 [2024-09-29 16:45:30.293011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.083 qpair failed and we were unable to recover it. 00:37:30.083 [2024-09-29 16:45:30.293194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.083 [2024-09-29 16:45:30.293230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.083 qpair failed and we were unable to recover it. 
00:37:30.083 [2024-09-29 16:45:30.293334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.083 [2024-09-29 16:45:30.293368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.083 qpair failed and we were unable to recover it. 00:37:30.083 [2024-09-29 16:45:30.293515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.083 [2024-09-29 16:45:30.293549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.083 qpair failed and we were unable to recover it. 00:37:30.083 [2024-09-29 16:45:30.293701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.083 [2024-09-29 16:45:30.293738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.083 qpair failed and we were unable to recover it. 00:37:30.083 [2024-09-29 16:45:30.293880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.084 [2024-09-29 16:45:30.293916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.084 qpair failed and we were unable to recover it. 00:37:30.084 [2024-09-29 16:45:30.294065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.084 [2024-09-29 16:45:30.294099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.084 qpair failed and we were unable to recover it. 
00:37:30.084 [2024-09-29 16:45:30.294246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.084 [2024-09-29 16:45:30.294280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.084 qpair failed and we were unable to recover it. 00:37:30.084 [2024-09-29 16:45:30.294416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.084 [2024-09-29 16:45:30.294450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.084 qpair failed and we were unable to recover it. 00:37:30.084 [2024-09-29 16:45:30.294597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.084 [2024-09-29 16:45:30.294642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.084 qpair failed and we were unable to recover it. 00:37:30.084 [2024-09-29 16:45:30.294764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.084 [2024-09-29 16:45:30.294798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.084 qpair failed and we were unable to recover it. 00:37:30.084 [2024-09-29 16:45:30.294919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.084 [2024-09-29 16:45:30.294964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.084 qpair failed and we were unable to recover it. 
00:37:30.084 [2024-09-29 16:45:30.295093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.084 [2024-09-29 16:45:30.295140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.084 qpair failed and we were unable to recover it. 00:37:30.084 [2024-09-29 16:45:30.295269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.084 [2024-09-29 16:45:30.295306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.084 qpair failed and we were unable to recover it. 00:37:30.084 [2024-09-29 16:45:30.295479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.084 [2024-09-29 16:45:30.295513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.084 qpair failed and we were unable to recover it. 00:37:30.084 [2024-09-29 16:45:30.295656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.084 [2024-09-29 16:45:30.295698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.084 qpair failed and we were unable to recover it. 00:37:30.084 [2024-09-29 16:45:30.295855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.084 [2024-09-29 16:45:30.295890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.084 qpair failed and we were unable to recover it. 
00:37:30.084 [2024-09-29 16:45:30.296037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.084 [2024-09-29 16:45:30.296070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.084 qpair failed and we were unable to recover it. 00:37:30.084 [2024-09-29 16:45:30.296241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.084 [2024-09-29 16:45:30.296275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.084 qpair failed and we were unable to recover it. 00:37:30.084 [2024-09-29 16:45:30.296379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.084 [2024-09-29 16:45:30.296413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.084 qpair failed and we were unable to recover it. 00:37:30.084 [2024-09-29 16:45:30.296564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.084 [2024-09-29 16:45:30.296601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.084 qpair failed and we were unable to recover it. 00:37:30.084 [2024-09-29 16:45:30.296754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.084 [2024-09-29 16:45:30.296789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.084 qpair failed and we were unable to recover it. 
00:37:30.084 [2024-09-29 16:45:30.296909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.084 [2024-09-29 16:45:30.296942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.084 qpair failed and we were unable to recover it. 00:37:30.084 [2024-09-29 16:45:30.297099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.084 [2024-09-29 16:45:30.297132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.084 qpair failed and we were unable to recover it. 00:37:30.084 [2024-09-29 16:45:30.297274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.084 [2024-09-29 16:45:30.297307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.084 qpair failed and we were unable to recover it. 00:37:30.084 [2024-09-29 16:45:30.297442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.084 [2024-09-29 16:45:30.297476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.084 qpair failed and we were unable to recover it. 00:37:30.084 [2024-09-29 16:45:30.297638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.084 [2024-09-29 16:45:30.297703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.084 qpair failed and we were unable to recover it. 
00:37:30.084 [2024-09-29 16:45:30.297886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.084 [2024-09-29 16:45:30.297921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.084 qpair failed and we were unable to recover it. 00:37:30.084 [2024-09-29 16:45:30.298084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.084 [2024-09-29 16:45:30.298118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.084 qpair failed and we were unable to recover it. 00:37:30.084 [2024-09-29 16:45:30.298285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.084 [2024-09-29 16:45:30.298319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.084 qpair failed and we were unable to recover it. 00:37:30.084 [2024-09-29 16:45:30.298461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.084 [2024-09-29 16:45:30.298495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.084 qpair failed and we were unable to recover it. 00:37:30.084 [2024-09-29 16:45:30.298610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.084 [2024-09-29 16:45:30.298644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.084 qpair failed and we were unable to recover it. 
00:37:30.084 [2024-09-29 16:45:30.298835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.084 [2024-09-29 16:45:30.298869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.084 qpair failed and we were unable to recover it. 00:37:30.084 [2024-09-29 16:45:30.298994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.084 [2024-09-29 16:45:30.299036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.084 qpair failed and we were unable to recover it. 00:37:30.084 [2024-09-29 16:45:30.299180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.084 [2024-09-29 16:45:30.299214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.084 qpair failed and we were unable to recover it. 00:37:30.084 [2024-09-29 16:45:30.299352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.084 [2024-09-29 16:45:30.299387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.084 qpair failed and we were unable to recover it. 00:37:30.084 [2024-09-29 16:45:30.299544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.084 [2024-09-29 16:45:30.299592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.084 qpair failed and we were unable to recover it. 
00:37:30.084 [2024-09-29 16:45:30.299772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.084 [2024-09-29 16:45:30.299810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.084 qpair failed and we were unable to recover it. 00:37:30.084 [2024-09-29 16:45:30.299958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.084 [2024-09-29 16:45:30.299992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.084 qpair failed and we were unable to recover it. 00:37:30.084 [2024-09-29 16:45:30.300158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.084 [2024-09-29 16:45:30.300197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.084 qpair failed and we were unable to recover it. 00:37:30.084 [2024-09-29 16:45:30.300314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.084 [2024-09-29 16:45:30.300348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.084 qpair failed and we were unable to recover it. 00:37:30.084 [2024-09-29 16:45:30.300524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.084 [2024-09-29 16:45:30.300560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.084 qpair failed and we were unable to recover it. 
00:37:30.084 [2024-09-29 16:45:30.300719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.084 [2024-09-29 16:45:30.300754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.084 qpair failed and we were unable to recover it. 00:37:30.084 [2024-09-29 16:45:30.300925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.085 [2024-09-29 16:45:30.300958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.085 qpair failed and we were unable to recover it. 00:37:30.085 [2024-09-29 16:45:30.301109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.085 [2024-09-29 16:45:30.301141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.085 qpair failed and we were unable to recover it. 00:37:30.085 [2024-09-29 16:45:30.301286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.085 [2024-09-29 16:45:30.301318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.085 qpair failed and we were unable to recover it. 00:37:30.085 [2024-09-29 16:45:30.301439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.085 [2024-09-29 16:45:30.301472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.085 qpair failed and we were unable to recover it. 
00:37:30.085 [2024-09-29 16:45:30.301618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.085 [2024-09-29 16:45:30.301654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.085 qpair failed and we were unable to recover it. 00:37:30.085 [2024-09-29 16:45:30.301837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.085 [2024-09-29 16:45:30.301871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.085 qpair failed and we were unable to recover it. 00:37:30.085 [2024-09-29 16:45:30.302007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.085 [2024-09-29 16:45:30.302041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.085 qpair failed and we were unable to recover it. 00:37:30.085 [2024-09-29 16:45:30.302185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.085 [2024-09-29 16:45:30.302219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.085 qpair failed and we were unable to recover it. 00:37:30.085 [2024-09-29 16:45:30.302351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.085 [2024-09-29 16:45:30.302399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.085 qpair failed and we were unable to recover it. 
00:37:30.085 [2024-09-29 16:45:30.302556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.085 [2024-09-29 16:45:30.302592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.085 qpair failed and we were unable to recover it. 00:37:30.085 [2024-09-29 16:45:30.302742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.085 [2024-09-29 16:45:30.302778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.085 qpair failed and we were unable to recover it. 00:37:30.085 [2024-09-29 16:45:30.302917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.085 [2024-09-29 16:45:30.302950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.085 qpair failed and we were unable to recover it. 00:37:30.085 [2024-09-29 16:45:30.303106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.085 [2024-09-29 16:45:30.303139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.085 qpair failed and we were unable to recover it. 00:37:30.085 [2024-09-29 16:45:30.303279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.085 [2024-09-29 16:45:30.303313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.085 qpair failed and we were unable to recover it. 
00:37:30.085 [2024-09-29 16:45:30.303452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.085 [2024-09-29 16:45:30.303485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.085 qpair failed and we were unable to recover it. 00:37:30.085 [2024-09-29 16:45:30.303620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.085 [2024-09-29 16:45:30.303662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.085 qpair failed and we were unable to recover it. 00:37:30.085 [2024-09-29 16:45:30.303799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.085 [2024-09-29 16:45:30.303832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.085 qpair failed and we were unable to recover it. 00:37:30.085 [2024-09-29 16:45:30.303984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.085 [2024-09-29 16:45:30.304019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.085 qpair failed and we were unable to recover it. 00:37:30.085 [2024-09-29 16:45:30.304139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.085 [2024-09-29 16:45:30.304174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.085 qpair failed and we were unable to recover it. 
00:37:30.085 [2024-09-29 16:45:30.304292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.085 [2024-09-29 16:45:30.304326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.085 qpair failed and we were unable to recover it. 00:37:30.085 [2024-09-29 16:45:30.304471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.085 [2024-09-29 16:45:30.304505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.085 qpair failed and we were unable to recover it. 00:37:30.085 [2024-09-29 16:45:30.304647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.085 [2024-09-29 16:45:30.304700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.085 qpair failed and we were unable to recover it. 00:37:30.085 [2024-09-29 16:45:30.304857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.085 [2024-09-29 16:45:30.304906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.085 qpair failed and we were unable to recover it. 00:37:30.085 [2024-09-29 16:45:30.305042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.085 [2024-09-29 16:45:30.305076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.085 qpair failed and we were unable to recover it. 
00:37:30.085 [2024-09-29 16:45:30.305230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.085 [2024-09-29 16:45:30.305264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.085 qpair failed and we were unable to recover it. 00:37:30.085 [2024-09-29 16:45:30.305378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.085 [2024-09-29 16:45:30.305410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.085 qpair failed and we were unable to recover it. 00:37:30.085 [2024-09-29 16:45:30.305574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.085 [2024-09-29 16:45:30.305607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.085 qpair failed and we were unable to recover it. 00:37:30.085 [2024-09-29 16:45:30.305758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.085 [2024-09-29 16:45:30.305793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.085 qpair failed and we were unable to recover it. 00:37:30.085 [2024-09-29 16:45:30.305913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.085 [2024-09-29 16:45:30.305945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.085 qpair failed and we were unable to recover it. 
00:37:30.085 [2024-09-29 16:45:30.306065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.085 [2024-09-29 16:45:30.306098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.085 qpair failed and we were unable to recover it. 00:37:30.085 [2024-09-29 16:45:30.306214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.085 [2024-09-29 16:45:30.306246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.085 qpair failed and we were unable to recover it. 00:37:30.085 [2024-09-29 16:45:30.306401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.085 [2024-09-29 16:45:30.306438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.085 qpair failed and we were unable to recover it. 00:37:30.085 [2024-09-29 16:45:30.306559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.085 [2024-09-29 16:45:30.306597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.085 qpair failed and we were unable to recover it. 00:37:30.085 [2024-09-29 16:45:30.306783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.086 [2024-09-29 16:45:30.306819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.086 qpair failed and we were unable to recover it. 
00:37:30.086 [2024-09-29 16:45:30.306937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.086 [2024-09-29 16:45:30.306983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.086 qpair failed and we were unable to recover it. 00:37:30.086 [2024-09-29 16:45:30.307100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.086 [2024-09-29 16:45:30.307134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.086 qpair failed and we were unable to recover it. 00:37:30.086 [2024-09-29 16:45:30.307281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.086 [2024-09-29 16:45:30.307320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.086 qpair failed and we were unable to recover it. 00:37:30.086 [2024-09-29 16:45:30.307435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.086 [2024-09-29 16:45:30.307470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.086 qpair failed and we were unable to recover it. 00:37:30.086 [2024-09-29 16:45:30.307599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.086 [2024-09-29 16:45:30.307635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.086 qpair failed and we were unable to recover it. 
00:37:30.086 [2024-09-29 16:45:30.307771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.086 [2024-09-29 16:45:30.307805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.086 qpair failed and we were unable to recover it.
00:37:30.086 [2024-09-29 16:45:30.307947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.086 [2024-09-29 16:45:30.307992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.086 qpair failed and we were unable to recover it.
00:37:30.086 [2024-09-29 16:45:30.308133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.086 [2024-09-29 16:45:30.308167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.086 qpair failed and we were unable to recover it.
00:37:30.086 [2024-09-29 16:45:30.308314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.086 [2024-09-29 16:45:30.308347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.086 qpair failed and we were unable to recover it.
00:37:30.086 [2024-09-29 16:45:30.308521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.086 [2024-09-29 16:45:30.308555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.086 qpair failed and we were unable to recover it.
00:37:30.086 [2024-09-29 16:45:30.308699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.086 [2024-09-29 16:45:30.308747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.086 qpair failed and we were unable to recover it.
00:37:30.086 [2024-09-29 16:45:30.308899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.086 [2024-09-29 16:45:30.308935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.086 qpair failed and we were unable to recover it.
00:37:30.086 [2024-09-29 16:45:30.309100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.086 [2024-09-29 16:45:30.309136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.086 qpair failed and we were unable to recover it.
00:37:30.086 [2024-09-29 16:45:30.309308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.086 [2024-09-29 16:45:30.309355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.086 qpair failed and we were unable to recover it.
00:37:30.086 [2024-09-29 16:45:30.309474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.086 [2024-09-29 16:45:30.309508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.086 qpair failed and we were unable to recover it.
00:37:30.086 [2024-09-29 16:45:30.309635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.086 [2024-09-29 16:45:30.309681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.086 qpair failed and we were unable to recover it.
00:37:30.086 [2024-09-29 16:45:30.309839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.086 [2024-09-29 16:45:30.309874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.086 qpair failed and we were unable to recover it.
00:37:30.086 [2024-09-29 16:45:30.309994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.086 [2024-09-29 16:45:30.310028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.086 qpair failed and we were unable to recover it.
00:37:30.086 [2024-09-29 16:45:30.310178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.086 [2024-09-29 16:45:30.310212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.086 qpair failed and we were unable to recover it.
00:37:30.086 [2024-09-29 16:45:30.310346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.086 [2024-09-29 16:45:30.310395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.086 qpair failed and we were unable to recover it.
00:37:30.086 [2024-09-29 16:45:30.310524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.086 [2024-09-29 16:45:30.310562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.086 qpair failed and we were unable to recover it.
00:37:30.086 [2024-09-29 16:45:30.310697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.086 [2024-09-29 16:45:30.310731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.086 qpair failed and we were unable to recover it.
00:37:30.086 [2024-09-29 16:45:30.310855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.086 [2024-09-29 16:45:30.310889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.086 qpair failed and we were unable to recover it.
00:37:30.086 [2024-09-29 16:45:30.311992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.086 [2024-09-29 16:45:30.312058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.086 qpair failed and we were unable to recover it.
00:37:30.086 [2024-09-29 16:45:30.312258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.086 [2024-09-29 16:45:30.312294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.086 qpair failed and we were unable to recover it.
00:37:30.086 [2024-09-29 16:45:30.312469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.086 [2024-09-29 16:45:30.312504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.086 qpair failed and we were unable to recover it.
00:37:30.086 [2024-09-29 16:45:30.312643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.086 [2024-09-29 16:45:30.312700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.086 qpair failed and we were unable to recover it.
00:37:30.086 [2024-09-29 16:45:30.312822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.086 [2024-09-29 16:45:30.312856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.086 qpair failed and we were unable to recover it.
00:37:30.086 [2024-09-29 16:45:30.312970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.086 [2024-09-29 16:45:30.313004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.086 qpair failed and we were unable to recover it.
00:37:30.086 [2024-09-29 16:45:30.313134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.086 [2024-09-29 16:45:30.313167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.086 qpair failed and we were unable to recover it.
00:37:30.086 [2024-09-29 16:45:30.313355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.086 [2024-09-29 16:45:30.313403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.086 qpair failed and we were unable to recover it.
00:37:30.086 [2024-09-29 16:45:30.313536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.086 [2024-09-29 16:45:30.313583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.086 qpair failed and we were unable to recover it.
00:37:30.086 [2024-09-29 16:45:30.313722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.086 [2024-09-29 16:45:30.313761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.086 qpair failed and we were unable to recover it.
00:37:30.086 [2024-09-29 16:45:30.313886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.086 [2024-09-29 16:45:30.313927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.086 qpair failed and we were unable to recover it.
00:37:30.086 [2024-09-29 16:45:30.314079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.086 [2024-09-29 16:45:30.314113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.086 qpair failed and we were unable to recover it.
00:37:30.086 [2024-09-29 16:45:30.314242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.086 [2024-09-29 16:45:30.314276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.086 qpair failed and we were unable to recover it.
00:37:30.086 [2024-09-29 16:45:30.314487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.086 [2024-09-29 16:45:30.314522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.086 qpair failed and we were unable to recover it.
00:37:30.086 [2024-09-29 16:45:30.314664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.087 [2024-09-29 16:45:30.314716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.087 qpair failed and we were unable to recover it.
00:37:30.087 [2024-09-29 16:45:30.314836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.087 [2024-09-29 16:45:30.314872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.087 qpair failed and we were unable to recover it.
00:37:30.087 [2024-09-29 16:45:30.316364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.087 [2024-09-29 16:45:30.316401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.087 qpair failed and we were unable to recover it.
00:37:30.087 [2024-09-29 16:45:30.316598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.087 [2024-09-29 16:45:30.316633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.087 qpair failed and we were unable to recover it.
00:37:30.087 [2024-09-29 16:45:30.316840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.087 [2024-09-29 16:45:30.316902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:30.087 qpair failed and we were unable to recover it.
00:37:30.087 [2024-09-29 16:45:30.317073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.087 [2024-09-29 16:45:30.317127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.087 qpair failed and we were unable to recover it.
00:37:30.087 [2024-09-29 16:45:30.317307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.087 [2024-09-29 16:45:30.317343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.087 qpair failed and we were unable to recover it.
00:37:30.087 [2024-09-29 16:45:30.317456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.087 [2024-09-29 16:45:30.317490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.087 qpair failed and we were unable to recover it.
00:37:30.087 [2024-09-29 16:45:30.317645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.087 [2024-09-29 16:45:30.317700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.087 qpair failed and we were unable to recover it.
00:37:30.087 [2024-09-29 16:45:30.317836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.087 [2024-09-29 16:45:30.317884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.087 qpair failed and we were unable to recover it.
00:37:30.087 [2024-09-29 16:45:30.318019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.087 [2024-09-29 16:45:30.318056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.087 qpair failed and we were unable to recover it.
00:37:30.087 [2024-09-29 16:45:30.318178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.087 [2024-09-29 16:45:30.318214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.087 qpair failed and we were unable to recover it.
00:37:30.087 [2024-09-29 16:45:30.318343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.087 [2024-09-29 16:45:30.318379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.087 qpair failed and we were unable to recover it.
00:37:30.087 [2024-09-29 16:45:30.318516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.087 [2024-09-29 16:45:30.318550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.087 qpair failed and we were unable to recover it.
00:37:30.087 [2024-09-29 16:45:30.318680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.087 [2024-09-29 16:45:30.318715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.087 qpair failed and we were unable to recover it.
00:37:30.087 [2024-09-29 16:45:30.318834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.087 [2024-09-29 16:45:30.318871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.087 qpair failed and we were unable to recover it.
00:37:30.087 [2024-09-29 16:45:30.319022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.087 [2024-09-29 16:45:30.319060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.087 qpair failed and we were unable to recover it.
00:37:30.087 [2024-09-29 16:45:30.319179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.087 [2024-09-29 16:45:30.319214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.087 qpair failed and we were unable to recover it.
00:37:30.087 [2024-09-29 16:45:30.319360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.087 [2024-09-29 16:45:30.319393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.087 qpair failed and we were unable to recover it.
00:37:30.087 [2024-09-29 16:45:30.319517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.087 [2024-09-29 16:45:30.319551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.087 qpair failed and we were unable to recover it.
00:37:30.087 [2024-09-29 16:45:30.319700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.087 [2024-09-29 16:45:30.319734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.087 qpair failed and we were unable to recover it.
00:37:30.087 [2024-09-29 16:45:30.319843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.087 [2024-09-29 16:45:30.319875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.087 qpair failed and we were unable to recover it.
00:37:30.087 [2024-09-29 16:45:30.320004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.087 [2024-09-29 16:45:30.320037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.087 qpair failed and we were unable to recover it.
00:37:30.087 [2024-09-29 16:45:30.320184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.087 [2024-09-29 16:45:30.320216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.087 qpair failed and we were unable to recover it.
00:37:30.087 [2024-09-29 16:45:30.320325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.087 [2024-09-29 16:45:30.320357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.087 qpair failed and we were unable to recover it.
00:37:30.087 [2024-09-29 16:45:30.320504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.087 [2024-09-29 16:45:30.320538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.087 qpair failed and we were unable to recover it.
00:37:30.087 [2024-09-29 16:45:30.320693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.087 [2024-09-29 16:45:30.320736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.087 qpair failed and we were unable to recover it.
00:37:30.087 [2024-09-29 16:45:30.320845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.087 [2024-09-29 16:45:30.320879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.087 qpair failed and we were unable to recover it.
00:37:30.087 [2024-09-29 16:45:30.321033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.087 [2024-09-29 16:45:30.321066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.087 qpair failed and we were unable to recover it.
00:37:30.087 [2024-09-29 16:45:30.321195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.087 [2024-09-29 16:45:30.321229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.087 qpair failed and we were unable to recover it.
00:37:30.087 [2024-09-29 16:45:30.321379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.087 [2024-09-29 16:45:30.321412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.087 qpair failed and we were unable to recover it.
00:37:30.087 [2024-09-29 16:45:30.321578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.087 [2024-09-29 16:45:30.321611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.087 qpair failed and we were unable to recover it.
00:37:30.087 [2024-09-29 16:45:30.321785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.087 [2024-09-29 16:45:30.321824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.087 qpair failed and we were unable to recover it.
00:37:30.087 [2024-09-29 16:45:30.321974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.087 [2024-09-29 16:45:30.322007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.087 qpair failed and we were unable to recover it.
00:37:30.087 [2024-09-29 16:45:30.322193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.087 [2024-09-29 16:45:30.322228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.087 qpair failed and we were unable to recover it.
00:37:30.087 [2024-09-29 16:45:30.322404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.087 [2024-09-29 16:45:30.322438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.087 qpair failed and we were unable to recover it.
00:37:30.087 [2024-09-29 16:45:30.322555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.087 [2024-09-29 16:45:30.322587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.087 qpair failed and we were unable to recover it.
00:37:30.087 [2024-09-29 16:45:30.322710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.087 [2024-09-29 16:45:30.322743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.087 qpair failed and we were unable to recover it.
00:37:30.087 [2024-09-29 16:45:30.322867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.087 [2024-09-29 16:45:30.322902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.088 qpair failed and we were unable to recover it.
00:37:30.088 [2024-09-29 16:45:30.323027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.088 [2024-09-29 16:45:30.323061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.088 qpair failed and we were unable to recover it.
00:37:30.088 [2024-09-29 16:45:30.323200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.088 [2024-09-29 16:45:30.323234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.088 qpair failed and we were unable to recover it.
00:37:30.088 [2024-09-29 16:45:30.323378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.088 [2024-09-29 16:45:30.323412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.088 qpair failed and we were unable to recover it.
00:37:30.088 [2024-09-29 16:45:30.323582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.088 [2024-09-29 16:45:30.323616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.088 qpair failed and we were unable to recover it.
00:37:30.088 [2024-09-29 16:45:30.323777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.088 [2024-09-29 16:45:30.323811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.088 qpair failed and we were unable to recover it.
00:37:30.088 [2024-09-29 16:45:30.323935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.088 [2024-09-29 16:45:30.323971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.088 qpair failed and we were unable to recover it.
00:37:30.088 [2024-09-29 16:45:30.324117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.088 [2024-09-29 16:45:30.324156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.088 qpair failed and we were unable to recover it.
00:37:30.088 [2024-09-29 16:45:30.324263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.088 [2024-09-29 16:45:30.324295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.088 qpair failed and we were unable to recover it.
00:37:30.088 [2024-09-29 16:45:30.324419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.088 [2024-09-29 16:45:30.324452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.088 qpair failed and we were unable to recover it.
00:37:30.088 [2024-09-29 16:45:30.324564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.088 [2024-09-29 16:45:30.324597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.088 qpair failed and we were unable to recover it.
00:37:30.088 [2024-09-29 16:45:30.324756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.088 [2024-09-29 16:45:30.324789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.088 qpair failed and we were unable to recover it.
00:37:30.088 [2024-09-29 16:45:30.324937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.088 [2024-09-29 16:45:30.324975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.088 qpair failed and we were unable to recover it.
00:37:30.088 [2024-09-29 16:45:30.325088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.088 [2024-09-29 16:45:30.325121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.088 qpair failed and we were unable to recover it.
00:37:30.088 [2024-09-29 16:45:30.325258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.088 [2024-09-29 16:45:30.325304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.088 qpair failed and we were unable to recover it.
00:37:30.088 [2024-09-29 16:45:30.325460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.088 [2024-09-29 16:45:30.325499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.088 qpair failed and we were unable to recover it.
00:37:30.088 [2024-09-29 16:45:30.325642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.088 [2024-09-29 16:45:30.325693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.088 qpair failed and we were unable to recover it.
00:37:30.088 [2024-09-29 16:45:30.325812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.088 [2024-09-29 16:45:30.325847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.088 qpair failed and we were unable to recover it.
00:37:30.088 [2024-09-29 16:45:30.325974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.088 [2024-09-29 16:45:30.326009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.088 qpair failed and we were unable to recover it.
00:37:30.088 [2024-09-29 16:45:30.326154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.088 [2024-09-29 16:45:30.326189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.088 qpair failed and we were unable to recover it.
00:37:30.088 [2024-09-29 16:45:30.326309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.088 [2024-09-29 16:45:30.326343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.088 qpair failed and we were unable to recover it.
00:37:30.088 [2024-09-29 16:45:30.326493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.088 [2024-09-29 16:45:30.326527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.088 qpair failed and we were unable to recover it.
00:37:30.088 [2024-09-29 16:45:30.326666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.088 [2024-09-29 16:45:30.326708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.088 qpair failed and we were unable to recover it.
00:37:30.088 [2024-09-29 16:45:30.326825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.088 [2024-09-29 16:45:30.326858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.088 qpair failed and we were unable to recover it.
00:37:30.088 [2024-09-29 16:45:30.326962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.088 [2024-09-29 16:45:30.326995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.088 qpair failed and we were unable to recover it.
00:37:30.088 [2024-09-29 16:45:30.327110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.088 [2024-09-29 16:45:30.327147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.088 qpair failed and we were unable to recover it. 00:37:30.088 [2024-09-29 16:45:30.327267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.088 [2024-09-29 16:45:30.327306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.088 qpair failed and we were unable to recover it. 00:37:30.088 [2024-09-29 16:45:30.327409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.088 [2024-09-29 16:45:30.327441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.088 qpair failed and we were unable to recover it. 00:37:30.088 [2024-09-29 16:45:30.327607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.088 [2024-09-29 16:45:30.327641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.088 qpair failed and we were unable to recover it. 00:37:30.088 [2024-09-29 16:45:30.327804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.088 [2024-09-29 16:45:30.327860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.088 qpair failed and we were unable to recover it. 
00:37:30.088 [2024-09-29 16:45:30.328072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.088 [2024-09-29 16:45:30.328120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.088 qpair failed and we were unable to recover it. 00:37:30.088 [2024-09-29 16:45:30.328277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.088 [2024-09-29 16:45:30.328313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.088 qpair failed and we were unable to recover it. 00:37:30.088 [2024-09-29 16:45:30.328500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.088 [2024-09-29 16:45:30.328535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.088 qpair failed and we were unable to recover it. 00:37:30.088 [2024-09-29 16:45:30.328657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.088 [2024-09-29 16:45:30.328697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.088 qpair failed and we were unable to recover it. 00:37:30.088 [2024-09-29 16:45:30.328853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.088 [2024-09-29 16:45:30.328893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.088 qpair failed and we were unable to recover it. 
00:37:30.088 [2024-09-29 16:45:30.329047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.088 [2024-09-29 16:45:30.329082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.088 qpair failed and we were unable to recover it. 00:37:30.088 [2024-09-29 16:45:30.329230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.088 [2024-09-29 16:45:30.329265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.088 qpair failed and we were unable to recover it. 00:37:30.088 [2024-09-29 16:45:30.329408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.088 [2024-09-29 16:45:30.329443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.088 qpair failed and we were unable to recover it. 00:37:30.088 [2024-09-29 16:45:30.329583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.088 [2024-09-29 16:45:30.329617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.088 qpair failed and we were unable to recover it. 00:37:30.088 [2024-09-29 16:45:30.329777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.089 [2024-09-29 16:45:30.329813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.089 qpair failed and we were unable to recover it. 
00:37:30.089 [2024-09-29 16:45:30.329956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.089 [2024-09-29 16:45:30.329999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.089 qpair failed and we were unable to recover it. 00:37:30.089 [2024-09-29 16:45:30.330197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.089 [2024-09-29 16:45:30.330231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.089 qpair failed and we were unable to recover it. 00:37:30.089 [2024-09-29 16:45:30.330348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.089 [2024-09-29 16:45:30.330383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.089 qpair failed and we were unable to recover it. 00:37:30.089 [2024-09-29 16:45:30.330546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.089 [2024-09-29 16:45:30.330581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.089 qpair failed and we were unable to recover it. 00:37:30.089 [2024-09-29 16:45:30.330753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.089 [2024-09-29 16:45:30.330788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.089 qpair failed and we were unable to recover it. 
00:37:30.089 [2024-09-29 16:45:30.330900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.089 [2024-09-29 16:45:30.330935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.089 qpair failed and we were unable to recover it. 00:37:30.089 [2024-09-29 16:45:30.331075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.089 [2024-09-29 16:45:30.331109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.089 qpair failed and we were unable to recover it. 00:37:30.089 [2024-09-29 16:45:30.331257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.089 [2024-09-29 16:45:30.331295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.089 qpair failed and we were unable to recover it. 00:37:30.089 [2024-09-29 16:45:30.331423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.089 [2024-09-29 16:45:30.331457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.089 qpair failed and we were unable to recover it. 00:37:30.089 [2024-09-29 16:45:30.331578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.089 [2024-09-29 16:45:30.331611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.089 qpair failed and we were unable to recover it. 
00:37:30.089 [2024-09-29 16:45:30.331731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.089 [2024-09-29 16:45:30.331765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.089 qpair failed and we were unable to recover it. 00:37:30.089 [2024-09-29 16:45:30.331914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.089 [2024-09-29 16:45:30.331948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.089 qpair failed and we were unable to recover it. 00:37:30.089 [2024-09-29 16:45:30.332073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.089 [2024-09-29 16:45:30.332108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.089 qpair failed and we were unable to recover it. 00:37:30.089 [2024-09-29 16:45:30.332231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.089 [2024-09-29 16:45:30.332266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.089 qpair failed and we were unable to recover it. 00:37:30.089 [2024-09-29 16:45:30.332384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.089 [2024-09-29 16:45:30.332419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.089 qpair failed and we were unable to recover it. 
00:37:30.089 [2024-09-29 16:45:30.332543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.089 [2024-09-29 16:45:30.332577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.089 qpair failed and we were unable to recover it. 00:37:30.089 [2024-09-29 16:45:30.332708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.089 [2024-09-29 16:45:30.332742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.089 qpair failed and we were unable to recover it. 00:37:30.089 [2024-09-29 16:45:30.332908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.089 [2024-09-29 16:45:30.332942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.089 qpair failed and we were unable to recover it. 00:37:30.089 [2024-09-29 16:45:30.333074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.089 [2024-09-29 16:45:30.333117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.089 qpair failed and we were unable to recover it. 00:37:30.089 [2024-09-29 16:45:30.333256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.089 [2024-09-29 16:45:30.333305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.089 qpair failed and we were unable to recover it. 
00:37:30.089 [2024-09-29 16:45:30.333465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.089 [2024-09-29 16:45:30.333513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.089 qpair failed and we were unable to recover it. 00:37:30.089 [2024-09-29 16:45:30.333693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.089 [2024-09-29 16:45:30.333739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.089 qpair failed and we were unable to recover it. 00:37:30.089 [2024-09-29 16:45:30.333921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.089 [2024-09-29 16:45:30.333975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.089 qpair failed and we were unable to recover it. 00:37:30.089 [2024-09-29 16:45:30.334138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.089 [2024-09-29 16:45:30.334197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.089 qpair failed and we were unable to recover it. 00:37:30.089 [2024-09-29 16:45:30.334357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.089 [2024-09-29 16:45:30.334412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.089 qpair failed and we were unable to recover it. 
00:37:30.089 [2024-09-29 16:45:30.334562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.089 [2024-09-29 16:45:30.334602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.089 qpair failed and we were unable to recover it. 00:37:30.089 [2024-09-29 16:45:30.334769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.089 [2024-09-29 16:45:30.334807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.089 qpair failed and we were unable to recover it. 00:37:30.089 [2024-09-29 16:45:30.334926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.089 [2024-09-29 16:45:30.334962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.089 qpair failed and we were unable to recover it. 00:37:30.089 [2024-09-29 16:45:30.335158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.089 [2024-09-29 16:45:30.335200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.089 qpair failed and we were unable to recover it. 00:37:30.089 [2024-09-29 16:45:30.335355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.089 [2024-09-29 16:45:30.335392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.089 qpair failed and we were unable to recover it. 
00:37:30.089 [2024-09-29 16:45:30.335547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.089 [2024-09-29 16:45:30.335596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.089 qpair failed and we were unable to recover it. 00:37:30.089 [2024-09-29 16:45:30.335766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.089 [2024-09-29 16:45:30.335815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.089 qpair failed and we were unable to recover it. 00:37:30.089 [2024-09-29 16:45:30.335984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.089 [2024-09-29 16:45:30.336032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.089 qpair failed and we were unable to recover it. 00:37:30.089 [2024-09-29 16:45:30.336214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.089 [2024-09-29 16:45:30.336255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.090 qpair failed and we were unable to recover it. 00:37:30.090 [2024-09-29 16:45:30.336468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.090 [2024-09-29 16:45:30.336525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.090 qpair failed and we were unable to recover it. 
00:37:30.090 [2024-09-29 16:45:30.336695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.090 [2024-09-29 16:45:30.336744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.090 qpair failed and we were unable to recover it. 00:37:30.090 [2024-09-29 16:45:30.336889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.090 [2024-09-29 16:45:30.336926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.090 qpair failed and we were unable to recover it. 00:37:30.090 [2024-09-29 16:45:30.337086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.090 [2024-09-29 16:45:30.337121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.090 qpair failed and we were unable to recover it. 00:37:30.090 [2024-09-29 16:45:30.337258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.090 [2024-09-29 16:45:30.337293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.090 qpair failed and we were unable to recover it. 00:37:30.090 [2024-09-29 16:45:30.337422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.090 [2024-09-29 16:45:30.337461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.090 qpair failed and we were unable to recover it. 
00:37:30.090 [2024-09-29 16:45:30.337606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.090 [2024-09-29 16:45:30.337641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.090 qpair failed and we were unable to recover it. 00:37:30.090 [2024-09-29 16:45:30.337818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.090 [2024-09-29 16:45:30.337870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.090 qpair failed and we were unable to recover it. 00:37:30.090 [2024-09-29 16:45:30.338029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.090 [2024-09-29 16:45:30.338079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.090 qpair failed and we were unable to recover it. 00:37:30.090 [2024-09-29 16:45:30.338256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.090 [2024-09-29 16:45:30.338309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.090 qpair failed and we were unable to recover it. 00:37:30.090 [2024-09-29 16:45:30.338440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.090 [2024-09-29 16:45:30.338476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.090 qpair failed and we were unable to recover it. 
00:37:30.090 [2024-09-29 16:45:30.338621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.090 [2024-09-29 16:45:30.338666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.090 qpair failed and we were unable to recover it. 00:37:30.090 [2024-09-29 16:45:30.338827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.090 [2024-09-29 16:45:30.338862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.090 qpair failed and we were unable to recover it. 00:37:30.090 [2024-09-29 16:45:30.338985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.090 [2024-09-29 16:45:30.339025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.090 qpair failed and we were unable to recover it. 00:37:30.090 [2024-09-29 16:45:30.339158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.090 [2024-09-29 16:45:30.339193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.090 qpair failed and we were unable to recover it. 00:37:30.090 [2024-09-29 16:45:30.339315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.090 [2024-09-29 16:45:30.339350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.090 qpair failed and we were unable to recover it. 
00:37:30.090 [2024-09-29 16:45:30.339521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.090 [2024-09-29 16:45:30.339555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.090 qpair failed and we were unable to recover it. 00:37:30.090 [2024-09-29 16:45:30.339710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.090 [2024-09-29 16:45:30.339746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.090 qpair failed and we were unable to recover it. 00:37:30.090 [2024-09-29 16:45:30.339862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.090 [2024-09-29 16:45:30.339896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.090 qpair failed and we were unable to recover it. 00:37:30.090 [2024-09-29 16:45:30.340030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.090 [2024-09-29 16:45:30.340064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.090 qpair failed and we were unable to recover it. 00:37:30.090 [2024-09-29 16:45:30.340169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.090 [2024-09-29 16:45:30.340203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.090 qpair failed and we were unable to recover it. 
00:37:30.090 [2024-09-29 16:45:30.340356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.090 [2024-09-29 16:45:30.340390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.090 qpair failed and we were unable to recover it. 00:37:30.090 [2024-09-29 16:45:30.340511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.090 [2024-09-29 16:45:30.340545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.090 qpair failed and we were unable to recover it. 00:37:30.090 [2024-09-29 16:45:30.340667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.090 [2024-09-29 16:45:30.340722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.090 qpair failed and we were unable to recover it. 00:37:30.090 [2024-09-29 16:45:30.340829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.090 [2024-09-29 16:45:30.340863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.090 qpair failed and we were unable to recover it. 00:37:30.090 [2024-09-29 16:45:30.341017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.090 [2024-09-29 16:45:30.341060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.090 qpair failed and we were unable to recover it. 
00:37:30.090 [2024-09-29 16:45:30.341192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.090 [2024-09-29 16:45:30.341226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.090 qpair failed and we were unable to recover it. 00:37:30.090 [2024-09-29 16:45:30.341370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.090 [2024-09-29 16:45:30.341405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.090 qpair failed and we were unable to recover it. 00:37:30.090 [2024-09-29 16:45:30.341549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.090 [2024-09-29 16:45:30.341584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.090 qpair failed and we were unable to recover it. 00:37:30.090 [2024-09-29 16:45:30.341744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.090 [2024-09-29 16:45:30.341778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.090 qpair failed and we were unable to recover it. 00:37:30.090 [2024-09-29 16:45:30.341938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.090 [2024-09-29 16:45:30.341979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.090 qpair failed and we were unable to recover it. 
00:37:30.090 [2024-09-29 16:45:30.342098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.090 [2024-09-29 16:45:30.342145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.090 qpair failed and we were unable to recover it. 00:37:30.090 [2024-09-29 16:45:30.342305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.090 [2024-09-29 16:45:30.342350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.090 qpair failed and we were unable to recover it. 00:37:30.090 [2024-09-29 16:45:30.342473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.090 [2024-09-29 16:45:30.342508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.090 qpair failed and we were unable to recover it. 00:37:30.090 [2024-09-29 16:45:30.342989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.090 [2024-09-29 16:45:30.343039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.090 qpair failed and we were unable to recover it. 00:37:30.090 [2024-09-29 16:45:30.343231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.090 [2024-09-29 16:45:30.343267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.090 qpair failed and we were unable to recover it. 
00:37:30.090 [2024-09-29 16:45:30.343381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.090 [2024-09-29 16:45:30.343415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.090 qpair failed and we were unable to recover it. 00:37:30.090 [2024-09-29 16:45:30.343596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.091 [2024-09-29 16:45:30.343630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.091 qpair failed and we were unable to recover it. 00:37:30.091 [2024-09-29 16:45:30.343773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.091 [2024-09-29 16:45:30.343808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.091 qpair failed and we were unable to recover it. 00:37:30.091 [2024-09-29 16:45:30.343925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.091 [2024-09-29 16:45:30.343972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.091 qpair failed and we were unable to recover it. 00:37:30.091 [2024-09-29 16:45:30.344092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.091 [2024-09-29 16:45:30.344139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.091 qpair failed and we were unable to recover it. 
00:37:30.091 [2024-09-29 16:45:30.344260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.091 [2024-09-29 16:45:30.344295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.091 qpair failed and we were unable to recover it. 00:37:30.091 [2024-09-29 16:45:30.344464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.091 [2024-09-29 16:45:30.344511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.091 qpair failed and we were unable to recover it. 00:37:30.091 [2024-09-29 16:45:30.344704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.091 [2024-09-29 16:45:30.344742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.091 qpair failed and we were unable to recover it. 00:37:30.091 [2024-09-29 16:45:30.344870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.091 [2024-09-29 16:45:30.344916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.091 qpair failed and we were unable to recover it. 00:37:30.091 [2024-09-29 16:45:30.345077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.091 [2024-09-29 16:45:30.345111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.091 qpair failed and we were unable to recover it. 
00:37:30.091 [2024-09-29 16:45:30.345255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.091 [2024-09-29 16:45:30.345289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.091 qpair failed and we were unable to recover it.
00:37:30.091 [2024-09-29 16:45:30.345435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.091 [2024-09-29 16:45:30.345468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.091 qpair failed and we were unable to recover it.
00:37:30.091 [2024-09-29 16:45:30.345620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.091 [2024-09-29 16:45:30.345655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.091 qpair failed and we were unable to recover it.
00:37:30.091 [2024-09-29 16:45:30.345789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.091 [2024-09-29 16:45:30.345824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.091 qpair failed and we were unable to recover it.
00:37:30.091 [2024-09-29 16:45:30.345937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.091 [2024-09-29 16:45:30.345976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.091 qpair failed and we were unable to recover it.
00:37:30.091 [2024-09-29 16:45:30.346124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.091 [2024-09-29 16:45:30.346163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.091 qpair failed and we were unable to recover it.
00:37:30.091 [2024-09-29 16:45:30.346278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.091 [2024-09-29 16:45:30.346312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.091 qpair failed and we were unable to recover it.
00:37:30.091 [2024-09-29 16:45:30.346458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.091 [2024-09-29 16:45:30.346492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.091 qpair failed and we were unable to recover it.
00:37:30.091 [2024-09-29 16:45:30.346681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.091 [2024-09-29 16:45:30.346715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.091 qpair failed and we were unable to recover it.
00:37:30.091 [2024-09-29 16:45:30.346839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.091 [2024-09-29 16:45:30.346874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.091 qpair failed and we were unable to recover it.
00:37:30.091 [2024-09-29 16:45:30.346992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.091 [2024-09-29 16:45:30.347026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.091 qpair failed and we were unable to recover it.
00:37:30.091 [2024-09-29 16:45:30.347149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.091 [2024-09-29 16:45:30.347184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.091 qpair failed and we were unable to recover it.
00:37:30.091 [2024-09-29 16:45:30.347334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.091 [2024-09-29 16:45:30.347368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.091 qpair failed and we were unable to recover it.
00:37:30.091 [2024-09-29 16:45:30.347511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.091 [2024-09-29 16:45:30.347544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.091 qpair failed and we were unable to recover it.
00:37:30.091 [2024-09-29 16:45:30.347685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.091 [2024-09-29 16:45:30.347719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.091 qpair failed and we were unable to recover it.
00:37:30.091 [2024-09-29 16:45:30.347860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.091 [2024-09-29 16:45:30.347895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.091 qpair failed and we were unable to recover it.
00:37:30.091 [2024-09-29 16:45:30.348043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.091 [2024-09-29 16:45:30.348076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.091 qpair failed and we were unable to recover it.
00:37:30.091 [2024-09-29 16:45:30.348224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.091 [2024-09-29 16:45:30.348257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.091 qpair failed and we were unable to recover it.
00:37:30.091 [2024-09-29 16:45:30.348378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.091 [2024-09-29 16:45:30.348412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.091 qpair failed and we were unable to recover it.
00:37:30.091 [2024-09-29 16:45:30.348527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.091 [2024-09-29 16:45:30.348562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.091 qpair failed and we were unable to recover it.
00:37:30.091 [2024-09-29 16:45:30.348725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.091 [2024-09-29 16:45:30.348759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.091 qpair failed and we were unable to recover it.
00:37:30.091 [2024-09-29 16:45:30.348899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.091 [2024-09-29 16:45:30.348933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.091 qpair failed and we were unable to recover it.
00:37:30.091 [2024-09-29 16:45:30.349059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.091 [2024-09-29 16:45:30.349092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.091 qpair failed and we were unable to recover it.
00:37:30.091 [2024-09-29 16:45:30.349243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.091 [2024-09-29 16:45:30.349277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.091 [2024-09-29 16:45:30.349251] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:37:30.091 qpair failed and we were unable to recover it.
00:37:30.091 [2024-09-29 16:45:30.349367] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:37:30.091 [2024-09-29 16:45:30.349439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.091 [2024-09-29 16:45:30.349472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.091 qpair failed and we were unable to recover it.
00:37:30.091 [2024-09-29 16:45:30.350265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.091 [2024-09-29 16:45:30.350304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.091 qpair failed and we were unable to recover it.
00:37:30.091 [2024-09-29 16:45:30.350488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.091 [2024-09-29 16:45:30.350523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.091 qpair failed and we were unable to recover it.
00:37:30.091 [2024-09-29 16:45:30.350650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.091 [2024-09-29 16:45:30.350698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.091 qpair failed and we were unable to recover it.
00:37:30.091 [2024-09-29 16:45:30.350847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.091 [2024-09-29 16:45:30.350881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.091 qpair failed and we were unable to recover it.
00:37:30.092 [2024-09-29 16:45:30.350997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.092 [2024-09-29 16:45:30.351030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.092 qpair failed and we were unable to recover it.
00:37:30.092 [2024-09-29 16:45:30.351189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.092 [2024-09-29 16:45:30.351223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.092 qpair failed and we were unable to recover it.
00:37:30.092 [2024-09-29 16:45:30.351359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.092 [2024-09-29 16:45:30.351401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.092 qpair failed and we were unable to recover it.
00:37:30.092 [2024-09-29 16:45:30.351546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.092 [2024-09-29 16:45:30.351582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.092 qpair failed and we were unable to recover it.
00:37:30.092 [2024-09-29 16:45:30.351743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.092 [2024-09-29 16:45:30.351792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.092 qpair failed and we were unable to recover it.
00:37:30.092 [2024-09-29 16:45:30.351922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.092 [2024-09-29 16:45:30.351957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.092 qpair failed and we were unable to recover it.
00:37:30.092 [2024-09-29 16:45:30.352142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.092 [2024-09-29 16:45:30.352176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.092 qpair failed and we were unable to recover it.
00:37:30.092 [2024-09-29 16:45:30.352317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.092 [2024-09-29 16:45:30.352351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.092 qpair failed and we were unable to recover it.
00:37:30.092 [2024-09-29 16:45:30.352481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.092 [2024-09-29 16:45:30.352515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.092 qpair failed and we were unable to recover it.
00:37:30.092 [2024-09-29 16:45:30.352628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.092 [2024-09-29 16:45:30.352668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.092 qpair failed and we were unable to recover it.
00:37:30.092 [2024-09-29 16:45:30.352825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.092 [2024-09-29 16:45:30.352860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.092 qpair failed and we were unable to recover it.
00:37:30.092 [2024-09-29 16:45:30.352980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.092 [2024-09-29 16:45:30.353013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.092 qpair failed and we were unable to recover it.
00:37:30.092 [2024-09-29 16:45:30.353179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.092 [2024-09-29 16:45:30.353212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.092 qpair failed and we were unable to recover it.
00:37:30.092 [2024-09-29 16:45:30.353365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.092 [2024-09-29 16:45:30.353399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.092 qpair failed and we were unable to recover it.
00:37:30.092 [2024-09-29 16:45:30.353516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.092 [2024-09-29 16:45:30.353551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.092 qpair failed and we were unable to recover it.
00:37:30.092 [2024-09-29 16:45:30.354313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.092 [2024-09-29 16:45:30.354352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.092 qpair failed and we were unable to recover it.
00:37:30.092 [2024-09-29 16:45:30.354528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.092 [2024-09-29 16:45:30.354562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.092 qpair failed and we were unable to recover it.
00:37:30.092 [2024-09-29 16:45:30.354692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.092 [2024-09-29 16:45:30.354732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.092 qpair failed and we were unable to recover it.
00:37:30.092 [2024-09-29 16:45:30.354855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.092 [2024-09-29 16:45:30.354888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.092 qpair failed and we were unable to recover it.
00:37:30.092 [2024-09-29 16:45:30.355034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.092 [2024-09-29 16:45:30.355068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.092 qpair failed and we were unable to recover it.
00:37:30.092 [2024-09-29 16:45:30.355184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.092 [2024-09-29 16:45:30.355217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.092 qpair failed and we were unable to recover it.
00:37:30.092 [2024-09-29 16:45:30.355358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.092 [2024-09-29 16:45:30.355391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.092 qpair failed and we were unable to recover it.
00:37:30.092 [2024-09-29 16:45:30.355516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.092 [2024-09-29 16:45:30.355550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.092 qpair failed and we were unable to recover it.
00:37:30.092 [2024-09-29 16:45:30.355668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.092 [2024-09-29 16:45:30.355707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.092 qpair failed and we were unable to recover it.
00:37:30.092 [2024-09-29 16:45:30.355853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.092 [2024-09-29 16:45:30.355886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.092 qpair failed and we were unable to recover it.
00:37:30.092 [2024-09-29 16:45:30.356024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.092 [2024-09-29 16:45:30.356057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.092 qpair failed and we were unable to recover it.
00:37:30.092 [2024-09-29 16:45:30.356199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.092 [2024-09-29 16:45:30.356232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.092 qpair failed and we were unable to recover it.
00:37:30.092 [2024-09-29 16:45:30.356345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.092 [2024-09-29 16:45:30.356378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.092 qpair failed and we were unable to recover it.
00:37:30.092 [2024-09-29 16:45:30.356500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.092 [2024-09-29 16:45:30.356536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.092 qpair failed and we were unable to recover it.
00:37:30.092 [2024-09-29 16:45:30.356651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.092 [2024-09-29 16:45:30.356698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.092 qpair failed and we were unable to recover it.
00:37:30.092 [2024-09-29 16:45:30.356813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.092 [2024-09-29 16:45:30.356847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.092 qpair failed and we were unable to recover it.
00:37:30.092 [2024-09-29 16:45:30.356986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.092 [2024-09-29 16:45:30.357020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.092 qpair failed and we were unable to recover it.
00:37:30.092 [2024-09-29 16:45:30.357143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.092 [2024-09-29 16:45:30.357177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.092 qpair failed and we were unable to recover it.
00:37:30.092 [2024-09-29 16:45:30.357315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.092 [2024-09-29 16:45:30.357349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.092 qpair failed and we were unable to recover it.
00:37:30.092 [2024-09-29 16:45:30.357475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.092 [2024-09-29 16:45:30.357510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.092 qpair failed and we were unable to recover it.
00:37:30.092 [2024-09-29 16:45:30.358362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.092 [2024-09-29 16:45:30.358402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.092 qpair failed and we were unable to recover it.
00:37:30.092 [2024-09-29 16:45:30.358577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.092 [2024-09-29 16:45:30.358611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.092 qpair failed and we were unable to recover it.
00:37:30.092 [2024-09-29 16:45:30.358768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.092 [2024-09-29 16:45:30.358802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.092 qpair failed and we were unable to recover it.
00:37:30.092 [2024-09-29 16:45:30.358949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.093 [2024-09-29 16:45:30.358984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.093 qpair failed and we were unable to recover it.
00:37:30.093 [2024-09-29 16:45:30.359103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.093 [2024-09-29 16:45:30.359136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.093 qpair failed and we were unable to recover it.
00:37:30.093 [2024-09-29 16:45:30.359290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.093 [2024-09-29 16:45:30.359324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.093 qpair failed and we were unable to recover it.
00:37:30.093 [2024-09-29 16:45:30.359449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.093 [2024-09-29 16:45:30.359483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.093 qpair failed and we were unable to recover it.
00:37:30.093 [2024-09-29 16:45:30.359596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.093 [2024-09-29 16:45:30.359631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.093 qpair failed and we were unable to recover it.
00:37:30.093 [2024-09-29 16:45:30.359757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.093 [2024-09-29 16:45:30.359791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.093 qpair failed and we were unable to recover it.
00:37:30.093 [2024-09-29 16:45:30.359944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.093 [2024-09-29 16:45:30.359981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.093 qpair failed and we were unable to recover it.
00:37:30.093 [2024-09-29 16:45:30.360127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.093 [2024-09-29 16:45:30.360162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.093 qpair failed and we were unable to recover it.
00:37:30.093 [2024-09-29 16:45:30.360279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.093 [2024-09-29 16:45:30.360313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.093 qpair failed and we were unable to recover it.
00:37:30.093 [2024-09-29 16:45:30.360432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.093 [2024-09-29 16:45:30.360465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.093 qpair failed and we were unable to recover it.
00:37:30.093 [2024-09-29 16:45:30.360611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.093 [2024-09-29 16:45:30.360668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.093 qpair failed and we were unable to recover it.
00:37:30.093 [2024-09-29 16:45:30.360812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.093 [2024-09-29 16:45:30.360848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.093 qpair failed and we were unable to recover it.
00:37:30.093 [2024-09-29 16:45:30.360993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.093 [2024-09-29 16:45:30.361027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.093 qpair failed and we were unable to recover it.
00:37:30.093 [2024-09-29 16:45:30.361155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.093 [2024-09-29 16:45:30.361190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.093 qpair failed and we were unable to recover it.
00:37:30.093 [2024-09-29 16:45:30.361364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.093 [2024-09-29 16:45:30.361399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.093 qpair failed and we were unable to recover it.
00:37:30.093 [2024-09-29 16:45:30.361518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.093 [2024-09-29 16:45:30.361558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.093 qpair failed and we were unable to recover it.
00:37:30.093 [2024-09-29 16:45:30.361684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.093 [2024-09-29 16:45:30.361719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.093 qpair failed and we were unable to recover it.
00:37:30.093 [2024-09-29 16:45:30.361834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.093 [2024-09-29 16:45:30.361868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.093 qpair failed and we were unable to recover it.
00:37:30.093 [2024-09-29 16:45:30.361991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.093 [2024-09-29 16:45:30.362025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.093 qpair failed and we were unable to recover it.
00:37:30.093 [2024-09-29 16:45:30.362181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.093 [2024-09-29 16:45:30.362239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.093 qpair failed and we were unable to recover it.
00:37:30.093 [2024-09-29 16:45:30.362361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.093 [2024-09-29 16:45:30.362396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.093 qpair failed and we were unable to recover it.
00:37:30.093 [2024-09-29 16:45:30.363235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.093 [2024-09-29 16:45:30.363279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.093 qpair failed and we were unable to recover it.
00:37:30.093 [2024-09-29 16:45:30.363482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.093 [2024-09-29 16:45:30.363516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.093 qpair failed and we were unable to recover it.
00:37:30.093 [2024-09-29 16:45:30.363627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.093 [2024-09-29 16:45:30.363685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.093 qpair failed and we were unable to recover it.
00:37:30.093 [2024-09-29 16:45:30.363810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.093 [2024-09-29 16:45:30.363843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.093 qpair failed and we were unable to recover it.
00:37:30.093 [2024-09-29 16:45:30.363971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.093 [2024-09-29 16:45:30.364004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.093 qpair failed and we were unable to recover it.
00:37:30.093 [2024-09-29 16:45:30.364166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.093 [2024-09-29 16:45:30.364204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.093 qpair failed and we were unable to recover it. 00:37:30.093 [2024-09-29 16:45:30.364336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.093 [2024-09-29 16:45:30.364369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.093 qpair failed and we were unable to recover it. 00:37:30.093 [2024-09-29 16:45:30.364539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.093 [2024-09-29 16:45:30.364573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.093 qpair failed and we were unable to recover it. 00:37:30.093 [2024-09-29 16:45:30.364708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.093 [2024-09-29 16:45:30.364742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.093 qpair failed and we were unable to recover it. 00:37:30.093 [2024-09-29 16:45:30.364861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.093 [2024-09-29 16:45:30.364894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.093 qpair failed and we were unable to recover it. 
00:37:30.093 [2024-09-29 16:45:30.365047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.093 [2024-09-29 16:45:30.365080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.093 qpair failed and we were unable to recover it. 00:37:30.093 [2024-09-29 16:45:30.365197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.093 [2024-09-29 16:45:30.365234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.093 qpair failed and we were unable to recover it. 00:37:30.093 [2024-09-29 16:45:30.365364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.093 [2024-09-29 16:45:30.365399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.093 qpair failed and we were unable to recover it. 00:37:30.093 [2024-09-29 16:45:30.365574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.093 [2024-09-29 16:45:30.365608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.093 qpair failed and we were unable to recover it. 00:37:30.093 [2024-09-29 16:45:30.365756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.093 [2024-09-29 16:45:30.365791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.093 qpair failed and we were unable to recover it. 
00:37:30.093 [2024-09-29 16:45:30.365913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.093 [2024-09-29 16:45:30.365947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.093 qpair failed and we were unable to recover it. 00:37:30.093 [2024-09-29 16:45:30.366115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.093 [2024-09-29 16:45:30.366161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.093 qpair failed and we were unable to recover it. 00:37:30.093 [2024-09-29 16:45:30.366287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.093 [2024-09-29 16:45:30.366324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.093 qpair failed and we were unable to recover it. 00:37:30.094 [2024-09-29 16:45:30.366482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.094 [2024-09-29 16:45:30.366530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.094 qpair failed and we were unable to recover it. 00:37:30.094 [2024-09-29 16:45:30.366689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.094 [2024-09-29 16:45:30.366726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.094 qpair failed and we were unable to recover it. 
00:37:30.094 [2024-09-29 16:45:30.366848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.094 [2024-09-29 16:45:30.366883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.094 qpair failed and we were unable to recover it. 00:37:30.094 [2024-09-29 16:45:30.367044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.094 [2024-09-29 16:45:30.367078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.094 qpair failed and we were unable to recover it. 00:37:30.094 [2024-09-29 16:45:30.367213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.094 [2024-09-29 16:45:30.367247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.094 qpair failed and we were unable to recover it. 00:37:30.094 [2024-09-29 16:45:30.367376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.094 [2024-09-29 16:45:30.367411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.094 qpair failed and we were unable to recover it. 00:37:30.094 [2024-09-29 16:45:30.367542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.094 [2024-09-29 16:45:30.367576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.094 qpair failed and we were unable to recover it. 
00:37:30.094 [2024-09-29 16:45:30.367722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.094 [2024-09-29 16:45:30.367763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.094 qpair failed and we were unable to recover it. 00:37:30.094 [2024-09-29 16:45:30.367912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.094 [2024-09-29 16:45:30.367948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.094 qpair failed and we were unable to recover it. 00:37:30.094 [2024-09-29 16:45:30.368105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.094 [2024-09-29 16:45:30.368139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.094 qpair failed and we were unable to recover it. 00:37:30.094 [2024-09-29 16:45:30.368271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.094 [2024-09-29 16:45:30.368304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.094 qpair failed and we were unable to recover it. 00:37:30.094 [2024-09-29 16:45:30.368451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.094 [2024-09-29 16:45:30.368486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.094 qpair failed and we were unable to recover it. 
00:37:30.094 [2024-09-29 16:45:30.368629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.094 [2024-09-29 16:45:30.368678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.094 qpair failed and we were unable to recover it. 00:37:30.094 [2024-09-29 16:45:30.368796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.094 [2024-09-29 16:45:30.368830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.094 qpair failed and we were unable to recover it. 00:37:30.094 [2024-09-29 16:45:30.368951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.094 [2024-09-29 16:45:30.368997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.094 qpair failed and we were unable to recover it. 00:37:30.094 [2024-09-29 16:45:30.369131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.094 [2024-09-29 16:45:30.369165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.094 qpair failed and we were unable to recover it. 00:37:30.094 [2024-09-29 16:45:30.369305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.094 [2024-09-29 16:45:30.369339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.094 qpair failed and we were unable to recover it. 
00:37:30.094 [2024-09-29 16:45:30.369458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.094 [2024-09-29 16:45:30.369494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.094 qpair failed and we were unable to recover it. 00:37:30.094 [2024-09-29 16:45:30.369681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.094 [2024-09-29 16:45:30.369716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.094 qpair failed and we were unable to recover it. 00:37:30.094 [2024-09-29 16:45:30.369834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.094 [2024-09-29 16:45:30.369867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.094 qpair failed and we were unable to recover it. 00:37:30.094 [2024-09-29 16:45:30.370021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.094 [2024-09-29 16:45:30.370060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.094 qpair failed and we were unable to recover it. 00:37:30.094 [2024-09-29 16:45:30.370207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.094 [2024-09-29 16:45:30.370240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.094 qpair failed and we were unable to recover it. 
00:37:30.094 [2024-09-29 16:45:30.370377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.094 [2024-09-29 16:45:30.370424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.094 qpair failed and we were unable to recover it. 00:37:30.094 [2024-09-29 16:45:30.370578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.094 [2024-09-29 16:45:30.370622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.094 qpair failed and we were unable to recover it. 00:37:30.094 [2024-09-29 16:45:30.370785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.094 [2024-09-29 16:45:30.370820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.094 qpair failed and we were unable to recover it. 00:37:30.094 [2024-09-29 16:45:30.370938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.094 [2024-09-29 16:45:30.370979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.094 qpair failed and we were unable to recover it. 00:37:30.094 [2024-09-29 16:45:30.371104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.094 [2024-09-29 16:45:30.371139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.094 qpair failed and we were unable to recover it. 
00:37:30.094 [2024-09-29 16:45:30.371253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.094 [2024-09-29 16:45:30.371288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.094 qpair failed and we were unable to recover it. 00:37:30.094 [2024-09-29 16:45:30.371442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.094 [2024-09-29 16:45:30.371476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.094 qpair failed and we were unable to recover it. 00:37:30.094 [2024-09-29 16:45:30.371615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.094 [2024-09-29 16:45:30.371710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.094 qpair failed and we were unable to recover it. 00:37:30.094 [2024-09-29 16:45:30.371845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.094 [2024-09-29 16:45:30.371881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.094 qpair failed and we were unable to recover it. 00:37:30.094 [2024-09-29 16:45:30.372038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.095 [2024-09-29 16:45:30.372074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.095 qpair failed and we were unable to recover it. 
00:37:30.095 [2024-09-29 16:45:30.372209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.095 [2024-09-29 16:45:30.372243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.095 qpair failed and we were unable to recover it. 00:37:30.095 [2024-09-29 16:45:30.372384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.095 [2024-09-29 16:45:30.372417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.095 qpair failed and we were unable to recover it. 00:37:30.095 [2024-09-29 16:45:30.372572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.095 [2024-09-29 16:45:30.372607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.095 qpair failed and we were unable to recover it. 00:37:30.095 [2024-09-29 16:45:30.372863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.095 [2024-09-29 16:45:30.372900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.095 qpair failed and we were unable to recover it. 00:37:30.095 [2024-09-29 16:45:30.373020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.095 [2024-09-29 16:45:30.373065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.095 qpair failed and we were unable to recover it. 
00:37:30.095 [2024-09-29 16:45:30.373189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.095 [2024-09-29 16:45:30.373223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.095 qpair failed and we were unable to recover it. 00:37:30.095 [2024-09-29 16:45:30.373341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.095 [2024-09-29 16:45:30.373374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.095 qpair failed and we were unable to recover it. 00:37:30.095 [2024-09-29 16:45:30.373512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.095 [2024-09-29 16:45:30.373546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.095 qpair failed and we were unable to recover it. 00:37:30.095 [2024-09-29 16:45:30.373691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.095 [2024-09-29 16:45:30.373739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.095 qpair failed and we were unable to recover it. 00:37:30.095 [2024-09-29 16:45:30.373914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.095 [2024-09-29 16:45:30.373951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.095 qpair failed and we were unable to recover it. 
00:37:30.095 [2024-09-29 16:45:30.374109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.095 [2024-09-29 16:45:30.374148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.095 qpair failed and we were unable to recover it. 00:37:30.095 [2024-09-29 16:45:30.374278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.095 [2024-09-29 16:45:30.374312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.095 qpair failed and we were unable to recover it. 00:37:30.095 [2024-09-29 16:45:30.374424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.095 [2024-09-29 16:45:30.374458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.095 qpair failed and we were unable to recover it. 00:37:30.095 [2024-09-29 16:45:30.374577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.095 [2024-09-29 16:45:30.374613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.095 qpair failed and we were unable to recover it. 00:37:30.095 [2024-09-29 16:45:30.374754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.095 [2024-09-29 16:45:30.374790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.095 qpair failed and we were unable to recover it. 
00:37:30.095 [2024-09-29 16:45:30.374971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.095 [2024-09-29 16:45:30.375005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.095 qpair failed and we were unable to recover it. 00:37:30.095 [2024-09-29 16:45:30.375185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.095 [2024-09-29 16:45:30.375219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.095 qpair failed and we were unable to recover it. 00:37:30.095 [2024-09-29 16:45:30.375330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.095 [2024-09-29 16:45:30.375364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.095 qpair failed and we were unable to recover it. 00:37:30.095 [2024-09-29 16:45:30.375512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.095 [2024-09-29 16:45:30.375545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.095 qpair failed and we were unable to recover it. 00:37:30.095 [2024-09-29 16:45:30.375701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.095 [2024-09-29 16:45:30.375735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.095 qpair failed and we were unable to recover it. 
00:37:30.095 [2024-09-29 16:45:30.375868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.095 [2024-09-29 16:45:30.375916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.095 qpair failed and we were unable to recover it. 00:37:30.095 [2024-09-29 16:45:30.376081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.095 [2024-09-29 16:45:30.376116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.095 qpair failed and we were unable to recover it. 00:37:30.095 [2024-09-29 16:45:30.376289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.095 [2024-09-29 16:45:30.376330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.095 qpair failed and we were unable to recover it. 00:37:30.095 [2024-09-29 16:45:30.376444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.095 [2024-09-29 16:45:30.376477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.095 qpair failed and we were unable to recover it. 00:37:30.095 [2024-09-29 16:45:30.376611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.095 [2024-09-29 16:45:30.376650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.095 qpair failed and we were unable to recover it. 
00:37:30.095 [2024-09-29 16:45:30.376802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.095 [2024-09-29 16:45:30.376836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.095 qpair failed and we were unable to recover it. 00:37:30.095 [2024-09-29 16:45:30.376959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.095 [2024-09-29 16:45:30.377000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.095 qpair failed and we were unable to recover it. 00:37:30.095 [2024-09-29 16:45:30.377143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.095 [2024-09-29 16:45:30.377177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.095 qpair failed and we were unable to recover it. 00:37:30.095 [2024-09-29 16:45:30.377331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.095 [2024-09-29 16:45:30.377369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.095 qpair failed and we were unable to recover it. 00:37:30.095 [2024-09-29 16:45:30.377508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.095 [2024-09-29 16:45:30.377542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.095 qpair failed and we were unable to recover it. 
00:37:30.095 [2024-09-29 16:45:30.377699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.095 [2024-09-29 16:45:30.377734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.095 qpair failed and we were unable to recover it. 00:37:30.095 [2024-09-29 16:45:30.377846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.095 [2024-09-29 16:45:30.377880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.095 qpair failed and we were unable to recover it. 00:37:30.095 [2024-09-29 16:45:30.378033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.095 [2024-09-29 16:45:30.378068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.095 qpair failed and we were unable to recover it. 00:37:30.095 [2024-09-29 16:45:30.378212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.095 [2024-09-29 16:45:30.378247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.095 qpair failed and we were unable to recover it. 00:37:30.095 [2024-09-29 16:45:30.378388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.095 [2024-09-29 16:45:30.378422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.095 qpair failed and we were unable to recover it. 
00:37:30.095 [2024-09-29 16:45:30.378554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.095 [2024-09-29 16:45:30.378590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.095 qpair failed and we were unable to recover it. 00:37:30.095 [2024-09-29 16:45:30.378727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.095 [2024-09-29 16:45:30.378761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.095 qpair failed and we were unable to recover it. 00:37:30.095 [2024-09-29 16:45:30.378929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.096 [2024-09-29 16:45:30.378970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.096 qpair failed and we were unable to recover it. 00:37:30.096 [2024-09-29 16:45:30.379098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.096 [2024-09-29 16:45:30.379132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.096 qpair failed and we were unable to recover it. 00:37:30.096 [2024-09-29 16:45:30.379268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.096 [2024-09-29 16:45:30.379317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.096 qpair failed and we were unable to recover it. 
00:37:30.096 [2024-09-29 16:45:30.379481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.096 [2024-09-29 16:45:30.379516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.096 qpair failed and we were unable to recover it. 00:37:30.096 [2024-09-29 16:45:30.379669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.096 [2024-09-29 16:45:30.379712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.096 qpair failed and we were unable to recover it. 00:37:30.096 [2024-09-29 16:45:30.379875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.096 [2024-09-29 16:45:30.379910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.096 qpair failed and we were unable to recover it. 00:37:30.096 [2024-09-29 16:45:30.380027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.096 [2024-09-29 16:45:30.380060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.096 qpair failed and we were unable to recover it. 00:37:30.096 [2024-09-29 16:45:30.380202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.096 [2024-09-29 16:45:30.380236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.096 qpair failed and we were unable to recover it. 
00:37:30.096 [2024-09-29 16:45:30.380359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.096 [2024-09-29 16:45:30.380393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.096 qpair failed and we were unable to recover it. 00:37:30.096 [2024-09-29 16:45:30.380540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.096 [2024-09-29 16:45:30.380574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.096 qpair failed and we were unable to recover it. 00:37:30.096 [2024-09-29 16:45:30.380721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.096 [2024-09-29 16:45:30.380769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.096 qpair failed and we were unable to recover it. 00:37:30.096 [2024-09-29 16:45:30.380905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.096 [2024-09-29 16:45:30.380938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.096 qpair failed and we were unable to recover it. 00:37:30.096 [2024-09-29 16:45:30.381091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.096 [2024-09-29 16:45:30.381125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.096 qpair failed and we were unable to recover it. 
00:37:30.096 [2024-09-29 16:45:30.381244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.096 [2024-09-29 16:45:30.381277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.096 qpair failed and we were unable to recover it. 00:37:30.096 [2024-09-29 16:45:30.381394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.096 [2024-09-29 16:45:30.381427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.096 qpair failed and we were unable to recover it. 00:37:30.096 [2024-09-29 16:45:30.381551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.096 [2024-09-29 16:45:30.381584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.096 qpair failed and we were unable to recover it. 00:37:30.096 [2024-09-29 16:45:30.381713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.096 [2024-09-29 16:45:30.381748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.096 qpair failed and we were unable to recover it. 00:37:30.096 [2024-09-29 16:45:30.381894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.096 [2024-09-29 16:45:30.381929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.096 qpair failed and we were unable to recover it. 
00:37:30.096 [2024-09-29 16:45:30.382100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.096 [2024-09-29 16:45:30.382134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.096 qpair failed and we were unable to recover it. 00:37:30.096 [2024-09-29 16:45:30.382276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.096 [2024-09-29 16:45:30.382310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.096 qpair failed and we were unable to recover it. 00:37:30.096 [2024-09-29 16:45:30.382462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.096 [2024-09-29 16:45:30.382496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.096 qpair failed and we were unable to recover it. 00:37:30.096 [2024-09-29 16:45:30.382639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.096 [2024-09-29 16:45:30.382684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.096 qpair failed and we were unable to recover it. 00:37:30.096 [2024-09-29 16:45:30.382795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.096 [2024-09-29 16:45:30.382828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.096 qpair failed and we were unable to recover it. 
00:37:30.096 [2024-09-29 16:45:30.382973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.096 [2024-09-29 16:45:30.383006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.096 qpair failed and we were unable to recover it. 00:37:30.096 [2024-09-29 16:45:30.383130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.096 [2024-09-29 16:45:30.383164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.096 qpair failed and we were unable to recover it. 00:37:30.096 [2024-09-29 16:45:30.383351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.096 [2024-09-29 16:45:30.383384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.096 qpair failed and we were unable to recover it. 00:37:30.096 [2024-09-29 16:45:30.383527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.096 [2024-09-29 16:45:30.383566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.096 qpair failed and we were unable to recover it. 00:37:30.096 [2024-09-29 16:45:30.383745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.096 [2024-09-29 16:45:30.383780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.096 qpair failed and we were unable to recover it. 
00:37:30.096 [2024-09-29 16:45:30.383896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.096 [2024-09-29 16:45:30.383930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.096 qpair failed and we were unable to recover it. 00:37:30.096 [2024-09-29 16:45:30.384117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.096 [2024-09-29 16:45:30.384152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.096 qpair failed and we were unable to recover it. 00:37:30.096 [2024-09-29 16:45:30.384331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.096 [2024-09-29 16:45:30.384377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.096 qpair failed and we were unable to recover it. 00:37:30.096 [2024-09-29 16:45:30.384541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.096 [2024-09-29 16:45:30.384595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.096 qpair failed and we were unable to recover it. 00:37:30.096 [2024-09-29 16:45:30.384743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.096 [2024-09-29 16:45:30.384779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.096 qpair failed and we were unable to recover it. 
00:37:30.096 [2024-09-29 16:45:30.384901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.096 [2024-09-29 16:45:30.384935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.096 qpair failed and we were unable to recover it. 00:37:30.096 [2024-09-29 16:45:30.385087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.096 [2024-09-29 16:45:30.385130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.096 qpair failed and we were unable to recover it. 00:37:30.096 [2024-09-29 16:45:30.385248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.096 [2024-09-29 16:45:30.385282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.096 qpair failed and we were unable to recover it. 00:37:30.096 [2024-09-29 16:45:30.385413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.096 [2024-09-29 16:45:30.385446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.096 qpair failed and we were unable to recover it. 00:37:30.096 [2024-09-29 16:45:30.385562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.096 [2024-09-29 16:45:30.385596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.096 qpair failed and we were unable to recover it. 
00:37:30.096 [2024-09-29 16:45:30.385753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.096 [2024-09-29 16:45:30.385787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.096 qpair failed and we were unable to recover it. 00:37:30.097 [2024-09-29 16:45:30.385936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.097 [2024-09-29 16:45:30.385974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.097 qpair failed and we were unable to recover it. 00:37:30.097 [2024-09-29 16:45:30.386118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.097 [2024-09-29 16:45:30.386161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.097 qpair failed and we were unable to recover it. 00:37:30.097 [2024-09-29 16:45:30.386286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.097 [2024-09-29 16:45:30.386320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.097 qpair failed and we were unable to recover it. 00:37:30.097 [2024-09-29 16:45:30.386466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.097 [2024-09-29 16:45:30.386500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.097 qpair failed and we were unable to recover it. 
00:37:30.097 [2024-09-29 16:45:30.386616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.097 [2024-09-29 16:45:30.386651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.097 qpair failed and we were unable to recover it. 00:37:30.097 [2024-09-29 16:45:30.386775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.097 [2024-09-29 16:45:30.386811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.097 qpair failed and we were unable to recover it. 00:37:30.097 [2024-09-29 16:45:30.386938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.097 [2024-09-29 16:45:30.386973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.097 qpair failed and we were unable to recover it. 00:37:30.097 [2024-09-29 16:45:30.387134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.097 [2024-09-29 16:45:30.387167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.097 qpair failed and we were unable to recover it. 00:37:30.097 [2024-09-29 16:45:30.387317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.097 [2024-09-29 16:45:30.387357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.097 qpair failed and we were unable to recover it. 
00:37:30.097 [2024-09-29 16:45:30.387504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.097 [2024-09-29 16:45:30.387538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.097 qpair failed and we were unable to recover it. 00:37:30.097 [2024-09-29 16:45:30.387679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.097 [2024-09-29 16:45:30.387713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.097 qpair failed and we were unable to recover it. 00:37:30.097 [2024-09-29 16:45:30.387836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.097 [2024-09-29 16:45:30.387869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.097 qpair failed and we were unable to recover it. 00:37:30.097 [2024-09-29 16:45:30.388003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.097 [2024-09-29 16:45:30.388036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.097 qpair failed and we were unable to recover it. 00:37:30.097 [2024-09-29 16:45:30.388179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.097 [2024-09-29 16:45:30.388212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.097 qpair failed and we were unable to recover it. 
00:37:30.097 [2024-09-29 16:45:30.388325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.097 [2024-09-29 16:45:30.388359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.097 qpair failed and we were unable to recover it. 00:37:30.097 [2024-09-29 16:45:30.388508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.097 [2024-09-29 16:45:30.388544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.097 qpair failed and we were unable to recover it. 00:37:30.097 [2024-09-29 16:45:30.388679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.097 [2024-09-29 16:45:30.388715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.097 qpair failed and we were unable to recover it. 00:37:30.097 [2024-09-29 16:45:30.388848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.097 [2024-09-29 16:45:30.388883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.097 qpair failed and we were unable to recover it. 00:37:30.097 [2024-09-29 16:45:30.389026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.097 [2024-09-29 16:45:30.389062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.097 qpair failed and we were unable to recover it. 
00:37:30.097 [2024-09-29 16:45:30.389205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.097 [2024-09-29 16:45:30.389239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.097 qpair failed and we were unable to recover it. 00:37:30.097 [2024-09-29 16:45:30.389384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.097 [2024-09-29 16:45:30.389418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.097 qpair failed and we were unable to recover it. 00:37:30.097 [2024-09-29 16:45:30.389572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.097 [2024-09-29 16:45:30.389607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.097 qpair failed and we were unable to recover it. 00:37:30.097 [2024-09-29 16:45:30.389747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.097 [2024-09-29 16:45:30.389782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.097 qpair failed and we were unable to recover it. 00:37:30.097 [2024-09-29 16:45:30.389922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.097 [2024-09-29 16:45:30.389956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.097 qpair failed and we were unable to recover it. 
00:37:30.097 [2024-09-29 16:45:30.390090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.097 [2024-09-29 16:45:30.390133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.097 qpair failed and we were unable to recover it. 00:37:30.097 [2024-09-29 16:45:30.390313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.097 [2024-09-29 16:45:30.390347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.097 qpair failed and we were unable to recover it. 00:37:30.097 [2024-09-29 16:45:30.390461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.097 [2024-09-29 16:45:30.390496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.097 qpair failed and we were unable to recover it. 00:37:30.097 [2024-09-29 16:45:30.390698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.097 [2024-09-29 16:45:30.390732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.097 qpair failed and we were unable to recover it. 00:37:30.097 [2024-09-29 16:45:30.390842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.097 [2024-09-29 16:45:30.390876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.097 qpair failed and we were unable to recover it. 
00:37:30.097 [2024-09-29 16:45:30.390992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.097 [2024-09-29 16:45:30.391026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.097 qpair failed and we were unable to recover it. 00:37:30.097 [2024-09-29 16:45:30.391181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.097 [2024-09-29 16:45:30.391213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.097 qpair failed and we were unable to recover it. 00:37:30.097 [2024-09-29 16:45:30.391371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.097 [2024-09-29 16:45:30.391404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.097 qpair failed and we were unable to recover it. 00:37:30.097 [2024-09-29 16:45:30.391548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.097 [2024-09-29 16:45:30.391592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.097 qpair failed and we were unable to recover it. 00:37:30.097 [2024-09-29 16:45:30.391709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.097 [2024-09-29 16:45:30.391743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.097 qpair failed and we were unable to recover it. 
00:37:30.097 [2024-09-29 16:45:30.391887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.097 [2024-09-29 16:45:30.391921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.097 qpair failed and we were unable to recover it. 00:37:30.097 [2024-09-29 16:45:30.392040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.097 [2024-09-29 16:45:30.392074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.097 qpair failed and we were unable to recover it. 00:37:30.097 [2024-09-29 16:45:30.392219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.097 [2024-09-29 16:45:30.392257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.097 qpair failed and we were unable to recover it. 00:37:30.097 [2024-09-29 16:45:30.392373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.097 [2024-09-29 16:45:30.392406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.097 qpair failed and we were unable to recover it. 00:37:30.097 [2024-09-29 16:45:30.392557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.098 [2024-09-29 16:45:30.392591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.098 qpair failed and we were unable to recover it. 
00:37:30.098 [2024-09-29 16:45:30.392747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.098 [2024-09-29 16:45:30.392782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.098 qpair failed and we were unable to recover it. 00:37:30.098 [2024-09-29 16:45:30.392900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.098 [2024-09-29 16:45:30.392933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.098 qpair failed and we were unable to recover it. 00:37:30.098 [2024-09-29 16:45:30.393083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.098 [2024-09-29 16:45:30.393124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.098 qpair failed and we were unable to recover it. 00:37:30.098 [2024-09-29 16:45:30.393269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.098 [2024-09-29 16:45:30.393302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.098 qpair failed and we were unable to recover it. 00:37:30.098 [2024-09-29 16:45:30.393448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.098 [2024-09-29 16:45:30.393498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.098 qpair failed and we were unable to recover it. 
00:37:30.098 [2024-09-29 16:45:30.393660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.098 [2024-09-29 16:45:30.393711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.098 qpair failed and we were unable to recover it. 00:37:30.098 [2024-09-29 16:45:30.393827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.098 [2024-09-29 16:45:30.393860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.098 qpair failed and we were unable to recover it. 00:37:30.098 [2024-09-29 16:45:30.394031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.098 [2024-09-29 16:45:30.394064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.098 qpair failed and we were unable to recover it. 00:37:30.098 [2024-09-29 16:45:30.394239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.098 [2024-09-29 16:45:30.394282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.098 qpair failed and we were unable to recover it. 00:37:30.098 [2024-09-29 16:45:30.394395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.098 [2024-09-29 16:45:30.394428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.098 qpair failed and we were unable to recover it. 
00:37:30.098 [2024-09-29 16:45:30.394601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.098 [2024-09-29 16:45:30.394634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.098 qpair failed and we were unable to recover it. 00:37:30.098 [2024-09-29 16:45:30.394770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.098 [2024-09-29 16:45:30.394804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.098 qpair failed and we were unable to recover it. 00:37:30.098 [2024-09-29 16:45:30.394942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.098 [2024-09-29 16:45:30.394982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.098 qpair failed and we were unable to recover it. 00:37:30.098 [2024-09-29 16:45:30.395124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.098 [2024-09-29 16:45:30.395174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.098 qpair failed and we were unable to recover it. 00:37:30.098 [2024-09-29 16:45:30.395332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.098 [2024-09-29 16:45:30.395366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.098 qpair failed and we were unable to recover it. 
00:37:30.098 [2024-09-29 16:45:30.395539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.098 [2024-09-29 16:45:30.395572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.098 qpair failed and we were unable to recover it. 00:37:30.098 [2024-09-29 16:45:30.395724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.098 [2024-09-29 16:45:30.395759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.098 qpair failed and we were unable to recover it. 00:37:30.098 [2024-09-29 16:45:30.395882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.098 [2024-09-29 16:45:30.395916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.098 qpair failed and we were unable to recover it. 00:37:30.098 [2024-09-29 16:45:30.396043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.098 [2024-09-29 16:45:30.396076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.098 qpair failed and we were unable to recover it. 00:37:30.098 [2024-09-29 16:45:30.396201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.098 [2024-09-29 16:45:30.396234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.098 qpair failed and we were unable to recover it. 
00:37:30.098 [2024-09-29 16:45:30.396374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.098 [2024-09-29 16:45:30.396407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.098 qpair failed and we were unable to recover it. 00:37:30.098 [2024-09-29 16:45:30.396539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.098 [2024-09-29 16:45:30.396573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.098 qpair failed and we were unable to recover it. 00:37:30.098 [2024-09-29 16:45:30.396712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.098 [2024-09-29 16:45:30.396746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.098 qpair failed and we were unable to recover it. 00:37:30.098 [2024-09-29 16:45:30.396858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.098 [2024-09-29 16:45:30.396892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.098 qpair failed and we were unable to recover it. 00:37:30.098 [2024-09-29 16:45:30.397016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.098 [2024-09-29 16:45:30.397059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.098 qpair failed and we were unable to recover it. 
[... the same three-record error sequence — posix.c:1055:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: sock connection error; qpair failed and we were unable to recover it — repeats continuously from 16:45:30.397203 through 16:45:30.419638 for tqpair handles 0x6150001f2f00, 0x615000210000, 0x61500021ff00, and 0x6150001ffe80, all targeting addr=10.0.0.2, port=4420 ...]
00:37:30.101 [2024-09-29 16:45:30.419797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.101 [2024-09-29 16:45:30.419834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.101 qpair failed and we were unable to recover it. 00:37:30.101 [2024-09-29 16:45:30.419973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.101 [2024-09-29 16:45:30.420008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.101 qpair failed and we were unable to recover it. 00:37:30.101 [2024-09-29 16:45:30.420188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.101 [2024-09-29 16:45:30.420223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.101 qpair failed and we were unable to recover it. 00:37:30.101 [2024-09-29 16:45:30.420344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.101 [2024-09-29 16:45:30.420379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.101 qpair failed and we were unable to recover it. 00:37:30.101 [2024-09-29 16:45:30.420551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.101 [2024-09-29 16:45:30.420585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.101 qpair failed and we were unable to recover it. 
00:37:30.101 [2024-09-29 16:45:30.420752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.101 [2024-09-29 16:45:30.420805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.101 qpair failed and we were unable to recover it. 00:37:30.101 [2024-09-29 16:45:30.420944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.101 [2024-09-29 16:45:30.421003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.101 qpair failed and we were unable to recover it. 00:37:30.101 [2024-09-29 16:45:30.421143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.101 [2024-09-29 16:45:30.421179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.101 qpair failed and we were unable to recover it. 00:37:30.101 [2024-09-29 16:45:30.421380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.101 [2024-09-29 16:45:30.421418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.101 qpair failed and we were unable to recover it. 00:37:30.101 [2024-09-29 16:45:30.421537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.101 [2024-09-29 16:45:30.421572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.101 qpair failed and we were unable to recover it. 
00:37:30.101 [2024-09-29 16:45:30.421732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.101 [2024-09-29 16:45:30.421768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.101 qpair failed and we were unable to recover it. 00:37:30.101 [2024-09-29 16:45:30.421894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.101 [2024-09-29 16:45:30.421928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.101 qpair failed and we were unable to recover it. 00:37:30.101 [2024-09-29 16:45:30.422096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.101 [2024-09-29 16:45:30.422131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.101 qpair failed and we were unable to recover it. 00:37:30.102 [2024-09-29 16:45:30.422302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.102 [2024-09-29 16:45:30.422336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.102 qpair failed and we were unable to recover it. 00:37:30.102 [2024-09-29 16:45:30.422539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.102 [2024-09-29 16:45:30.422575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.102 qpair failed and we were unable to recover it. 
00:37:30.102 [2024-09-29 16:45:30.422695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.102 [2024-09-29 16:45:30.422730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.102 qpair failed and we were unable to recover it. 00:37:30.102 [2024-09-29 16:45:30.422879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.102 [2024-09-29 16:45:30.422928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.102 qpair failed and we were unable to recover it. 00:37:30.102 [2024-09-29 16:45:30.423796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.102 [2024-09-29 16:45:30.423838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.102 qpair failed and we were unable to recover it. 00:37:30.102 [2024-09-29 16:45:30.423978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.102 [2024-09-29 16:45:30.424015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.102 qpair failed and we were unable to recover it. 00:37:30.102 [2024-09-29 16:45:30.424181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.102 [2024-09-29 16:45:30.424216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.102 qpair failed and we were unable to recover it. 
00:37:30.102 [2024-09-29 16:45:30.424379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.102 [2024-09-29 16:45:30.424414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.102 qpair failed and we were unable to recover it. 00:37:30.102 [2024-09-29 16:45:30.424585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.102 [2024-09-29 16:45:30.424620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.102 qpair failed and we were unable to recover it. 00:37:30.102 [2024-09-29 16:45:30.424774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.102 [2024-09-29 16:45:30.424809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.102 qpair failed and we were unable to recover it. 00:37:30.102 [2024-09-29 16:45:30.424930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.102 [2024-09-29 16:45:30.424964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.102 qpair failed and we were unable to recover it. 00:37:30.102 [2024-09-29 16:45:30.425115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.102 [2024-09-29 16:45:30.425170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.102 qpair failed and we were unable to recover it. 
00:37:30.102 [2024-09-29 16:45:30.425335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.102 [2024-09-29 16:45:30.425374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.102 qpair failed and we were unable to recover it. 00:37:30.102 [2024-09-29 16:45:30.428689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.102 [2024-09-29 16:45:30.428747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.102 qpair failed and we were unable to recover it. 00:37:30.102 [2024-09-29 16:45:30.428911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.102 [2024-09-29 16:45:30.428946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.102 qpair failed and we were unable to recover it. 00:37:30.102 [2024-09-29 16:45:30.429129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.102 [2024-09-29 16:45:30.429164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.102 qpair failed and we were unable to recover it. 00:37:30.102 [2024-09-29 16:45:30.429327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.102 [2024-09-29 16:45:30.429362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.102 qpair failed and we were unable to recover it. 
00:37:30.102 [2024-09-29 16:45:30.429498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.102 [2024-09-29 16:45:30.429533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.102 qpair failed and we were unable to recover it. 00:37:30.102 [2024-09-29 16:45:30.429653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.102 [2024-09-29 16:45:30.429705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.102 qpair failed and we were unable to recover it. 00:37:30.102 [2024-09-29 16:45:30.429840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.102 [2024-09-29 16:45:30.429880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.102 qpair failed and we were unable to recover it. 00:37:30.102 [2024-09-29 16:45:30.430049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.102 [2024-09-29 16:45:30.430108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.102 qpair failed and we were unable to recover it. 00:37:30.102 [2024-09-29 16:45:30.430241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.102 [2024-09-29 16:45:30.430279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.102 qpair failed and we were unable to recover it. 
00:37:30.102 [2024-09-29 16:45:30.430438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.102 [2024-09-29 16:45:30.430473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.102 qpair failed and we were unable to recover it. 00:37:30.102 [2024-09-29 16:45:30.430599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.102 [2024-09-29 16:45:30.430633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.102 qpair failed and we were unable to recover it. 00:37:30.102 [2024-09-29 16:45:30.430771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.102 [2024-09-29 16:45:30.430807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.102 qpair failed and we were unable to recover it. 00:37:30.102 [2024-09-29 16:45:30.430932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.102 [2024-09-29 16:45:30.430967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.102 qpair failed and we were unable to recover it. 00:37:30.102 [2024-09-29 16:45:30.431094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.102 [2024-09-29 16:45:30.431128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.102 qpair failed and we were unable to recover it. 
00:37:30.102 [2024-09-29 16:45:30.431299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.102 [2024-09-29 16:45:30.431348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.102 qpair failed and we were unable to recover it. 00:37:30.102 [2024-09-29 16:45:30.431540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.102 [2024-09-29 16:45:30.431576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.102 qpair failed and we were unable to recover it. 00:37:30.102 [2024-09-29 16:45:30.431707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.102 [2024-09-29 16:45:30.431744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.102 qpair failed and we were unable to recover it. 00:37:30.102 [2024-09-29 16:45:30.431889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.102 [2024-09-29 16:45:30.431923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.102 qpair failed and we were unable to recover it. 00:37:30.102 [2024-09-29 16:45:30.432081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.102 [2024-09-29 16:45:30.432115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.102 qpair failed and we were unable to recover it. 
00:37:30.102 [2024-09-29 16:45:30.432244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.102 [2024-09-29 16:45:30.432284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.102 qpair failed and we were unable to recover it. 00:37:30.102 [2024-09-29 16:45:30.432405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.102 [2024-09-29 16:45:30.432440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.102 qpair failed and we were unable to recover it. 00:37:30.102 [2024-09-29 16:45:30.432587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.102 [2024-09-29 16:45:30.432622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.102 qpair failed and we were unable to recover it. 00:37:30.102 [2024-09-29 16:45:30.432781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.102 [2024-09-29 16:45:30.432829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.102 qpair failed and we were unable to recover it. 00:37:30.102 [2024-09-29 16:45:30.432963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.102 [2024-09-29 16:45:30.433008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.102 qpair failed and we were unable to recover it. 
00:37:30.102 [2024-09-29 16:45:30.433195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.102 [2024-09-29 16:45:30.433230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.102 qpair failed and we were unable to recover it. 00:37:30.102 [2024-09-29 16:45:30.433350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.103 [2024-09-29 16:45:30.433384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.103 qpair failed and we were unable to recover it. 00:37:30.103 [2024-09-29 16:45:30.433510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.103 [2024-09-29 16:45:30.433547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.103 qpair failed and we were unable to recover it. 00:37:30.103 [2024-09-29 16:45:30.433713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.103 [2024-09-29 16:45:30.433748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.103 qpair failed and we were unable to recover it. 00:37:30.103 [2024-09-29 16:45:30.433868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.103 [2024-09-29 16:45:30.433904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.103 qpair failed and we were unable to recover it. 
00:37:30.103 [2024-09-29 16:45:30.434039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.103 [2024-09-29 16:45:30.434074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.103 qpair failed and we were unable to recover it. 00:37:30.103 [2024-09-29 16:45:30.434224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.103 [2024-09-29 16:45:30.434258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.103 qpair failed and we were unable to recover it. 00:37:30.103 [2024-09-29 16:45:30.434404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.103 [2024-09-29 16:45:30.434450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.103 qpair failed and we were unable to recover it. 00:37:30.103 [2024-09-29 16:45:30.434594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.103 [2024-09-29 16:45:30.434629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.103 qpair failed and we were unable to recover it. 00:37:30.103 [2024-09-29 16:45:30.434794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.103 [2024-09-29 16:45:30.434842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.103 qpair failed and we were unable to recover it. 
00:37:30.103 [2024-09-29 16:45:30.434966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.103 [2024-09-29 16:45:30.435012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.103 qpair failed and we were unable to recover it. 00:37:30.103 [2024-09-29 16:45:30.435186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.103 [2024-09-29 16:45:30.435220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.103 qpair failed and we were unable to recover it. 00:37:30.103 [2024-09-29 16:45:30.435344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.103 [2024-09-29 16:45:30.435378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.103 qpair failed and we were unable to recover it. 00:37:30.103 [2024-09-29 16:45:30.435524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.103 [2024-09-29 16:45:30.435559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.103 qpair failed and we were unable to recover it. 00:37:30.103 [2024-09-29 16:45:30.435688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.103 [2024-09-29 16:45:30.435724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.103 qpair failed and we were unable to recover it. 
00:37:30.103 [2024-09-29 16:45:30.435862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.103 [2024-09-29 16:45:30.435911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.103 qpair failed and we were unable to recover it. 00:37:30.103 [2024-09-29 16:45:30.436043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.103 [2024-09-29 16:45:30.436079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.103 qpair failed and we were unable to recover it. 00:37:30.103 [2024-09-29 16:45:30.436232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.103 [2024-09-29 16:45:30.436271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.103 qpair failed and we were unable to recover it. 00:37:30.103 [2024-09-29 16:45:30.436397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.103 [2024-09-29 16:45:30.436431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.103 qpair failed and we were unable to recover it. 00:37:30.103 [2024-09-29 16:45:30.436584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.103 [2024-09-29 16:45:30.436618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.103 qpair failed and we were unable to recover it. 
00:37:30.103 [2024-09-29 16:45:30.436753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.103 [2024-09-29 16:45:30.436788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.103 qpair failed and we were unable to recover it. 00:37:30.103 [2024-09-29 16:45:30.436903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.103 [2024-09-29 16:45:30.436937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.103 qpair failed and we were unable to recover it. 00:37:30.103 [2024-09-29 16:45:30.437060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.103 [2024-09-29 16:45:30.437094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.103 qpair failed and we were unable to recover it. 00:37:30.103 [2024-09-29 16:45:30.437234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.103 [2024-09-29 16:45:30.437268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.103 qpair failed and we were unable to recover it. 00:37:30.103 [2024-09-29 16:45:30.437418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.103 [2024-09-29 16:45:30.437463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.103 qpair failed and we were unable to recover it. 
00:37:30.103 [2024-09-29 16:45:30.437612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.103 [2024-09-29 16:45:30.437646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.103 qpair failed and we were unable to recover it. 00:37:30.103 [2024-09-29 16:45:30.437770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.103 [2024-09-29 16:45:30.437804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.103 qpair failed and we were unable to recover it. 00:37:30.103 [2024-09-29 16:45:30.437919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.103 [2024-09-29 16:45:30.437953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.103 qpair failed and we were unable to recover it. 00:37:30.103 [2024-09-29 16:45:30.438098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.103 [2024-09-29 16:45:30.438132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.103 qpair failed and we were unable to recover it. 00:37:30.103 [2024-09-29 16:45:30.438253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.103 [2024-09-29 16:45:30.438289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.103 qpair failed and we were unable to recover it. 
00:37:30.103 [2024-09-29 16:45:30.438445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.103 [2024-09-29 16:45:30.438494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.103 qpair failed and we were unable to recover it. 00:37:30.103 [2024-09-29 16:45:30.438658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.103 [2024-09-29 16:45:30.438714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.103 qpair failed and we were unable to recover it. 00:37:30.103 [2024-09-29 16:45:30.438838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.103 [2024-09-29 16:45:30.438873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.103 qpair failed and we were unable to recover it. 00:37:30.103 [2024-09-29 16:45:30.438987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.103 [2024-09-29 16:45:30.439023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.103 qpair failed and we were unable to recover it. 00:37:30.103 [2024-09-29 16:45:30.439127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.103 [2024-09-29 16:45:30.439160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.103 qpair failed and we were unable to recover it. 
00:37:30.104 [2024-09-29 16:45:30.439324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.104 [2024-09-29 16:45:30.439364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.104 qpair failed and we were unable to recover it. 00:37:30.104 [2024-09-29 16:45:30.439478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.104 [2024-09-29 16:45:30.439510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.104 qpair failed and we were unable to recover it. 00:37:30.104 [2024-09-29 16:45:30.439645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.104 [2024-09-29 16:45:30.439687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.104 qpair failed and we were unable to recover it. 00:37:30.104 [2024-09-29 16:45:30.439836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.104 [2024-09-29 16:45:30.439870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.104 qpair failed and we were unable to recover it. 00:37:30.104 [2024-09-29 16:45:30.439986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.104 [2024-09-29 16:45:30.440022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.104 qpair failed and we were unable to recover it. 
00:37:30.104 [2024-09-29 16:45:30.440145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.104 [2024-09-29 16:45:30.440185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.104 qpair failed and we were unable to recover it. 00:37:30.104 [2024-09-29 16:45:30.440310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.104 [2024-09-29 16:45:30.440345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.104 qpair failed and we were unable to recover it. 00:37:30.104 [2024-09-29 16:45:30.440492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.104 [2024-09-29 16:45:30.440526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.104 qpair failed and we were unable to recover it. 00:37:30.104 [2024-09-29 16:45:30.440678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.104 [2024-09-29 16:45:30.440713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.104 qpair failed and we were unable to recover it. 00:37:30.104 [2024-09-29 16:45:30.440855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.104 [2024-09-29 16:45:30.440888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.104 qpair failed and we were unable to recover it. 
00:37:30.104 [2024-09-29 16:45:30.441000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.104 [2024-09-29 16:45:30.441033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.104 qpair failed and we were unable to recover it. 00:37:30.104 [2024-09-29 16:45:30.441174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.104 [2024-09-29 16:45:30.441207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.104 qpair failed and we were unable to recover it. 00:37:30.104 [2024-09-29 16:45:30.441333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.104 [2024-09-29 16:45:30.441366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.104 qpair failed and we were unable to recover it. 00:37:30.104 [2024-09-29 16:45:30.441522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.104 [2024-09-29 16:45:30.441555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.104 qpair failed and we were unable to recover it. 00:37:30.104 [2024-09-29 16:45:30.441705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.104 [2024-09-29 16:45:30.441739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.104 qpair failed and we were unable to recover it. 
00:37:30.104 [2024-09-29 16:45:30.441857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.104 [2024-09-29 16:45:30.441890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.104 qpair failed and we were unable to recover it. 00:37:30.104 [2024-09-29 16:45:30.442030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.104 [2024-09-29 16:45:30.442064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.104 qpair failed and we were unable to recover it. 00:37:30.104 [2024-09-29 16:45:30.442165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.104 [2024-09-29 16:45:30.442198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.104 qpair failed and we were unable to recover it. 00:37:30.104 [2024-09-29 16:45:30.442335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.104 [2024-09-29 16:45:30.442367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.104 qpair failed and we were unable to recover it. 00:37:30.104 [2024-09-29 16:45:30.442484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.104 [2024-09-29 16:45:30.442518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.104 qpair failed and we were unable to recover it. 
00:37:30.104 [2024-09-29 16:45:30.442634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.104 [2024-09-29 16:45:30.442680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.104 qpair failed and we were unable to recover it. 00:37:30.104 [2024-09-29 16:45:30.442808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.104 [2024-09-29 16:45:30.442840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.104 qpair failed and we were unable to recover it. 00:37:30.104 [2024-09-29 16:45:30.442968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.104 [2024-09-29 16:45:30.443009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.104 qpair failed and we were unable to recover it. 00:37:30.104 [2024-09-29 16:45:30.443156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.104 [2024-09-29 16:45:30.443189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.104 qpair failed and we were unable to recover it. 00:37:30.104 [2024-09-29 16:45:30.443358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.104 [2024-09-29 16:45:30.443391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.104 qpair failed and we were unable to recover it. 
00:37:30.104 [2024-09-29 16:45:30.443518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.104 [2024-09-29 16:45:30.443561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.104 qpair failed and we were unable to recover it. 00:37:30.104 [2024-09-29 16:45:30.443705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.104 [2024-09-29 16:45:30.443753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.104 qpair failed and we were unable to recover it. 00:37:30.104 [2024-09-29 16:45:30.443942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.104 [2024-09-29 16:45:30.443995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.104 qpair failed and we were unable to recover it. 00:37:30.104 [2024-09-29 16:45:30.444119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.104 [2024-09-29 16:45:30.444154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.104 qpair failed and we were unable to recover it. 00:37:30.104 [2024-09-29 16:45:30.444297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.104 [2024-09-29 16:45:30.444330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.104 qpair failed and we were unable to recover it. 
00:37:30.104 [2024-09-29 16:45:30.444475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.104 [2024-09-29 16:45:30.444508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.104 qpair failed and we were unable to recover it. 00:37:30.104 [2024-09-29 16:45:30.444626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.104 [2024-09-29 16:45:30.444660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.104 qpair failed and we were unable to recover it. 00:37:30.104 [2024-09-29 16:45:30.444789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.104 [2024-09-29 16:45:30.444821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.104 qpair failed and we were unable to recover it. 00:37:30.104 [2024-09-29 16:45:30.444947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.104 [2024-09-29 16:45:30.445002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.104 qpair failed and we were unable to recover it. 00:37:30.104 [2024-09-29 16:45:30.445127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.104 [2024-09-29 16:45:30.445165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.104 qpair failed and we were unable to recover it. 
00:37:30.104 [2024-09-29 16:45:30.445310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.104 [2024-09-29 16:45:30.445344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.104 qpair failed and we were unable to recover it. 00:37:30.104 [2024-09-29 16:45:30.445459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.104 [2024-09-29 16:45:30.445493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.104 qpair failed and we were unable to recover it. 00:37:30.104 [2024-09-29 16:45:30.445645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.104 [2024-09-29 16:45:30.445697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.104 qpair failed and we were unable to recover it. 00:37:30.104 [2024-09-29 16:45:30.445825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.105 [2024-09-29 16:45:30.445859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.105 qpair failed and we were unable to recover it. 00:37:30.105 [2024-09-29 16:45:30.445983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.105 [2024-09-29 16:45:30.446016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.105 qpair failed and we were unable to recover it. 
00:37:30.105 [2024-09-29 16:45:30.446159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.105 [2024-09-29 16:45:30.446196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.105 qpair failed and we were unable to recover it. 00:37:30.105 [2024-09-29 16:45:30.446315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.105 [2024-09-29 16:45:30.446348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.105 qpair failed and we were unable to recover it. 00:37:30.105 [2024-09-29 16:45:30.446494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.105 [2024-09-29 16:45:30.446527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.105 qpair failed and we were unable to recover it. 00:37:30.105 [2024-09-29 16:45:30.446683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.105 [2024-09-29 16:45:30.446718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.105 qpair failed and we were unable to recover it. 00:37:30.105 [2024-09-29 16:45:30.446835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.105 [2024-09-29 16:45:30.446869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.105 qpair failed and we were unable to recover it. 
00:37:30.105 [2024-09-29 16:45:30.446991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.105 [2024-09-29 16:45:30.447025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.105 qpair failed and we were unable to recover it. 00:37:30.105 [2024-09-29 16:45:30.447192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.105 [2024-09-29 16:45:30.447225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.105 qpair failed and we were unable to recover it. 00:37:30.105 [2024-09-29 16:45:30.447344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.105 [2024-09-29 16:45:30.447378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.105 qpair failed and we were unable to recover it. 00:37:30.105 [2024-09-29 16:45:30.447494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.105 [2024-09-29 16:45:30.447530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.105 qpair failed and we were unable to recover it. 00:37:30.105 [2024-09-29 16:45:30.447679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.105 [2024-09-29 16:45:30.447716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.105 qpair failed and we were unable to recover it. 
00:37:30.105 [2024-09-29 16:45:30.447849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.105 [2024-09-29 16:45:30.447897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.105 qpair failed and we were unable to recover it. 00:37:30.105 [2024-09-29 16:45:30.448043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.105 [2024-09-29 16:45:30.448077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.105 qpair failed and we were unable to recover it. 00:37:30.105 [2024-09-29 16:45:30.448215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.105 [2024-09-29 16:45:30.448248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.105 qpair failed and we were unable to recover it. 00:37:30.105 [2024-09-29 16:45:30.448366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.105 [2024-09-29 16:45:30.448399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.105 qpair failed and we were unable to recover it. 00:37:30.105 [2024-09-29 16:45:30.448523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.105 [2024-09-29 16:45:30.448557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.105 qpair failed and we were unable to recover it. 
00:37:30.105 [2024-09-29 16:45:30.448686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.105 [2024-09-29 16:45:30.448719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.105 qpair failed and we were unable to recover it. 00:37:30.105 [2024-09-29 16:45:30.448841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.105 [2024-09-29 16:45:30.448875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.105 qpair failed and we were unable to recover it. 00:37:30.105 [2024-09-29 16:45:30.449032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.105 [2024-09-29 16:45:30.449076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.105 qpair failed and we were unable to recover it. 00:37:30.105 [2024-09-29 16:45:30.449219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.105 [2024-09-29 16:45:30.449252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.105 qpair failed and we were unable to recover it. 00:37:30.105 [2024-09-29 16:45:30.449365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.105 [2024-09-29 16:45:30.449398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.105 qpair failed and we were unable to recover it. 
00:37:30.105 [2024-09-29 16:45:30.449538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.105 [2024-09-29 16:45:30.449572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.105 qpair failed and we were unable to recover it. 00:37:30.105 [2024-09-29 16:45:30.449726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.105 [2024-09-29 16:45:30.449786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.105 qpair failed and we were unable to recover it. 00:37:30.105 [2024-09-29 16:45:30.449917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.105 [2024-09-29 16:45:30.449954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.105 qpair failed and we were unable to recover it. 00:37:30.105 [2024-09-29 16:45:30.450084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.105 [2024-09-29 16:45:30.450120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.105 qpair failed and we were unable to recover it. 00:37:30.105 [2024-09-29 16:45:30.450342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.105 [2024-09-29 16:45:30.450376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.105 qpair failed and we were unable to recover it. 
00:37:30.105 [2024-09-29 16:45:30.450521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.105 [2024-09-29 16:45:30.450556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.105 qpair failed and we were unable to recover it. 00:37:30.105 [2024-09-29 16:45:30.450731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.105 [2024-09-29 16:45:30.450780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.105 qpair failed and we were unable to recover it. 00:37:30.105 [2024-09-29 16:45:30.450919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.105 [2024-09-29 16:45:30.450957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.105 qpair failed and we were unable to recover it. 00:37:30.105 [2024-09-29 16:45:30.451086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.105 [2024-09-29 16:45:30.451121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.105 qpair failed and we were unable to recover it. 00:37:30.105 [2024-09-29 16:45:30.451239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.105 [2024-09-29 16:45:30.451274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.105 qpair failed and we were unable to recover it. 
00:37:30.105 [2024-09-29 16:45:30.451445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.105 [2024-09-29 16:45:30.451480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.105 qpair failed and we were unable to recover it. 00:37:30.105 [2024-09-29 16:45:30.451640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.105 [2024-09-29 16:45:30.451709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.105 qpair failed and we were unable to recover it. 00:37:30.105 [2024-09-29 16:45:30.451833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.105 [2024-09-29 16:45:30.451870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.105 qpair failed and we were unable to recover it. 00:37:30.105 [2024-09-29 16:45:30.451982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.105 [2024-09-29 16:45:30.452016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.105 qpair failed and we were unable to recover it. 00:37:30.105 [2024-09-29 16:45:30.452158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.105 [2024-09-29 16:45:30.452192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.105 qpair failed and we were unable to recover it. 
00:37:30.105 [2024-09-29 16:45:30.452316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.105 [2024-09-29 16:45:30.452352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.105 qpair failed and we were unable to recover it. 00:37:30.105 [2024-09-29 16:45:30.452494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.106 [2024-09-29 16:45:30.452528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.106 qpair failed and we were unable to recover it. 00:37:30.106 [2024-09-29 16:45:30.452695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.106 [2024-09-29 16:45:30.452743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.106 qpair failed and we were unable to recover it. 00:37:30.106 [2024-09-29 16:45:30.452867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.106 [2024-09-29 16:45:30.452904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.106 qpair failed and we were unable to recover it. 00:37:30.106 [2024-09-29 16:45:30.453054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.106 [2024-09-29 16:45:30.453089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.106 qpair failed and we were unable to recover it. 
00:37:30.106 [2024-09-29 16:45:30.453199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.106 [2024-09-29 16:45:30.453240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.106 qpair failed and we were unable to recover it. 00:37:30.106 [2024-09-29 16:45:30.453357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.106 [2024-09-29 16:45:30.453390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.106 qpair failed and we were unable to recover it. 00:37:30.106 [2024-09-29 16:45:30.453500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.106 [2024-09-29 16:45:30.453533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.106 qpair failed and we were unable to recover it. 00:37:30.106 [2024-09-29 16:45:30.453704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.106 [2024-09-29 16:45:30.453753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.106 qpair failed and we were unable to recover it. 00:37:30.106 [2024-09-29 16:45:30.453881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.106 [2024-09-29 16:45:30.453917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.106 qpair failed and we were unable to recover it. 
00:37:30.106 [2024-09-29 16:45:30.454068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.106 [2024-09-29 16:45:30.454103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.106 qpair failed and we were unable to recover it. 00:37:30.106 [2024-09-29 16:45:30.454243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.106 [2024-09-29 16:45:30.454277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.106 qpair failed and we were unable to recover it. 00:37:30.106 [2024-09-29 16:45:30.454405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.106 [2024-09-29 16:45:30.454454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.106 qpair failed and we were unable to recover it. 00:37:30.106 [2024-09-29 16:45:30.454583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.106 [2024-09-29 16:45:30.454620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.106 qpair failed and we were unable to recover it. 00:37:30.106 [2024-09-29 16:45:30.454757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.106 [2024-09-29 16:45:30.454792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.106 qpair failed and we were unable to recover it. 
00:37:30.106 [2024-09-29 16:45:30.454919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.106 [2024-09-29 16:45:30.454953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.106 qpair failed and we were unable to recover it. 00:37:30.106 [2024-09-29 16:45:30.455096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.106 [2024-09-29 16:45:30.455130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.106 qpair failed and we were unable to recover it. 00:37:30.106 [2024-09-29 16:45:30.455263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.106 [2024-09-29 16:45:30.455297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.106 qpair failed and we were unable to recover it. 00:37:30.106 [2024-09-29 16:45:30.455417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.106 [2024-09-29 16:45:30.455452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.106 qpair failed and we were unable to recover it. 00:37:30.106 [2024-09-29 16:45:30.455600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.106 [2024-09-29 16:45:30.455635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.106 qpair failed and we were unable to recover it. 
00:37:30.106 [2024-09-29 16:45:30.455765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.106 [2024-09-29 16:45:30.455799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.106 qpair failed and we were unable to recover it.
00:37:30.106 [2024-09-29 16:45:30.456147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.106 [2024-09-29 16:45:30.456196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.106 qpair failed and we were unable to recover it.
00:37:30.106 [2024-09-29 16:45:30.456348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.106 [2024-09-29 16:45:30.456382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.106 qpair failed and we were unable to recover it.
00:37:30.106 [2024-09-29 16:45:30.456528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.106 [2024-09-29 16:45:30.456562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.106 qpair failed and we were unable to recover it.
00:37:30.106 [2024-09-29 16:45:30.456685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.106 [2024-09-29 16:45:30.456721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.106 qpair failed and we were unable to recover it.
00:37:30.106 [2024-09-29 16:45:30.456831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.106 [2024-09-29 16:45:30.456865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.106 qpair failed and we were unable to recover it.
00:37:30.106 [2024-09-29 16:45:30.457033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.106 [2024-09-29 16:45:30.457082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:30.106 qpair failed and we were unable to recover it.
00:37:30.106 [2024-09-29 16:45:30.457312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.106 [2024-09-29 16:45:30.457346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.106 qpair failed and we were unable to recover it.
00:37:30.106 [2024-09-29 16:45:30.457459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.106 [2024-09-29 16:45:30.457493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.106 qpair failed and we were unable to recover it.
00:37:30.106 [2024-09-29 16:45:30.457607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.106 [2024-09-29 16:45:30.457641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.106 qpair failed and we were unable to recover it.
00:37:30.106 [2024-09-29 16:45:30.457794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.106 [2024-09-29 16:45:30.457829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.106 qpair failed and we were unable to recover it.
00:37:30.106 [2024-09-29 16:45:30.457959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.106 [2024-09-29 16:45:30.458007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.106 qpair failed and we were unable to recover it.
00:37:30.106 [2024-09-29 16:45:30.458130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.106 [2024-09-29 16:45:30.458167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.106 qpair failed and we were unable to recover it.
00:37:30.106 [2024-09-29 16:45:30.458340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.106 [2024-09-29 16:45:30.458376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.106 qpair failed and we were unable to recover it.
00:37:30.106 [2024-09-29 16:45:30.458486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.106 [2024-09-29 16:45:30.458521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.106 qpair failed and we were unable to recover it.
00:37:30.106 [2024-09-29 16:45:30.458700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.106 [2024-09-29 16:45:30.458748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.106 qpair failed and we were unable to recover it.
00:37:30.106 [2024-09-29 16:45:30.458905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.106 [2024-09-29 16:45:30.458953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:30.106 qpair failed and we were unable to recover it.
00:37:30.106 [2024-09-29 16:45:30.459110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.106 [2024-09-29 16:45:30.459148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:30.106 qpair failed and we were unable to recover it.
00:37:30.106 [2024-09-29 16:45:30.459293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.106 [2024-09-29 16:45:30.459328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:30.106 qpair failed and we were unable to recover it.
00:37:30.106 [2024-09-29 16:45:30.459454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.106 [2024-09-29 16:45:30.459488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:30.106 qpair failed and we were unable to recover it.
00:37:30.107 [2024-09-29 16:45:30.459669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.107 [2024-09-29 16:45:30.459713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:30.107 qpair failed and we were unable to recover it.
00:37:30.107 [2024-09-29 16:45:30.459858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.107 [2024-09-29 16:45:30.459892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:30.107 qpair failed and we were unable to recover it.
00:37:30.107 [2024-09-29 16:45:30.460088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.107 [2024-09-29 16:45:30.460142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.107 qpair failed and we were unable to recover it.
00:37:30.107 [2024-09-29 16:45:30.460288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.107 [2024-09-29 16:45:30.460324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.107 qpair failed and we were unable to recover it.
00:37:30.107 [2024-09-29 16:45:30.460446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.107 [2024-09-29 16:45:30.460480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.107 qpair failed and we were unable to recover it.
00:37:30.107 [2024-09-29 16:45:30.460599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.107 [2024-09-29 16:45:30.460638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.107 qpair failed and we were unable to recover it.
00:37:30.107 [2024-09-29 16:45:30.460798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.107 [2024-09-29 16:45:30.460831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.107 qpair failed and we were unable to recover it.
00:37:30.107 [2024-09-29 16:45:30.460950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.107 [2024-09-29 16:45:30.460993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.107 qpair failed and we were unable to recover it.
00:37:30.107 [2024-09-29 16:45:30.461144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.107 [2024-09-29 16:45:30.461177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.107 qpair failed and we were unable to recover it.
00:37:30.107 [2024-09-29 16:45:30.461352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.107 [2024-09-29 16:45:30.461385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.107 qpair failed and we were unable to recover it.
00:37:30.107 [2024-09-29 16:45:30.461574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.107 [2024-09-29 16:45:30.461622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.107 qpair failed and we were unable to recover it.
00:37:30.107 [2024-09-29 16:45:30.461777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.107 [2024-09-29 16:45:30.461826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:30.107 qpair failed and we were unable to recover it.
00:37:30.107 [2024-09-29 16:45:30.461960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.107 [2024-09-29 16:45:30.462000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:30.107 qpair failed and we were unable to recover it.
00:37:30.107 [2024-09-29 16:45:30.462122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.107 [2024-09-29 16:45:30.462157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.107 qpair failed and we were unable to recover it.
00:37:30.107 [2024-09-29 16:45:30.462287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.107 [2024-09-29 16:45:30.462320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.107 qpair failed and we were unable to recover it.
00:37:30.107 [2024-09-29 16:45:30.462467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.107 [2024-09-29 16:45:30.462501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.107 qpair failed and we were unable to recover it.
00:37:30.107 [2024-09-29 16:45:30.462617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.107 [2024-09-29 16:45:30.462651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.107 qpair failed and we were unable to recover it.
00:37:30.107 [2024-09-29 16:45:30.462789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.107 [2024-09-29 16:45:30.462827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.107 qpair failed and we were unable to recover it.
00:37:30.107 [2024-09-29 16:45:30.462979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.107 [2024-09-29 16:45:30.463027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.107 qpair failed and we were unable to recover it.
00:37:30.107 [2024-09-29 16:45:30.463267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.107 [2024-09-29 16:45:30.463303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.107 qpair failed and we were unable to recover it.
00:37:30.107 [2024-09-29 16:45:30.463481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.107 [2024-09-29 16:45:30.463516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.107 qpair failed and we were unable to recover it.
00:37:30.107 [2024-09-29 16:45:30.463640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.107 [2024-09-29 16:45:30.463680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.107 qpair failed and we were unable to recover it.
00:37:30.107 [2024-09-29 16:45:30.463807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.107 [2024-09-29 16:45:30.463842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.107 qpair failed and we were unable to recover it.
00:37:30.107 [2024-09-29 16:45:30.463958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.107 [2024-09-29 16:45:30.464002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.107 qpair failed and we were unable to recover it.
00:37:30.107 [2024-09-29 16:45:30.464116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.107 [2024-09-29 16:45:30.464150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.107 qpair failed and we were unable to recover it.
00:37:30.107 [2024-09-29 16:45:30.464267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.107 [2024-09-29 16:45:30.464301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.107 qpair failed and we were unable to recover it.
00:37:30.107 [2024-09-29 16:45:30.464429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.107 [2024-09-29 16:45:30.464464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.107 qpair failed and we were unable to recover it.
00:37:30.107 [2024-09-29 16:45:30.464592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.107 [2024-09-29 16:45:30.464640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.107 qpair failed and we were unable to recover it.
00:37:30.107 [2024-09-29 16:45:30.464798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.107 [2024-09-29 16:45:30.464846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:30.107 qpair failed and we were unable to recover it.
00:37:30.107 [2024-09-29 16:45:30.464994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.107 [2024-09-29 16:45:30.465030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:30.107 qpair failed and we were unable to recover it.
00:37:30.107 [2024-09-29 16:45:30.465205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.107 [2024-09-29 16:45:30.465240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:30.107 qpair failed and we were unable to recover it.
00:37:30.107 [2024-09-29 16:45:30.465350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.107 [2024-09-29 16:45:30.465384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:30.107 qpair failed and we were unable to recover it.
00:37:30.107 [2024-09-29 16:45:30.465504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.107 [2024-09-29 16:45:30.465540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.107 qpair failed and we were unable to recover it.
00:37:30.107 [2024-09-29 16:45:30.465670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.107 [2024-09-29 16:45:30.465727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.107 qpair failed and we were unable to recover it.
00:37:30.107 [2024-09-29 16:45:30.465859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.107 [2024-09-29 16:45:30.465897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.107 qpair failed and we were unable to recover it.
00:37:30.107 [2024-09-29 16:45:30.466037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.107 [2024-09-29 16:45:30.466072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.107 qpair failed and we were unable to recover it.
00:37:30.107 [2024-09-29 16:45:30.466223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.107 [2024-09-29 16:45:30.466260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.107 qpair failed and we were unable to recover it.
00:37:30.107 [2024-09-29 16:45:30.466380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.107 [2024-09-29 16:45:30.466413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.107 qpair failed and we were unable to recover it.
00:37:30.107 [2024-09-29 16:45:30.466531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.108 [2024-09-29 16:45:30.466567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:30.108 qpair failed and we were unable to recover it.
00:37:30.108 [2024-09-29 16:45:30.466722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.108 [2024-09-29 16:45:30.466758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.108 qpair failed and we were unable to recover it.
00:37:30.108 [2024-09-29 16:45:30.466878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.108 [2024-09-29 16:45:30.466912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.108 qpair failed and we were unable to recover it.
00:37:30.108 [2024-09-29 16:45:30.467048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.108 [2024-09-29 16:45:30.467082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.108 qpair failed and we were unable to recover it.
00:37:30.108 [2024-09-29 16:45:30.467223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.108 [2024-09-29 16:45:30.467256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.108 qpair failed and we were unable to recover it.
00:37:30.108 [2024-09-29 16:45:30.467391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.108 [2024-09-29 16:45:30.467440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.108 qpair failed and we were unable to recover it.
00:37:30.108 [2024-09-29 16:45:30.467573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.108 [2024-09-29 16:45:30.467609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.108 qpair failed and we were unable to recover it.
00:37:30.108 [2024-09-29 16:45:30.467786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.108 [2024-09-29 16:45:30.467827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.108 qpair failed and we were unable to recover it.
00:37:30.108 [2024-09-29 16:45:30.467938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.108 [2024-09-29 16:45:30.467984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.108 qpair failed and we were unable to recover it.
00:37:30.108 [2024-09-29 16:45:30.468119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.108 [2024-09-29 16:45:30.468153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.108 qpair failed and we were unable to recover it.
00:37:30.108 [2024-09-29 16:45:30.468268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.108 [2024-09-29 16:45:30.468302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.108 qpair failed and we were unable to recover it.
00:37:30.108 [2024-09-29 16:45:30.468446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.108 [2024-09-29 16:45:30.468479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.108 qpair failed and we were unable to recover it.
00:37:30.108 [2024-09-29 16:45:30.468610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.108 [2024-09-29 16:45:30.468668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.108 qpair failed and we were unable to recover it.
00:37:30.108 [2024-09-29 16:45:30.468818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.108 [2024-09-29 16:45:30.468866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:30.108 qpair failed and we were unable to recover it.
00:37:30.108 [2024-09-29 16:45:30.469028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.108 [2024-09-29 16:45:30.469064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:30.108 qpair failed and we were unable to recover it.
00:37:30.108 [2024-09-29 16:45:30.469213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.108 [2024-09-29 16:45:30.469249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:30.108 qpair failed and we were unable to recover it.
00:37:30.108 [2024-09-29 16:45:30.469366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.108 [2024-09-29 16:45:30.469401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:30.108 qpair failed and we were unable to recover it.
00:37:30.108 [2024-09-29 16:45:30.469554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.108 [2024-09-29 16:45:30.469589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.108 qpair failed and we were unable to recover it.
00:37:30.108 [2024-09-29 16:45:30.469732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.108 [2024-09-29 16:45:30.469768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.108 qpair failed and we were unable to recover it.
00:37:30.108 [2024-09-29 16:45:30.469909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.108 [2024-09-29 16:45:30.469957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.108 qpair failed and we were unable to recover it.
00:37:30.108 [2024-09-29 16:45:30.470108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.108 [2024-09-29 16:45:30.470143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.108 qpair failed and we were unable to recover it.
00:37:30.108 [2024-09-29 16:45:30.470298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.108 [2024-09-29 16:45:30.470333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.108 qpair failed and we were unable to recover it.
00:37:30.108 [2024-09-29 16:45:30.470445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.108 [2024-09-29 16:45:30.470478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.108 qpair failed and we were unable to recover it.
00:37:30.108 [2024-09-29 16:45:30.470603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.108 [2024-09-29 16:45:30.470639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:30.108 qpair failed and we were unable to recover it.
00:37:30.108 [2024-09-29 16:45:30.470778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.108 [2024-09-29 16:45:30.470812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:30.108 qpair failed and we were unable to recover it.
00:37:30.108 [2024-09-29 16:45:30.470933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.108 [2024-09-29 16:45:30.470977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:30.108 qpair failed and we were unable to recover it.
00:37:30.108 [2024-09-29 16:45:30.471117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.108 [2024-09-29 16:45:30.471152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:30.108 qpair failed and we were unable to recover it.
00:37:30.108 [2024-09-29 16:45:30.471275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.108 [2024-09-29 16:45:30.471310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:30.108 qpair failed and we were unable to recover it.
00:37:30.108 [2024-09-29 16:45:30.471461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.108 [2024-09-29 16:45:30.471496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.108 qpair failed and we were unable to recover it.
00:37:30.108 [2024-09-29 16:45:30.471606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.108 [2024-09-29 16:45:30.471642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.108 qpair failed and we were unable to recover it.
00:37:30.108 [2024-09-29 16:45:30.471775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.108 [2024-09-29 16:45:30.471809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.108 qpair failed and we were unable to recover it.
00:37:30.108 [2024-09-29 16:45:30.471916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.108 [2024-09-29 16:45:30.471949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.108 qpair failed and we were unable to recover it.
00:37:30.108 [2024-09-29 16:45:30.472125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.108 [2024-09-29 16:45:30.472158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.108 qpair failed and we were unable to recover it.
00:37:30.108 [2024-09-29 16:45:30.472270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.108 [2024-09-29 16:45:30.472303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.108 qpair failed and we were unable to recover it.
00:37:30.108 [2024-09-29 16:45:30.472453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.109 [2024-09-29 16:45:30.472488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:30.109 qpair failed and we were unable to recover it.
00:37:30.109 [2024-09-29 16:45:30.472616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.109 [2024-09-29 16:45:30.472663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.109 qpair failed and we were unable to recover it.
00:37:30.109 [2024-09-29 16:45:30.472805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.109 [2024-09-29 16:45:30.472852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.109 qpair failed and we were unable to recover it.
00:37:30.109 [2024-09-29 16:45:30.472994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.109 [2024-09-29 16:45:30.473030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.109 qpair failed and we were unable to recover it. 00:37:30.109 [2024-09-29 16:45:30.473178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.109 [2024-09-29 16:45:30.473213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.109 qpair failed and we were unable to recover it. 00:37:30.109 [2024-09-29 16:45:30.473378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.109 [2024-09-29 16:45:30.473412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.109 qpair failed and we were unable to recover it. 00:37:30.109 [2024-09-29 16:45:30.473544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.109 [2024-09-29 16:45:30.473579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.109 qpair failed and we were unable to recover it. 00:37:30.109 [2024-09-29 16:45:30.473712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.109 [2024-09-29 16:45:30.473749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.109 qpair failed and we were unable to recover it. 
00:37:30.109 [2024-09-29 16:45:30.473879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.109 [2024-09-29 16:45:30.473915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.109 qpair failed and we were unable to recover it. 00:37:30.109 [2024-09-29 16:45:30.474039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.109 [2024-09-29 16:45:30.474075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.109 qpair failed and we were unable to recover it. 00:37:30.109 [2024-09-29 16:45:30.474222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.109 [2024-09-29 16:45:30.474258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.109 qpair failed and we were unable to recover it. 00:37:30.109 [2024-09-29 16:45:30.474401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.109 [2024-09-29 16:45:30.474436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.109 qpair failed and we were unable to recover it. 00:37:30.109 [2024-09-29 16:45:30.474554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.109 [2024-09-29 16:45:30.474589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.109 qpair failed and we were unable to recover it. 
00:37:30.109 [2024-09-29 16:45:30.474721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.109 [2024-09-29 16:45:30.474764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.109 qpair failed and we were unable to recover it. 00:37:30.109 [2024-09-29 16:45:30.474880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.109 [2024-09-29 16:45:30.474913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.109 qpair failed and we were unable to recover it. 00:37:30.109 [2024-09-29 16:45:30.475031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.109 [2024-09-29 16:45:30.475068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.109 qpair failed and we were unable to recover it. 00:37:30.109 [2024-09-29 16:45:30.475186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.109 [2024-09-29 16:45:30.475220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.109 qpair failed and we were unable to recover it. 00:37:30.109 [2024-09-29 16:45:30.475367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.109 [2024-09-29 16:45:30.475403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.109 qpair failed and we were unable to recover it. 
00:37:30.109 [2024-09-29 16:45:30.475551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.109 [2024-09-29 16:45:30.475586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.109 qpair failed and we were unable to recover it. 00:37:30.109 [2024-09-29 16:45:30.475715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.109 [2024-09-29 16:45:30.475750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.109 qpair failed and we were unable to recover it. 00:37:30.109 [2024-09-29 16:45:30.475857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.109 [2024-09-29 16:45:30.475890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.109 qpair failed and we were unable to recover it. 00:37:30.109 [2024-09-29 16:45:30.476020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.109 [2024-09-29 16:45:30.476056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.109 qpair failed and we were unable to recover it. 00:37:30.109 [2024-09-29 16:45:30.476214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.109 [2024-09-29 16:45:30.476262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.109 qpair failed and we were unable to recover it. 
00:37:30.109 [2024-09-29 16:45:30.476416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.109 [2024-09-29 16:45:30.476451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.109 qpair failed and we were unable to recover it. 00:37:30.109 [2024-09-29 16:45:30.476607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.109 [2024-09-29 16:45:30.476642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.109 qpair failed and we were unable to recover it. 00:37:30.109 [2024-09-29 16:45:30.476785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.109 [2024-09-29 16:45:30.476821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.109 qpair failed and we were unable to recover it. 00:37:30.109 [2024-09-29 16:45:30.476931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.109 [2024-09-29 16:45:30.476973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.109 qpair failed and we were unable to recover it. 00:37:30.109 [2024-09-29 16:45:30.477095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.109 [2024-09-29 16:45:30.477129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.109 qpair failed and we were unable to recover it. 
00:37:30.109 [2024-09-29 16:45:30.477269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.109 [2024-09-29 16:45:30.477303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.109 qpair failed and we were unable to recover it. 00:37:30.109 [2024-09-29 16:45:30.477478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.109 [2024-09-29 16:45:30.477515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.109 qpair failed and we were unable to recover it. 00:37:30.109 [2024-09-29 16:45:30.477666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.109 [2024-09-29 16:45:30.477707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.109 qpair failed and we were unable to recover it. 00:37:30.109 [2024-09-29 16:45:30.477821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.109 [2024-09-29 16:45:30.477855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.109 qpair failed and we were unable to recover it. 00:37:30.109 [2024-09-29 16:45:30.477968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.109 [2024-09-29 16:45:30.478002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.109 qpair failed and we were unable to recover it. 
00:37:30.109 [2024-09-29 16:45:30.478127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.109 [2024-09-29 16:45:30.478162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.109 qpair failed and we were unable to recover it. 00:37:30.109 [2024-09-29 16:45:30.478294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.109 [2024-09-29 16:45:30.478342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.109 qpair failed and we were unable to recover it. 00:37:30.109 [2024-09-29 16:45:30.478494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.109 [2024-09-29 16:45:30.478529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.109 qpair failed and we were unable to recover it. 00:37:30.109 [2024-09-29 16:45:30.478682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.109 [2024-09-29 16:45:30.478717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.109 qpair failed and we were unable to recover it. 00:37:30.109 [2024-09-29 16:45:30.478828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.109 [2024-09-29 16:45:30.478863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.109 qpair failed and we were unable to recover it. 
00:37:30.109 [2024-09-29 16:45:30.479002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.109 [2024-09-29 16:45:30.479037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.109 qpair failed and we were unable to recover it. 00:37:30.110 [2024-09-29 16:45:30.479153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.110 [2024-09-29 16:45:30.479187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.110 qpair failed and we were unable to recover it. 00:37:30.110 [2024-09-29 16:45:30.479313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.110 [2024-09-29 16:45:30.479348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.110 qpair failed and we were unable to recover it. 00:37:30.110 [2024-09-29 16:45:30.479500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.110 [2024-09-29 16:45:30.479534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.110 qpair failed and we were unable to recover it. 00:37:30.110 [2024-09-29 16:45:30.479679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.110 [2024-09-29 16:45:30.479714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.110 qpair failed and we were unable to recover it. 
00:37:30.110 [2024-09-29 16:45:30.479826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.110 [2024-09-29 16:45:30.479861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.110 qpair failed and we were unable to recover it. 00:37:30.110 [2024-09-29 16:45:30.479973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.110 [2024-09-29 16:45:30.480008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.110 qpair failed and we were unable to recover it. 00:37:30.110 [2024-09-29 16:45:30.480132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.110 [2024-09-29 16:45:30.480176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.110 qpair failed and we were unable to recover it. 00:37:30.110 [2024-09-29 16:45:30.480289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.110 [2024-09-29 16:45:30.480323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.110 qpair failed and we were unable to recover it. 00:37:30.110 [2024-09-29 16:45:30.480473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.110 [2024-09-29 16:45:30.480512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.110 qpair failed and we were unable to recover it. 
00:37:30.110 [2024-09-29 16:45:30.480629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.110 [2024-09-29 16:45:30.480667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.110 qpair failed and we were unable to recover it. 00:37:30.110 [2024-09-29 16:45:30.480800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.110 [2024-09-29 16:45:30.480835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.110 qpair failed and we were unable to recover it. 00:37:30.110 [2024-09-29 16:45:30.480965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.110 [2024-09-29 16:45:30.481000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.110 qpair failed and we were unable to recover it. 00:37:30.110 [2024-09-29 16:45:30.481147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.110 [2024-09-29 16:45:30.481180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.110 qpair failed and we were unable to recover it. 00:37:30.110 [2024-09-29 16:45:30.481313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.110 [2024-09-29 16:45:30.481360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.110 qpair failed and we were unable to recover it. 
00:37:30.110 [2024-09-29 16:45:30.481509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.110 [2024-09-29 16:45:30.481550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.110 qpair failed and we were unable to recover it. 00:37:30.110 [2024-09-29 16:45:30.481700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.110 [2024-09-29 16:45:30.481736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.110 qpair failed and we were unable to recover it. 00:37:30.110 [2024-09-29 16:45:30.481850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.110 [2024-09-29 16:45:30.481884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.110 qpair failed and we were unable to recover it. 00:37:30.110 [2024-09-29 16:45:30.482008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.110 [2024-09-29 16:45:30.482053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.110 qpair failed and we were unable to recover it. 00:37:30.110 [2024-09-29 16:45:30.482221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.110 [2024-09-29 16:45:30.482255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.110 qpair failed and we were unable to recover it. 
00:37:30.110 [2024-09-29 16:45:30.482376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.110 [2024-09-29 16:45:30.482412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.110 qpair failed and we were unable to recover it. 00:37:30.110 [2024-09-29 16:45:30.482529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.110 [2024-09-29 16:45:30.482567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.110 qpair failed and we were unable to recover it. 00:37:30.110 [2024-09-29 16:45:30.482700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.110 [2024-09-29 16:45:30.482735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.110 qpair failed and we were unable to recover it. 00:37:30.110 [2024-09-29 16:45:30.482862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.110 [2024-09-29 16:45:30.482896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.110 qpair failed and we were unable to recover it. 00:37:30.110 [2024-09-29 16:45:30.483066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.110 [2024-09-29 16:45:30.483099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.110 qpair failed and we were unable to recover it. 
00:37:30.110 [2024-09-29 16:45:30.483207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.110 [2024-09-29 16:45:30.483240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.110 qpair failed and we were unable to recover it. 00:37:30.110 [2024-09-29 16:45:30.483400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.110 [2024-09-29 16:45:30.483434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.110 qpair failed and we were unable to recover it. 00:37:30.110 [2024-09-29 16:45:30.483579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.110 [2024-09-29 16:45:30.483615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.110 qpair failed and we were unable to recover it. 00:37:30.110 [2024-09-29 16:45:30.483745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.110 [2024-09-29 16:45:30.483781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.110 qpair failed and we were unable to recover it. 00:37:30.110 [2024-09-29 16:45:30.483935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.110 [2024-09-29 16:45:30.483980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.110 qpair failed and we were unable to recover it. 
00:37:30.110 [2024-09-29 16:45:30.484155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.110 [2024-09-29 16:45:30.484189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.110 qpair failed and we were unable to recover it. 00:37:30.110 [2024-09-29 16:45:30.484367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.110 [2024-09-29 16:45:30.484401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.110 qpair failed and we were unable to recover it. 00:37:30.110 [2024-09-29 16:45:30.484528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.110 [2024-09-29 16:45:30.484562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.110 qpair failed and we were unable to recover it. 00:37:30.110 [2024-09-29 16:45:30.484689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.110 [2024-09-29 16:45:30.484731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.110 qpair failed and we were unable to recover it. 00:37:30.110 [2024-09-29 16:45:30.484867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.110 [2024-09-29 16:45:30.484914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.110 qpair failed and we were unable to recover it. 
00:37:30.110 [2024-09-29 16:45:30.485048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.110 [2024-09-29 16:45:30.485085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.110 qpair failed and we were unable to recover it. 00:37:30.110 [2024-09-29 16:45:30.485227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.110 [2024-09-29 16:45:30.485262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.110 qpair failed and we were unable to recover it. 00:37:30.110 [2024-09-29 16:45:30.485399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.110 [2024-09-29 16:45:30.485434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.110 qpair failed and we were unable to recover it. 00:37:30.110 [2024-09-29 16:45:30.485589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.110 [2024-09-29 16:45:30.485623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.110 qpair failed and we were unable to recover it. 00:37:30.110 [2024-09-29 16:45:30.485760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.111 [2024-09-29 16:45:30.485796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.111 qpair failed and we were unable to recover it. 
00:37:30.111 [2024-09-29 16:45:30.485946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.111 [2024-09-29 16:45:30.485989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.111 qpair failed and we were unable to recover it. 00:37:30.111 [2024-09-29 16:45:30.486130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.111 [2024-09-29 16:45:30.486165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.111 qpair failed and we were unable to recover it. 00:37:30.111 [2024-09-29 16:45:30.486309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.111 [2024-09-29 16:45:30.486344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.111 qpair failed and we were unable to recover it. 00:37:30.111 [2024-09-29 16:45:30.486489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.111 [2024-09-29 16:45:30.486523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.111 qpair failed and we were unable to recover it. 00:37:30.111 [2024-09-29 16:45:30.486667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.111 [2024-09-29 16:45:30.486707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.111 qpair failed and we were unable to recover it. 
00:37:30.111 [2024-09-29 16:45:30.486856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.111 [2024-09-29 16:45:30.486890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.111 qpair failed and we were unable to recover it. 00:37:30.111 [2024-09-29 16:45:30.487058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.111 [2024-09-29 16:45:30.487106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.111 qpair failed and we were unable to recover it. 00:37:30.111 [2024-09-29 16:45:30.487287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.111 [2024-09-29 16:45:30.487322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.111 qpair failed and we were unable to recover it. 00:37:30.111 [2024-09-29 16:45:30.487475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.111 [2024-09-29 16:45:30.487511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.111 qpair failed and we were unable to recover it. 00:37:30.111 [2024-09-29 16:45:30.487665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.111 [2024-09-29 16:45:30.487704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.111 qpair failed and we were unable to recover it. 
00:37:30.111 [2024-09-29 16:45:30.487835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.111 [2024-09-29 16:45:30.487883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.111 qpair failed and we were unable to recover it. 00:37:30.111 [2024-09-29 16:45:30.488038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.111 [2024-09-29 16:45:30.488074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.111 qpair failed and we were unable to recover it. 00:37:30.111 [2024-09-29 16:45:30.488201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.111 [2024-09-29 16:45:30.488235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.111 qpair failed and we were unable to recover it. 00:37:30.111 [2024-09-29 16:45:30.488350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.111 [2024-09-29 16:45:30.488385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.111 qpair failed and we were unable to recover it. 00:37:30.111 [2024-09-29 16:45:30.488535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.111 [2024-09-29 16:45:30.488570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.111 qpair failed and we were unable to recover it. 
00:37:30.111 [2024-09-29 16:45:30.488719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.111 [2024-09-29 16:45:30.488759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.111 qpair failed and we were unable to recover it. 00:37:30.111 [2024-09-29 16:45:30.488882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.111 [2024-09-29 16:45:30.488916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.111 qpair failed and we were unable to recover it. 00:37:30.111 [2024-09-29 16:45:30.489063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.111 [2024-09-29 16:45:30.489097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.111 qpair failed and we were unable to recover it. 00:37:30.111 [2024-09-29 16:45:30.489209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.111 [2024-09-29 16:45:30.489242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.111 qpair failed and we were unable to recover it. 00:37:30.111 [2024-09-29 16:45:30.489362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.111 [2024-09-29 16:45:30.489396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.111 qpair failed and we were unable to recover it. 
00:37:30.111 [2024-09-29 16:45:30.489534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.111 [2024-09-29 16:45:30.489568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.111 qpair failed and we were unable to recover it. 00:37:30.111 [2024-09-29 16:45:30.489712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.111 [2024-09-29 16:45:30.489748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.111 qpair failed and we were unable to recover it. 00:37:30.111 [2024-09-29 16:45:30.489866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.111 [2024-09-29 16:45:30.489902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.111 qpair failed and we were unable to recover it. 00:37:30.111 [2024-09-29 16:45:30.490037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.111 [2024-09-29 16:45:30.490091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.111 qpair failed and we were unable to recover it. 00:37:30.111 [2024-09-29 16:45:30.490272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.111 [2024-09-29 16:45:30.490320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.111 qpair failed and we were unable to recover it. 
00:37:30.111 [2024-09-29 16:45:30.490469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.111 [2024-09-29 16:45:30.490518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.111 qpair failed and we were unable to recover it. 00:37:30.111 [2024-09-29 16:45:30.490638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.111 [2024-09-29 16:45:30.490680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.111 qpair failed and we were unable to recover it. 00:37:30.111 [2024-09-29 16:45:30.490826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.111 [2024-09-29 16:45:30.490860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.111 qpair failed and we were unable to recover it. 00:37:30.111 [2024-09-29 16:45:30.490979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.111 [2024-09-29 16:45:30.491015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.111 qpair failed and we were unable to recover it. 00:37:30.111 [2024-09-29 16:45:30.491197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.111 [2024-09-29 16:45:30.491231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.111 qpair failed and we were unable to recover it. 
00:37:30.111 [2024-09-29 16:45:30.491401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.111 [2024-09-29 16:45:30.491435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.111 qpair failed and we were unable to recover it. 00:37:30.111 [2024-09-29 16:45:30.491578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.111 [2024-09-29 16:45:30.491612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.111 qpair failed and we were unable to recover it. 00:37:30.111 [2024-09-29 16:45:30.491779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.111 [2024-09-29 16:45:30.491814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.111 qpair failed and we were unable to recover it. 00:37:30.111 [2024-09-29 16:45:30.491939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.111 [2024-09-29 16:45:30.491987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.111 qpair failed and we were unable to recover it. 00:37:30.111 [2024-09-29 16:45:30.492133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.111 [2024-09-29 16:45:30.492168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.111 qpair failed and we were unable to recover it. 
00:37:30.111 [2024-09-29 16:45:30.492304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.111 [2024-09-29 16:45:30.492337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.111 qpair failed and we were unable to recover it. 00:37:30.111 [2024-09-29 16:45:30.492453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.111 [2024-09-29 16:45:30.492488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.111 qpair failed and we were unable to recover it. 00:37:30.111 [2024-09-29 16:45:30.492634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.111 [2024-09-29 16:45:30.492668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.112 qpair failed and we were unable to recover it. 00:37:30.112 [2024-09-29 16:45:30.492794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.112 [2024-09-29 16:45:30.492828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.112 qpair failed and we were unable to recover it. 00:37:30.112 [2024-09-29 16:45:30.492936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.112 [2024-09-29 16:45:30.492971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.112 qpair failed and we were unable to recover it. 
00:37:30.112 [2024-09-29 16:45:30.493091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.112 [2024-09-29 16:45:30.493126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.112 qpair failed and we were unable to recover it. 00:37:30.112 [2024-09-29 16:45:30.493263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.112 [2024-09-29 16:45:30.493311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.112 qpair failed and we were unable to recover it. 00:37:30.112 [2024-09-29 16:45:30.493455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.112 [2024-09-29 16:45:30.493490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.112 qpair failed and we were unable to recover it. 00:37:30.112 [2024-09-29 16:45:30.493657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.112 [2024-09-29 16:45:30.493699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.112 qpair failed and we were unable to recover it. 00:37:30.112 [2024-09-29 16:45:30.493816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.112 [2024-09-29 16:45:30.493850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.112 qpair failed and we were unable to recover it. 
00:37:30.112 [2024-09-29 16:45:30.493997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.112 [2024-09-29 16:45:30.494031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.112 qpair failed and we were unable to recover it. 00:37:30.112 [2024-09-29 16:45:30.494166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.112 [2024-09-29 16:45:30.494198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.112 qpair failed and we were unable to recover it. 00:37:30.112 [2024-09-29 16:45:30.494318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.112 [2024-09-29 16:45:30.494353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.112 qpair failed and we were unable to recover it. 00:37:30.112 [2024-09-29 16:45:30.494500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.112 [2024-09-29 16:45:30.494535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.112 qpair failed and we were unable to recover it. 00:37:30.112 [2024-09-29 16:45:30.494696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.112 [2024-09-29 16:45:30.494744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.112 qpair failed and we were unable to recover it. 
00:37:30.112 [2024-09-29 16:45:30.494891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.112 [2024-09-29 16:45:30.494928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.112 qpair failed and we were unable to recover it. 00:37:30.112 [2024-09-29 16:45:30.495080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.112 [2024-09-29 16:45:30.495114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.112 qpair failed and we were unable to recover it. 00:37:30.112 [2024-09-29 16:45:30.495259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.112 [2024-09-29 16:45:30.495293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.112 qpair failed and we were unable to recover it. 00:37:30.112 [2024-09-29 16:45:30.495413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.112 [2024-09-29 16:45:30.495448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.112 qpair failed and we were unable to recover it. 00:37:30.112 [2024-09-29 16:45:30.495568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.112 [2024-09-29 16:45:30.495602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.112 qpair failed and we were unable to recover it. 
00:37:30.112 [2024-09-29 16:45:30.495751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.112 [2024-09-29 16:45:30.495795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.112 qpair failed and we were unable to recover it. 00:37:30.112 [2024-09-29 16:45:30.495935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.112 [2024-09-29 16:45:30.495983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.112 qpair failed and we were unable to recover it. 00:37:30.112 [2024-09-29 16:45:30.496130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.112 [2024-09-29 16:45:30.496165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.112 qpair failed and we were unable to recover it. 00:37:30.112 [2024-09-29 16:45:30.496280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.112 [2024-09-29 16:45:30.496313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.112 qpair failed and we were unable to recover it. 00:37:30.112 [2024-09-29 16:45:30.496433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.112 [2024-09-29 16:45:30.496466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.112 qpair failed and we were unable to recover it. 
00:37:30.112 [2024-09-29 16:45:30.496638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.112 [2024-09-29 16:45:30.496696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.112 qpair failed and we were unable to recover it. 00:37:30.112 [2024-09-29 16:45:30.496835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.112 [2024-09-29 16:45:30.496872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.112 qpair failed and we were unable to recover it. 00:37:30.112 [2024-09-29 16:45:30.497047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.112 [2024-09-29 16:45:30.497094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.112 qpair failed and we were unable to recover it. 00:37:30.112 [2024-09-29 16:45:30.497218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.112 [2024-09-29 16:45:30.497253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.112 qpair failed and we were unable to recover it. 00:37:30.112 [2024-09-29 16:45:30.497420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.112 [2024-09-29 16:45:30.497454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.112 qpair failed and we were unable to recover it. 
00:37:30.112 [2024-09-29 16:45:30.497601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.112 [2024-09-29 16:45:30.497635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.112 qpair failed and we were unable to recover it. 00:37:30.112 [2024-09-29 16:45:30.497776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.112 [2024-09-29 16:45:30.497824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.112 qpair failed and we were unable to recover it. 00:37:30.112 [2024-09-29 16:45:30.497962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.112 [2024-09-29 16:45:30.497999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.112 qpair failed and we were unable to recover it. 00:37:30.112 [2024-09-29 16:45:30.498177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.112 [2024-09-29 16:45:30.498212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.112 qpair failed and we were unable to recover it. 00:37:30.112 [2024-09-29 16:45:30.498351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.112 [2024-09-29 16:45:30.498388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.112 qpair failed and we were unable to recover it. 
00:37:30.112 [2024-09-29 16:45:30.498530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.112 [2024-09-29 16:45:30.498576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.112 qpair failed and we were unable to recover it. 00:37:30.112 [2024-09-29 16:45:30.498735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.113 [2024-09-29 16:45:30.498784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.113 qpair failed and we were unable to recover it. 00:37:30.113 [2024-09-29 16:45:30.498917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.113 [2024-09-29 16:45:30.498953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.113 qpair failed and we were unable to recover it. 00:37:30.113 [2024-09-29 16:45:30.499097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.113 [2024-09-29 16:45:30.499131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.113 qpair failed and we were unable to recover it. 00:37:30.113 [2024-09-29 16:45:30.499297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.113 [2024-09-29 16:45:30.499332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.113 qpair failed and we were unable to recover it. 
00:37:30.113 [2024-09-29 16:45:30.499498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.113 [2024-09-29 16:45:30.499533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.113 qpair failed and we were unable to recover it. 00:37:30.113 [2024-09-29 16:45:30.499688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.113 [2024-09-29 16:45:30.499737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.113 qpair failed and we were unable to recover it. 00:37:30.113 [2024-09-29 16:45:30.499867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.113 [2024-09-29 16:45:30.499903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.113 qpair failed and we were unable to recover it. 00:37:30.113 [2024-09-29 16:45:30.500023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.113 [2024-09-29 16:45:30.500057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.113 qpair failed and we were unable to recover it. 00:37:30.113 [2024-09-29 16:45:30.500175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.113 [2024-09-29 16:45:30.500209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.113 qpair failed and we were unable to recover it. 
00:37:30.113 [2024-09-29 16:45:30.500378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.113 [2024-09-29 16:45:30.500411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.113 qpair failed and we were unable to recover it. 00:37:30.113 [2024-09-29 16:45:30.500527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.113 [2024-09-29 16:45:30.500560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.113 qpair failed and we were unable to recover it. 00:37:30.113 [2024-09-29 16:45:30.500700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.113 [2024-09-29 16:45:30.500744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.113 qpair failed and we were unable to recover it. 00:37:30.113 [2024-09-29 16:45:30.500877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.113 [2024-09-29 16:45:30.500915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.113 qpair failed and we were unable to recover it. 00:37:30.113 [2024-09-29 16:45:30.501041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.113 [2024-09-29 16:45:30.501076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.113 qpair failed and we were unable to recover it. 
00:37:30.113 [2024-09-29 16:45:30.501201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.113 [2024-09-29 16:45:30.501234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.113 qpair failed and we were unable to recover it. 00:37:30.113 [2024-09-29 16:45:30.501358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.113 [2024-09-29 16:45:30.501392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.113 qpair failed and we were unable to recover it. 00:37:30.113 [2024-09-29 16:45:30.501512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.113 [2024-09-29 16:45:30.501546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.113 qpair failed and we were unable to recover it. 00:37:30.113 [2024-09-29 16:45:30.501684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.113 [2024-09-29 16:45:30.501718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.113 qpair failed and we were unable to recover it. 00:37:30.113 [2024-09-29 16:45:30.501830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.113 [2024-09-29 16:45:30.501865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.113 qpair failed and we were unable to recover it. 
00:37:30.113 [2024-09-29 16:45:30.501991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.113 [2024-09-29 16:45:30.502025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.113 qpair failed and we were unable to recover it. 00:37:30.113 [2024-09-29 16:45:30.502162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.113 [2024-09-29 16:45:30.502196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.113 qpair failed and we were unable to recover it. 00:37:30.113 [2024-09-29 16:45:30.502311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.113 [2024-09-29 16:45:30.502344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.113 qpair failed and we were unable to recover it. 00:37:30.113 [2024-09-29 16:45:30.502484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.113 [2024-09-29 16:45:30.502518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.113 qpair failed and we were unable to recover it. 00:37:30.113 [2024-09-29 16:45:30.502710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.113 [2024-09-29 16:45:30.502759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.113 qpair failed and we were unable to recover it. 
00:37:30.113 [2024-09-29 16:45:30.502879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.113 [2024-09-29 16:45:30.502913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.113 qpair failed and we were unable to recover it. 00:37:30.113 [2024-09-29 16:45:30.503077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.113 [2024-09-29 16:45:30.503125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.113 qpair failed and we were unable to recover it. 00:37:30.113 [2024-09-29 16:45:30.503250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.113 [2024-09-29 16:45:30.503287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.113 qpair failed and we were unable to recover it. 00:37:30.113 [2024-09-29 16:45:30.503458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.113 [2024-09-29 16:45:30.503493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.113 qpair failed and we were unable to recover it. 00:37:30.113 [2024-09-29 16:45:30.503606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.113 [2024-09-29 16:45:30.503640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.113 qpair failed and we were unable to recover it. 
00:37:30.113 [2024-09-29 16:45:30.503794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.113 [2024-09-29 16:45:30.503829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.113 qpair failed and we were unable to recover it. 00:37:30.113 [2024-09-29 16:45:30.503972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.113 [2024-09-29 16:45:30.504005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.113 qpair failed and we were unable to recover it. 00:37:30.113 [2024-09-29 16:45:30.504124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.113 [2024-09-29 16:45:30.504158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.113 qpair failed and we were unable to recover it. 00:37:30.113 [2024-09-29 16:45:30.504299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.113 [2024-09-29 16:45:30.504333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.113 qpair failed and we were unable to recover it. 00:37:30.113 [2024-09-29 16:45:30.504489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.113 [2024-09-29 16:45:30.504522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.113 qpair failed and we were unable to recover it. 
00:37:30.113 [2024-09-29 16:45:30.504692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.113 [2024-09-29 16:45:30.504726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.113 qpair failed and we were unable to recover it. 00:37:30.113 [2024-09-29 16:45:30.504841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.113 [2024-09-29 16:45:30.504874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.113 qpair failed and we were unable to recover it. 00:37:30.113 [2024-09-29 16:45:30.505034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.113 [2024-09-29 16:45:30.505083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.113 qpair failed and we were unable to recover it. 00:37:30.113 [2024-09-29 16:45:30.505234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.113 [2024-09-29 16:45:30.505271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.113 qpair failed and we were unable to recover it. 00:37:30.113 [2024-09-29 16:45:30.505405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.113 [2024-09-29 16:45:30.505454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.114 qpair failed and we were unable to recover it. 
00:37:30.115 [2024-09-29 16:45:30.515569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.115 [2024-09-29 16:45:30.515617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.115 qpair failed and we were unable to recover it. 00:37:30.115 [2024-09-29 16:45:30.515791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.115 [2024-09-29 16:45:30.515840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.115 qpair failed and we were unable to recover it. 00:37:30.115 [2024-09-29 16:45:30.515973] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:30.115 [2024-09-29 16:45:30.516006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.115 [2024-09-29 16:45:30.516040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.115 qpair failed and we were unable to recover it. 00:37:30.115 [2024-09-29 16:45:30.516171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.115 [2024-09-29 16:45:30.516204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.115 qpair failed and we were unable to recover it. 00:37:30.115 [2024-09-29 16:45:30.516345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.115 [2024-09-29 16:45:30.516379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.115 qpair failed and we were unable to recover it. 
00:37:30.116 [2024-09-29 16:45:30.525766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.116 [2024-09-29 16:45:30.525803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.116 qpair failed and we were unable to recover it. 00:37:30.116 [2024-09-29 16:45:30.525918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.116 [2024-09-29 16:45:30.525953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.116 qpair failed and we were unable to recover it. 00:37:30.116 [2024-09-29 16:45:30.526096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.116 [2024-09-29 16:45:30.526131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.116 qpair failed and we were unable to recover it. 00:37:30.116 [2024-09-29 16:45:30.526311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.116 [2024-09-29 16:45:30.526345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.116 qpair failed and we were unable to recover it. 00:37:30.116 [2024-09-29 16:45:30.526463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.116 [2024-09-29 16:45:30.526497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.116 qpair failed and we were unable to recover it. 
00:37:30.117 [2024-09-29 16:45:30.526624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.117 [2024-09-29 16:45:30.526668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.117 qpair failed and we were unable to recover it. 00:37:30.117 [2024-09-29 16:45:30.526813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.117 [2024-09-29 16:45:30.526872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.117 qpair failed and we were unable to recover it. 00:37:30.117 [2024-09-29 16:45:30.527045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.117 [2024-09-29 16:45:30.527093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.117 qpair failed and we were unable to recover it. 00:37:30.117 [2024-09-29 16:45:30.527255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.117 [2024-09-29 16:45:30.527291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.117 qpair failed and we were unable to recover it. 00:37:30.117 [2024-09-29 16:45:30.527416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.117 [2024-09-29 16:45:30.527450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.117 qpair failed and we were unable to recover it. 
00:37:30.117 [2024-09-29 16:45:30.527583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.117 [2024-09-29 16:45:30.527617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.117 qpair failed and we were unable to recover it. 00:37:30.117 [2024-09-29 16:45:30.527764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.117 [2024-09-29 16:45:30.527807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.117 qpair failed and we were unable to recover it. 00:37:30.117 [2024-09-29 16:45:30.527948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.117 [2024-09-29 16:45:30.528002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.117 qpair failed and we were unable to recover it. 00:37:30.117 [2024-09-29 16:45:30.528189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.117 [2024-09-29 16:45:30.528225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.117 qpair failed and we were unable to recover it. 00:37:30.117 [2024-09-29 16:45:30.528341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.117 [2024-09-29 16:45:30.528375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.117 qpair failed and we were unable to recover it. 
00:37:30.117 [2024-09-29 16:45:30.528490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.117 [2024-09-29 16:45:30.528524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.117 qpair failed and we were unable to recover it. 00:37:30.117 [2024-09-29 16:45:30.528689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.117 [2024-09-29 16:45:30.528738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.117 qpair failed and we were unable to recover it. 00:37:30.117 [2024-09-29 16:45:30.528883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.117 [2024-09-29 16:45:30.528931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.117 qpair failed and we were unable to recover it. 00:37:30.117 [2024-09-29 16:45:30.529105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.117 [2024-09-29 16:45:30.529142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.117 qpair failed and we were unable to recover it. 00:37:30.117 [2024-09-29 16:45:30.529288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.117 [2024-09-29 16:45:30.529324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.117 qpair failed and we were unable to recover it. 
00:37:30.117 [2024-09-29 16:45:30.529438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.117 [2024-09-29 16:45:30.529472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.117 qpair failed and we were unable to recover it. 00:37:30.117 [2024-09-29 16:45:30.529625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.117 [2024-09-29 16:45:30.529663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.117 qpair failed and we were unable to recover it. 00:37:30.117 [2024-09-29 16:45:30.529838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.117 [2024-09-29 16:45:30.529873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.117 qpair failed and we were unable to recover it. 00:37:30.117 [2024-09-29 16:45:30.530044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.117 [2024-09-29 16:45:30.530077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.117 qpair failed and we were unable to recover it. 00:37:30.117 [2024-09-29 16:45:30.530217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.117 [2024-09-29 16:45:30.530251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.117 qpair failed and we were unable to recover it. 
00:37:30.117 [2024-09-29 16:45:30.530369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.117 [2024-09-29 16:45:30.530402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.117 qpair failed and we were unable to recover it. 00:37:30.117 [2024-09-29 16:45:30.530544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.117 [2024-09-29 16:45:30.530578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.117 qpair failed and we were unable to recover it. 00:37:30.117 [2024-09-29 16:45:30.530729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.117 [2024-09-29 16:45:30.530778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.117 qpair failed and we were unable to recover it. 00:37:30.117 [2024-09-29 16:45:30.530913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.117 [2024-09-29 16:45:30.530966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.117 qpair failed and we were unable to recover it. 00:37:30.117 [2024-09-29 16:45:30.531136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.117 [2024-09-29 16:45:30.531173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.117 qpair failed and we were unable to recover it. 
00:37:30.117 [2024-09-29 16:45:30.531327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.117 [2024-09-29 16:45:30.531361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.117 qpair failed and we were unable to recover it. 00:37:30.117 [2024-09-29 16:45:30.531504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.117 [2024-09-29 16:45:30.531570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.117 qpair failed and we were unable to recover it. 00:37:30.117 [2024-09-29 16:45:30.531744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.117 [2024-09-29 16:45:30.531792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.117 qpair failed and we were unable to recover it. 00:37:30.117 [2024-09-29 16:45:30.531914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.117 [2024-09-29 16:45:30.531948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.117 qpair failed and we were unable to recover it. 00:37:30.117 [2024-09-29 16:45:30.532152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.117 [2024-09-29 16:45:30.532186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.117 qpair failed and we were unable to recover it. 
00:37:30.117 [2024-09-29 16:45:30.532300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.117 [2024-09-29 16:45:30.532340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.117 qpair failed and we were unable to recover it. 00:37:30.117 [2024-09-29 16:45:30.532482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.117 [2024-09-29 16:45:30.532515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.117 qpair failed and we were unable to recover it. 00:37:30.117 [2024-09-29 16:45:30.532663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.117 [2024-09-29 16:45:30.532728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.117 qpair failed and we were unable to recover it. 00:37:30.117 [2024-09-29 16:45:30.532855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.117 [2024-09-29 16:45:30.532892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.117 qpair failed and we were unable to recover it. 00:37:30.118 [2024-09-29 16:45:30.533012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.118 [2024-09-29 16:45:30.533046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.118 qpair failed and we were unable to recover it. 
00:37:30.118 [2024-09-29 16:45:30.533182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.118 [2024-09-29 16:45:30.533216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.118 qpair failed and we were unable to recover it. 00:37:30.118 [2024-09-29 16:45:30.533333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.118 [2024-09-29 16:45:30.533367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.118 qpair failed and we were unable to recover it. 00:37:30.118 [2024-09-29 16:45:30.533486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.118 [2024-09-29 16:45:30.533520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.118 qpair failed and we were unable to recover it. 00:37:30.118 [2024-09-29 16:45:30.533679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.118 [2024-09-29 16:45:30.533716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.118 qpair failed and we were unable to recover it. 00:37:30.118 [2024-09-29 16:45:30.533884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.118 [2024-09-29 16:45:30.533932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.118 qpair failed and we were unable to recover it. 
00:37:30.118 [2024-09-29 16:45:30.534064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.118 [2024-09-29 16:45:30.534100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.118 qpair failed and we were unable to recover it. 00:37:30.118 [2024-09-29 16:45:30.534248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.118 [2024-09-29 16:45:30.534282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.118 qpair failed and we were unable to recover it. 00:37:30.118 [2024-09-29 16:45:30.534428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.118 [2024-09-29 16:45:30.534461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.118 qpair failed and we were unable to recover it. 00:37:30.118 [2024-09-29 16:45:30.534604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.118 [2024-09-29 16:45:30.534639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.118 qpair failed and we were unable to recover it. 00:37:30.118 [2024-09-29 16:45:30.534833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.118 [2024-09-29 16:45:30.534867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.118 qpair failed and we were unable to recover it. 
00:37:30.118 [2024-09-29 16:45:30.535029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.118 [2024-09-29 16:45:30.535076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.118 qpair failed and we were unable to recover it. 00:37:30.118 [2024-09-29 16:45:30.535205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.118 [2024-09-29 16:45:30.535241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.118 qpair failed and we were unable to recover it. 00:37:30.118 [2024-09-29 16:45:30.535397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.118 [2024-09-29 16:45:30.535430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.118 qpair failed and we were unable to recover it. 00:37:30.118 [2024-09-29 16:45:30.535575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.118 [2024-09-29 16:45:30.535609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.118 qpair failed and we were unable to recover it. 00:37:30.118 [2024-09-29 16:45:30.535756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.118 [2024-09-29 16:45:30.535790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.118 qpair failed and we were unable to recover it. 
00:37:30.118 [2024-09-29 16:45:30.535912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.118 [2024-09-29 16:45:30.535947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.118 qpair failed and we were unable to recover it. 00:37:30.118 [2024-09-29 16:45:30.536095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.118 [2024-09-29 16:45:30.536129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.118 qpair failed and we were unable to recover it. 00:37:30.118 [2024-09-29 16:45:30.536248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.118 [2024-09-29 16:45:30.536281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.118 qpair failed and we were unable to recover it. 00:37:30.118 [2024-09-29 16:45:30.536450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.118 [2024-09-29 16:45:30.536484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.118 qpair failed and we were unable to recover it. 00:37:30.118 [2024-09-29 16:45:30.536623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.118 [2024-09-29 16:45:30.536684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.118 qpair failed and we were unable to recover it. 
00:37:30.118 [2024-09-29 16:45:30.536887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.118 [2024-09-29 16:45:30.536935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.118 qpair failed and we were unable to recover it. 00:37:30.118 [2024-09-29 16:45:30.537089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.118 [2024-09-29 16:45:30.537124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.118 qpair failed and we were unable to recover it. 00:37:30.118 [2024-09-29 16:45:30.537269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.118 [2024-09-29 16:45:30.537304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.118 qpair failed and we were unable to recover it. 00:37:30.118 [2024-09-29 16:45:30.537443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.118 [2024-09-29 16:45:30.537477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.118 qpair failed and we were unable to recover it. 00:37:30.118 [2024-09-29 16:45:30.537630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.118 [2024-09-29 16:45:30.537686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.118 qpair failed and we were unable to recover it. 
00:37:30.118 [2024-09-29 16:45:30.537811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.118 [2024-09-29 16:45:30.537847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.118 qpair failed and we were unable to recover it. 00:37:30.118 [2024-09-29 16:45:30.537967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.118 [2024-09-29 16:45:30.538005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.118 qpair failed and we were unable to recover it. 00:37:30.118 [2024-09-29 16:45:30.538157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.118 [2024-09-29 16:45:30.538192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.118 qpair failed and we were unable to recover it. 00:37:30.118 [2024-09-29 16:45:30.538303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.118 [2024-09-29 16:45:30.538337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.118 qpair failed and we were unable to recover it. 00:37:30.118 [2024-09-29 16:45:30.538508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.118 [2024-09-29 16:45:30.538543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.118 qpair failed and we were unable to recover it. 
00:37:30.118 [2024-09-29 16:45:30.538710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.118 [2024-09-29 16:45:30.538745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.118 qpair failed and we were unable to recover it. 00:37:30.118 [2024-09-29 16:45:30.538871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.118 [2024-09-29 16:45:30.538908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.118 qpair failed and we were unable to recover it. 00:37:30.118 [2024-09-29 16:45:30.539082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.118 [2024-09-29 16:45:30.539116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.118 qpair failed and we were unable to recover it. 00:37:30.118 [2024-09-29 16:45:30.539227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.118 [2024-09-29 16:45:30.539261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.118 qpair failed and we were unable to recover it. 00:37:30.118 [2024-09-29 16:45:30.539406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.118 [2024-09-29 16:45:30.539439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.118 qpair failed and we were unable to recover it. 
00:37:30.118 [2024-09-29 16:45:30.539611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.118 [2024-09-29 16:45:30.539664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.118 qpair failed and we were unable to recover it. 00:37:30.118 [2024-09-29 16:45:30.539912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.118 [2024-09-29 16:45:30.539960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.118 qpair failed and we were unable to recover it. 00:37:30.118 [2024-09-29 16:45:30.540085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.119 [2024-09-29 16:45:30.540129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.119 qpair failed and we were unable to recover it. 00:37:30.119 [2024-09-29 16:45:30.540373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.119 [2024-09-29 16:45:30.540408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.119 qpair failed and we were unable to recover it. 00:37:30.119 [2024-09-29 16:45:30.540561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.119 [2024-09-29 16:45:30.540606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.119 qpair failed and we were unable to recover it. 
00:37:30.119 [2024-09-29 16:45:30.540762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.119 [2024-09-29 16:45:30.540796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.119 qpair failed and we were unable to recover it. 00:37:30.119 [2024-09-29 16:45:30.540913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.119 [2024-09-29 16:45:30.540949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.119 qpair failed and we were unable to recover it. 00:37:30.119 [2024-09-29 16:45:30.541100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.119 [2024-09-29 16:45:30.541133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.119 qpair failed and we were unable to recover it. 00:37:30.119 [2024-09-29 16:45:30.541278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.119 [2024-09-29 16:45:30.541313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.119 qpair failed and we were unable to recover it. 00:37:30.119 [2024-09-29 16:45:30.541461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.119 [2024-09-29 16:45:30.541497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.119 qpair failed and we were unable to recover it. 
00:37:30.119 [2024-09-29 16:45:30.541629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.119 [2024-09-29 16:45:30.541687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:30.119 qpair failed and we were unable to recover it.
00:37:30.119 [2024-09-29 16:45:30.541858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.119 [2024-09-29 16:45:30.541905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.119 qpair failed and we were unable to recover it.
00:37:30.119 [2024-09-29 16:45:30.542067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.119 [2024-09-29 16:45:30.542103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.119 qpair failed and we were unable to recover it.
00:37:30.119 [2024-09-29 16:45:30.542243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.119 [2024-09-29 16:45:30.542277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.119 qpair failed and we were unable to recover it.
00:37:30.119 [2024-09-29 16:45:30.542454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.119 [2024-09-29 16:45:30.542488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.119 qpair failed and we were unable to recover it.
00:37:30.119 [2024-09-29 16:45:30.542614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.119 [2024-09-29 16:45:30.542649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.119 qpair failed and we were unable to recover it.
00:37:30.119 [2024-09-29 16:45:30.542804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.119 [2024-09-29 16:45:30.542853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.119 qpair failed and we were unable to recover it.
00:37:30.119 [2024-09-29 16:45:30.542981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.119 [2024-09-29 16:45:30.543016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.119 qpair failed and we were unable to recover it.
00:37:30.119 [2024-09-29 16:45:30.543135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.119 [2024-09-29 16:45:30.543169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.119 qpair failed and we were unable to recover it.
00:37:30.119 [2024-09-29 16:45:30.543338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.119 [2024-09-29 16:45:30.543372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.119 qpair failed and we were unable to recover it.
00:37:30.119 [2024-09-29 16:45:30.543492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.119 [2024-09-29 16:45:30.543525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.119 qpair failed and we were unable to recover it.
00:37:30.119 [2024-09-29 16:45:30.543679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.119 [2024-09-29 16:45:30.543713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.119 qpair failed and we were unable to recover it.
00:37:30.119 [2024-09-29 16:45:30.543823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.119 [2024-09-29 16:45:30.543857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.119 qpair failed and we were unable to recover it.
00:37:30.119 [2024-09-29 16:45:30.544020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.119 [2024-09-29 16:45:30.544068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:30.119 qpair failed and we were unable to recover it.
00:37:30.119 [2024-09-29 16:45:30.544204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.119 [2024-09-29 16:45:30.544240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:30.119 qpair failed and we were unable to recover it.
00:37:30.119 [2024-09-29 16:45:30.544363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.119 [2024-09-29 16:45:30.544410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:30.119 qpair failed and we were unable to recover it.
00:37:30.119 [2024-09-29 16:45:30.544559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.119 [2024-09-29 16:45:30.544593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:30.119 qpair failed and we were unable to recover it.
00:37:30.119 [2024-09-29 16:45:30.544768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.119 [2024-09-29 16:45:30.544817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.119 qpair failed and we were unable to recover it.
00:37:30.119 [2024-09-29 16:45:30.544960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.119 [2024-09-29 16:45:30.545008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.119 qpair failed and we were unable to recover it.
00:37:30.119 [2024-09-29 16:45:30.545162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.119 [2024-09-29 16:45:30.545196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.119 qpair failed and we were unable to recover it.
00:37:30.119 [2024-09-29 16:45:30.545363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.119 [2024-09-29 16:45:30.545396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.119 qpair failed and we were unable to recover it.
00:37:30.119 [2024-09-29 16:45:30.545550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.119 [2024-09-29 16:45:30.545584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.119 qpair failed and we were unable to recover it.
00:37:30.119 [2024-09-29 16:45:30.545730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.119 [2024-09-29 16:45:30.545763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.119 qpair failed and we were unable to recover it.
00:37:30.119 [2024-09-29 16:45:30.545879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.119 [2024-09-29 16:45:30.545915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:30.119 qpair failed and we were unable to recover it.
00:37:30.119 [2024-09-29 16:45:30.546079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.119 [2024-09-29 16:45:30.546116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.119 qpair failed and we were unable to recover it.
00:37:30.119 [2024-09-29 16:45:30.546234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.119 [2024-09-29 16:45:30.546269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.119 qpair failed and we were unable to recover it.
00:37:30.119 [2024-09-29 16:45:30.546437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.119 [2024-09-29 16:45:30.546471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.119 qpair failed and we were unable to recover it.
00:37:30.119 [2024-09-29 16:45:30.546641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.119 [2024-09-29 16:45:30.546691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.119 qpair failed and we were unable to recover it.
00:37:30.119 [2024-09-29 16:45:30.546813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.119 [2024-09-29 16:45:30.546848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.119 qpair failed and we were unable to recover it.
00:37:30.119 [2024-09-29 16:45:30.546997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.119 [2024-09-29 16:45:30.547030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.119 qpair failed and we were unable to recover it.
00:37:30.119 [2024-09-29 16:45:30.547153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.119 [2024-09-29 16:45:30.547192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.120 qpair failed and we were unable to recover it.
00:37:30.120 [2024-09-29 16:45:30.547343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.120 [2024-09-29 16:45:30.547376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.120 qpair failed and we were unable to recover it.
00:37:30.120 [2024-09-29 16:45:30.547520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.120 [2024-09-29 16:45:30.547554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.120 qpair failed and we were unable to recover it.
00:37:30.120 [2024-09-29 16:45:30.547699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.120 [2024-09-29 16:45:30.547734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.120 qpair failed and we were unable to recover it.
00:37:30.120 [2024-09-29 16:45:30.547852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.120 [2024-09-29 16:45:30.547885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.120 qpair failed and we were unable to recover it.
00:37:30.120 [2024-09-29 16:45:30.548020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.120 [2024-09-29 16:45:30.548056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.120 qpair failed and we were unable to recover it.
00:37:30.120 [2024-09-29 16:45:30.548196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.120 [2024-09-29 16:45:30.548232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.120 qpair failed and we were unable to recover it.
00:37:30.120 [2024-09-29 16:45:30.548369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.120 [2024-09-29 16:45:30.548416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.120 qpair failed and we were unable to recover it.
00:37:30.120 [2024-09-29 16:45:30.548567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.120 [2024-09-29 16:45:30.548600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.120 qpair failed and we were unable to recover it.
00:37:30.120 [2024-09-29 16:45:30.548735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.120 [2024-09-29 16:45:30.548769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.120 qpair failed and we were unable to recover it.
00:37:30.120 [2024-09-29 16:45:30.548886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.120 [2024-09-29 16:45:30.548919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.120 qpair failed and we were unable to recover it.
00:37:30.120 [2024-09-29 16:45:30.549062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.120 [2024-09-29 16:45:30.549095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.120 qpair failed and we were unable to recover it.
00:37:30.120 [2024-09-29 16:45:30.549231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.120 [2024-09-29 16:45:30.549265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.120 qpair failed and we were unable to recover it.
00:37:30.120 [2024-09-29 16:45:30.549384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.120 [2024-09-29 16:45:30.549419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.120 qpair failed and we were unable to recover it.
00:37:30.120 [2024-09-29 16:45:30.549576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.120 [2024-09-29 16:45:30.549614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.120 qpair failed and we were unable to recover it.
00:37:30.120 [2024-09-29 16:45:30.549764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.120 [2024-09-29 16:45:30.549800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.120 qpair failed and we were unable to recover it.
00:37:30.120 [2024-09-29 16:45:30.549920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.120 [2024-09-29 16:45:30.549957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.120 qpair failed and we were unable to recover it.
00:37:30.120 [2024-09-29 16:45:30.550078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.120 [2024-09-29 16:45:30.550111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.120 qpair failed and we were unable to recover it.
00:37:30.120 [2024-09-29 16:45:30.550259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.120 [2024-09-29 16:45:30.550292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.120 qpair failed and we were unable to recover it.
00:37:30.120 [2024-09-29 16:45:30.550413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.120 [2024-09-29 16:45:30.550448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.120 qpair failed and we were unable to recover it.
00:37:30.120 [2024-09-29 16:45:30.550594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.120 [2024-09-29 16:45:30.550627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.120 qpair failed and we were unable to recover it.
00:37:30.120 [2024-09-29 16:45:30.550761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.120 [2024-09-29 16:45:30.550795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.120 qpair failed and we were unable to recover it.
00:37:30.120 [2024-09-29 16:45:30.550934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.120 [2024-09-29 16:45:30.550967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.120 qpair failed and we were unable to recover it.
00:37:30.120 [2024-09-29 16:45:30.551136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.120 [2024-09-29 16:45:30.551169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.120 qpair failed and we were unable to recover it.
00:37:30.120 [2024-09-29 16:45:30.551279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.120 [2024-09-29 16:45:30.551313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.120 qpair failed and we were unable to recover it.
00:37:30.120 [2024-09-29 16:45:30.551487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.120 [2024-09-29 16:45:30.551521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.120 qpair failed and we were unable to recover it.
00:37:30.120 [2024-09-29 16:45:30.551689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.120 [2024-09-29 16:45:30.551738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.120 qpair failed and we were unable to recover it.
00:37:30.120 [2024-09-29 16:45:30.551879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.120 [2024-09-29 16:45:30.551928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:30.120 qpair failed and we were unable to recover it.
00:37:30.120 [2024-09-29 16:45:30.552080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.120 [2024-09-29 16:45:30.552115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:30.120 qpair failed and we were unable to recover it.
00:37:30.120 [2024-09-29 16:45:30.552265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.120 [2024-09-29 16:45:30.552299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:30.120 qpair failed and we were unable to recover it.
00:37:30.120 [2024-09-29 16:45:30.552406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.120 [2024-09-29 16:45:30.552441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:30.120 qpair failed and we were unable to recover it.
00:37:30.120 [2024-09-29 16:45:30.552588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.120 [2024-09-29 16:45:30.552623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:30.120 qpair failed and we were unable to recover it.
00:37:30.120 [2024-09-29 16:45:30.552763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.120 [2024-09-29 16:45:30.552811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.120 qpair failed and we were unable to recover it.
00:37:30.120 [2024-09-29 16:45:30.552946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.120 [2024-09-29 16:45:30.552992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.120 qpair failed and we were unable to recover it.
00:37:30.120 [2024-09-29 16:45:30.553135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.120 [2024-09-29 16:45:30.553169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.120 qpair failed and we were unable to recover it.
00:37:30.120 [2024-09-29 16:45:30.553284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.120 [2024-09-29 16:45:30.553320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.120 qpair failed and we were unable to recover it.
00:37:30.120 [2024-09-29 16:45:30.553508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.120 [2024-09-29 16:45:30.553543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.121 qpair failed and we were unable to recover it.
00:37:30.121 [2024-09-29 16:45:30.553717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.121 [2024-09-29 16:45:30.553765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.121 qpair failed and we were unable to recover it.
00:37:30.121 [2024-09-29 16:45:30.553957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.121 [2024-09-29 16:45:30.553992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:30.121 qpair failed and we were unable to recover it.
00:37:30.121 [2024-09-29 16:45:30.554107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.121 [2024-09-29 16:45:30.554141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:30.121 qpair failed and we were unable to recover it.
00:37:30.121 [2024-09-29 16:45:30.554265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.121 [2024-09-29 16:45:30.554306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:30.121 qpair failed and we were unable to recover it.
00:37:30.121 [2024-09-29 16:45:30.554423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.121 [2024-09-29 16:45:30.554457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:30.121 qpair failed and we were unable to recover it.
00:37:30.121 [2024-09-29 16:45:30.554611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.121 [2024-09-29 16:45:30.554667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.121 qpair failed and we were unable to recover it.
00:37:30.121 [2024-09-29 16:45:30.554808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.121 [2024-09-29 16:45:30.554844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.121 qpair failed and we were unable to recover it.
00:37:30.121 [2024-09-29 16:45:30.555001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.121 [2024-09-29 16:45:30.555036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.121 qpair failed and we were unable to recover it.
00:37:30.121 [2024-09-29 16:45:30.555186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.121 [2024-09-29 16:45:30.555221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.121 qpair failed and we were unable to recover it.
00:37:30.121 [2024-09-29 16:45:30.555366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.121 [2024-09-29 16:45:30.555410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.121 qpair failed and we were unable to recover it.
00:37:30.121 [2024-09-29 16:45:30.555568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.121 [2024-09-29 16:45:30.555616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.121 qpair failed and we were unable to recover it.
00:37:30.121 [2024-09-29 16:45:30.555779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.121 [2024-09-29 16:45:30.555827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:30.121 qpair failed and we were unable to recover it.
00:37:30.121 [2024-09-29 16:45:30.555965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.121 [2024-09-29 16:45:30.556030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.121 qpair failed and we were unable to recover it.
00:37:30.121 [2024-09-29 16:45:30.556187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.121 [2024-09-29 16:45:30.556223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.121 qpair failed and we were unable to recover it.
00:37:30.121 [2024-09-29 16:45:30.556341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.121 [2024-09-29 16:45:30.556375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.121 qpair failed and we were unable to recover it.
00:37:30.121 [2024-09-29 16:45:30.556517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.121 [2024-09-29 16:45:30.556552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:30.121 qpair failed and we were unable to recover it.
00:37:30.121 [2024-09-29 16:45:30.556686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.121 [2024-09-29 16:45:30.556723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.121 qpair failed and we were unable to recover it.
00:37:30.121 [2024-09-29 16:45:30.556896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.121 [2024-09-29 16:45:30.556944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.121 qpair failed and we were unable to recover it.
00:37:30.121 [2024-09-29 16:45:30.557103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.121 [2024-09-29 16:45:30.557139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.121 qpair failed and we were unable to recover it.
00:37:30.121 [2024-09-29 16:45:30.557258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.121 [2024-09-29 16:45:30.557292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.121 qpair failed and we were unable to recover it.
00:37:30.121 [2024-09-29 16:45:30.557411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.121 [2024-09-29 16:45:30.557445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.121 qpair failed and we were unable to recover it.
00:37:30.121 [2024-09-29 16:45:30.557576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.121 [2024-09-29 16:45:30.557623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:30.121 qpair failed and we were unable to recover it.
00:37:30.121 [2024-09-29 16:45:30.557771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.121 [2024-09-29 16:45:30.557809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:30.121 qpair failed and we were unable to recover it.
00:37:30.121 [2024-09-29 16:45:30.557970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.121 [2024-09-29 16:45:30.558018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.121 qpair failed and we were unable to recover it.
00:37:30.121 [2024-09-29 16:45:30.558143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.121 [2024-09-29 16:45:30.558180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.121 qpair failed and we were unable to recover it.
00:37:30.121 [2024-09-29 16:45:30.558301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.121 [2024-09-29 16:45:30.558335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.121 qpair failed and we were unable to recover it.
00:37:30.121 [2024-09-29 16:45:30.558487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.121 [2024-09-29 16:45:30.558520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:30.121 qpair failed and we were unable to recover it.
00:37:30.121 [2024-09-29 16:45:30.558648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:30.121 [2024-09-29 16:45:30.558694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:30.121 qpair failed and we were unable to recover it.
00:37:30.121 [2024-09-29 16:45:30.558819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.121 [2024-09-29 16:45:30.558853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.121 qpair failed and we were unable to recover it. 00:37:30.121 [2024-09-29 16:45:30.558965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.121 [2024-09-29 16:45:30.558998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.121 qpair failed and we were unable to recover it. 00:37:30.121 [2024-09-29 16:45:30.559147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.121 [2024-09-29 16:45:30.559180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.121 qpair failed and we were unable to recover it. 00:37:30.121 [2024-09-29 16:45:30.559354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.121 [2024-09-29 16:45:30.559388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.121 qpair failed and we were unable to recover it. 00:37:30.121 [2024-09-29 16:45:30.559495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.121 [2024-09-29 16:45:30.559528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.121 qpair failed and we were unable to recover it. 
00:37:30.121 [2024-09-29 16:45:30.559680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.121 [2024-09-29 16:45:30.559714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.121 qpair failed and we were unable to recover it. 00:37:30.122 [2024-09-29 16:45:30.559839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.122 [2024-09-29 16:45:30.559887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.122 qpair failed and we were unable to recover it. 00:37:30.122 [2024-09-29 16:45:30.560031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.122 [2024-09-29 16:45:30.560067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.122 qpair failed and we were unable to recover it. 00:37:30.122 [2024-09-29 16:45:30.560213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.122 [2024-09-29 16:45:30.560248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.122 qpair failed and we were unable to recover it. 00:37:30.122 [2024-09-29 16:45:30.560391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.122 [2024-09-29 16:45:30.560425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.122 qpair failed and we were unable to recover it. 
00:37:30.122 [2024-09-29 16:45:30.560554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.122 [2024-09-29 16:45:30.560590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.122 qpair failed and we were unable to recover it. 00:37:30.122 [2024-09-29 16:45:30.560738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.122 [2024-09-29 16:45:30.560773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.122 qpair failed and we were unable to recover it. 00:37:30.122 [2024-09-29 16:45:30.560883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.122 [2024-09-29 16:45:30.560924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.122 qpair failed and we were unable to recover it. 00:37:30.122 [2024-09-29 16:45:30.561052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.122 [2024-09-29 16:45:30.561085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.122 qpair failed and we were unable to recover it. 00:37:30.122 [2024-09-29 16:45:30.561252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.122 [2024-09-29 16:45:30.561285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.122 qpair failed and we were unable to recover it. 
00:37:30.122 [2024-09-29 16:45:30.561430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.122 [2024-09-29 16:45:30.561468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.122 qpair failed and we were unable to recover it. 00:37:30.122 [2024-09-29 16:45:30.561592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.122 [2024-09-29 16:45:30.561627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.122 qpair failed and we were unable to recover it. 00:37:30.122 [2024-09-29 16:45:30.561754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.122 [2024-09-29 16:45:30.561788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.122 qpair failed and we were unable to recover it. 00:37:30.122 [2024-09-29 16:45:30.561932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.122 [2024-09-29 16:45:30.561966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.122 qpair failed and we were unable to recover it. 00:37:30.122 [2024-09-29 16:45:30.562108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.122 [2024-09-29 16:45:30.562142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.122 qpair failed and we were unable to recover it. 
00:37:30.122 [2024-09-29 16:45:30.562270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.122 [2024-09-29 16:45:30.562317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.122 qpair failed and we were unable to recover it. 00:37:30.122 [2024-09-29 16:45:30.562431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.122 [2024-09-29 16:45:30.562466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.122 qpair failed and we were unable to recover it. 00:37:30.122 [2024-09-29 16:45:30.562585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.122 [2024-09-29 16:45:30.562619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.122 qpair failed and we were unable to recover it. 00:37:30.122 [2024-09-29 16:45:30.562744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.122 [2024-09-29 16:45:30.562778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.122 qpair failed and we were unable to recover it. 00:37:30.122 [2024-09-29 16:45:30.562894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.122 [2024-09-29 16:45:30.562927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.122 qpair failed and we were unable to recover it. 
00:37:30.122 [2024-09-29 16:45:30.563083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.122 [2024-09-29 16:45:30.563130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.122 qpair failed and we were unable to recover it. 00:37:30.122 [2024-09-29 16:45:30.563257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.122 [2024-09-29 16:45:30.563293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.122 qpair failed and we were unable to recover it. 00:37:30.122 [2024-09-29 16:45:30.563401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.122 [2024-09-29 16:45:30.563435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.122 qpair failed and we were unable to recover it. 00:37:30.122 [2024-09-29 16:45:30.563551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.122 [2024-09-29 16:45:30.563586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.122 qpair failed and we were unable to recover it. 00:37:30.122 [2024-09-29 16:45:30.563721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.122 [2024-09-29 16:45:30.563757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.122 qpair failed and we were unable to recover it. 
00:37:30.122 [2024-09-29 16:45:30.563884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.122 [2024-09-29 16:45:30.563919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.122 qpair failed and we were unable to recover it. 00:37:30.122 [2024-09-29 16:45:30.564074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.122 [2024-09-29 16:45:30.564109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.122 qpair failed and we were unable to recover it. 00:37:30.123 [2024-09-29 16:45:30.564214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.123 [2024-09-29 16:45:30.564249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.123 qpair failed and we were unable to recover it. 00:37:30.123 [2024-09-29 16:45:30.564376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.123 [2024-09-29 16:45:30.564410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.123 qpair failed and we were unable to recover it. 00:37:30.123 [2024-09-29 16:45:30.564584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.123 [2024-09-29 16:45:30.564619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.123 qpair failed and we were unable to recover it. 
00:37:30.123 [2024-09-29 16:45:30.564755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.123 [2024-09-29 16:45:30.564789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.123 qpair failed and we were unable to recover it. 00:37:30.123 [2024-09-29 16:45:30.564933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.123 [2024-09-29 16:45:30.564968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.123 qpair failed and we were unable to recover it. 00:37:30.123 [2024-09-29 16:45:30.565083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.123 [2024-09-29 16:45:30.565117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.123 qpair failed and we were unable to recover it. 00:37:30.123 [2024-09-29 16:45:30.565261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.123 [2024-09-29 16:45:30.565296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.123 qpair failed and we were unable to recover it. 00:37:30.123 [2024-09-29 16:45:30.565438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.123 [2024-09-29 16:45:30.565472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.123 qpair failed and we were unable to recover it. 
00:37:30.123 [2024-09-29 16:45:30.565616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.123 [2024-09-29 16:45:30.565651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.123 qpair failed and we were unable to recover it. 00:37:30.123 [2024-09-29 16:45:30.565791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.123 [2024-09-29 16:45:30.565839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.123 qpair failed and we were unable to recover it. 00:37:30.123 [2024-09-29 16:45:30.565988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.123 [2024-09-29 16:45:30.566024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.123 qpair failed and we were unable to recover it. 00:37:30.123 [2024-09-29 16:45:30.566135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.123 [2024-09-29 16:45:30.566169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.123 qpair failed and we were unable to recover it. 00:37:30.123 [2024-09-29 16:45:30.566281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.123 [2024-09-29 16:45:30.566315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.123 qpair failed and we were unable to recover it. 
00:37:30.123 [2024-09-29 16:45:30.566429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.123 [2024-09-29 16:45:30.566462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.123 qpair failed and we were unable to recover it. 00:37:30.123 [2024-09-29 16:45:30.566600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.123 [2024-09-29 16:45:30.566633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.123 qpair failed and we were unable to recover it. 00:37:30.123 [2024-09-29 16:45:30.566760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.123 [2024-09-29 16:45:30.566794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.123 qpair failed and we were unable to recover it. 00:37:30.123 [2024-09-29 16:45:30.566914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.123 [2024-09-29 16:45:30.566948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.123 qpair failed and we were unable to recover it. 00:37:30.123 [2024-09-29 16:45:30.567066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.123 [2024-09-29 16:45:30.567100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.123 qpair failed and we were unable to recover it. 
00:37:30.123 [2024-09-29 16:45:30.567244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.123 [2024-09-29 16:45:30.567277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.123 qpair failed and we were unable to recover it. 00:37:30.123 [2024-09-29 16:45:30.567444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.123 [2024-09-29 16:45:30.567478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.123 qpair failed and we were unable to recover it. 00:37:30.123 [2024-09-29 16:45:30.567605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.123 [2024-09-29 16:45:30.567653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.123 qpair failed and we were unable to recover it. 00:37:30.123 [2024-09-29 16:45:30.567801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.123 [2024-09-29 16:45:30.567849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.123 qpair failed and we were unable to recover it. 00:37:30.123 [2024-09-29 16:45:30.567973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.123 [2024-09-29 16:45:30.568011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.123 qpair failed and we were unable to recover it. 
00:37:30.123 [2024-09-29 16:45:30.568131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.123 [2024-09-29 16:45:30.568174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.123 qpair failed and we were unable to recover it. 00:37:30.123 [2024-09-29 16:45:30.568302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.123 [2024-09-29 16:45:30.568336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.123 qpair failed and we were unable to recover it. 00:37:30.123 [2024-09-29 16:45:30.568477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.123 [2024-09-29 16:45:30.568510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.123 qpair failed and we were unable to recover it. 00:37:30.123 [2024-09-29 16:45:30.568646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.123 [2024-09-29 16:45:30.568690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.123 qpair failed and we were unable to recover it. 00:37:30.123 [2024-09-29 16:45:30.568843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.123 [2024-09-29 16:45:30.568881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.123 qpair failed and we were unable to recover it. 
00:37:30.123 [2024-09-29 16:45:30.569016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.123 [2024-09-29 16:45:30.569056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.123 qpair failed and we were unable to recover it. 00:37:30.123 [2024-09-29 16:45:30.569208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.123 [2024-09-29 16:45:30.569244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.123 qpair failed and we were unable to recover it. 00:37:30.123 [2024-09-29 16:45:30.569391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.123 [2024-09-29 16:45:30.569425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.123 qpair failed and we were unable to recover it. 00:37:30.123 [2024-09-29 16:45:30.569536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.123 [2024-09-29 16:45:30.569569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.123 qpair failed and we were unable to recover it. 00:37:30.123 [2024-09-29 16:45:30.569757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.123 [2024-09-29 16:45:30.569805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.123 qpair failed and we were unable to recover it. 
00:37:30.123 [2024-09-29 16:45:30.569919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.123 [2024-09-29 16:45:30.569955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.124 qpair failed and we were unable to recover it. 00:37:30.124 [2024-09-29 16:45:30.570103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.124 [2024-09-29 16:45:30.570145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.124 qpair failed and we were unable to recover it. 00:37:30.124 [2024-09-29 16:45:30.570296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.124 [2024-09-29 16:45:30.570329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.124 qpair failed and we were unable to recover it. 00:37:30.124 [2024-09-29 16:45:30.570478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.124 [2024-09-29 16:45:30.570513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.124 qpair failed and we were unable to recover it. 00:37:30.124 [2024-09-29 16:45:30.570660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.124 [2024-09-29 16:45:30.570701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.124 qpair failed and we were unable to recover it. 
00:37:30.124 [2024-09-29 16:45:30.570846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.124 [2024-09-29 16:45:30.570881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.124 qpair failed and we were unable to recover it. 00:37:30.124 [2024-09-29 16:45:30.571002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.124 [2024-09-29 16:45:30.571037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.124 qpair failed and we were unable to recover it. 00:37:30.124 [2024-09-29 16:45:30.571220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.124 [2024-09-29 16:45:30.571268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.124 qpair failed and we were unable to recover it. 00:37:30.124 [2024-09-29 16:45:30.571393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.124 [2024-09-29 16:45:30.571429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.124 qpair failed and we were unable to recover it. 00:37:30.124 [2024-09-29 16:45:30.571550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.124 [2024-09-29 16:45:30.571585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.124 qpair failed and we were unable to recover it. 
00:37:30.124 [2024-09-29 16:45:30.571722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.124 [2024-09-29 16:45:30.571757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.124 qpair failed and we were unable to recover it. 00:37:30.124 [2024-09-29 16:45:30.571907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.124 [2024-09-29 16:45:30.571940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.124 qpair failed and we were unable to recover it. 00:37:30.124 [2024-09-29 16:45:30.572049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.124 [2024-09-29 16:45:30.572083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.124 qpair failed and we were unable to recover it. 00:37:30.124 [2024-09-29 16:45:30.572226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.124 [2024-09-29 16:45:30.572259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.124 qpair failed and we were unable to recover it. 00:37:30.124 [2024-09-29 16:45:30.572393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.124 [2024-09-29 16:45:30.572441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.124 qpair failed and we were unable to recover it. 
00:37:30.124 [2024-09-29 16:45:30.572611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.124 [2024-09-29 16:45:30.572659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.124 qpair failed and we were unable to recover it. 00:37:30.124 [2024-09-29 16:45:30.572830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.124 [2024-09-29 16:45:30.572866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.124 qpair failed and we were unable to recover it. 00:37:30.124 [2024-09-29 16:45:30.573096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.124 [2024-09-29 16:45:30.573130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.124 qpair failed and we were unable to recover it. 00:37:30.124 [2024-09-29 16:45:30.573352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.124 [2024-09-29 16:45:30.573386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.124 qpair failed and we were unable to recover it. 00:37:30.124 [2024-09-29 16:45:30.573537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.124 [2024-09-29 16:45:30.573570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.124 qpair failed and we were unable to recover it. 
00:37:30.124 [2024-09-29 16:45:30.573693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.124 [2024-09-29 16:45:30.573726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.124 qpair failed and we were unable to recover it. 00:37:30.124 [2024-09-29 16:45:30.573867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.124 [2024-09-29 16:45:30.573900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.124 qpair failed and we were unable to recover it. 00:37:30.124 [2024-09-29 16:45:30.574037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.124 [2024-09-29 16:45:30.574071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.124 qpair failed and we were unable to recover it. 00:37:30.124 [2024-09-29 16:45:30.574221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.124 [2024-09-29 16:45:30.574255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.124 qpair failed and we were unable to recover it. 00:37:30.124 [2024-09-29 16:45:30.574381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.124 [2024-09-29 16:45:30.574428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.124 qpair failed and we were unable to recover it. 
00:37:30.124 [2024-09-29 16:45:30.574556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.124 [2024-09-29 16:45:30.574592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.124 qpair failed and we were unable to recover it. 00:37:30.124 [2024-09-29 16:45:30.574721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.124 [2024-09-29 16:45:30.574757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.124 qpair failed and we were unable to recover it. 00:37:30.124 [2024-09-29 16:45:30.574875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.124 [2024-09-29 16:45:30.574909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.124 qpair failed and we were unable to recover it. 00:37:30.124 [2024-09-29 16:45:30.575054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.124 [2024-09-29 16:45:30.575087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.124 qpair failed and we were unable to recover it. 00:37:30.124 [2024-09-29 16:45:30.575227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.124 [2024-09-29 16:45:30.575260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.124 qpair failed and we were unable to recover it. 
00:37:30.124 [2024-09-29 16:45:30.575408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.124 [2024-09-29 16:45:30.575448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.124 qpair failed and we were unable to recover it. 00:37:30.124 [2024-09-29 16:45:30.575592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.124 [2024-09-29 16:45:30.575625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.124 qpair failed and we were unable to recover it. 00:37:30.124 [2024-09-29 16:45:30.575772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.124 [2024-09-29 16:45:30.575821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.124 qpair failed and we were unable to recover it. 00:37:30.124 [2024-09-29 16:45:30.575942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.124 [2024-09-29 16:45:30.575977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.124 qpair failed and we were unable to recover it. 00:37:30.124 [2024-09-29 16:45:30.576124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.125 [2024-09-29 16:45:30.576157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.125 qpair failed and we were unable to recover it. 
00:37:30.125 [2024-09-29 16:45:30.576294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.125 [2024-09-29 16:45:30.576327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.125 qpair failed and we were unable to recover it. 00:37:30.125 [2024-09-29 16:45:30.576498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.125 [2024-09-29 16:45:30.576532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.125 qpair failed and we were unable to recover it. 00:37:30.125 [2024-09-29 16:45:30.576659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.125 [2024-09-29 16:45:30.576713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.125 qpair failed and we were unable to recover it. 00:37:30.125 [2024-09-29 16:45:30.576833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.125 [2024-09-29 16:45:30.576868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.125 qpair failed and we were unable to recover it. 00:37:30.125 [2024-09-29 16:45:30.577001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.125 [2024-09-29 16:45:30.577049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.125 qpair failed and we were unable to recover it. 
00:37:30.125 [2024-09-29 16:45:30.577164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.125 [2024-09-29 16:45:30.577199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.125 qpair failed and we were unable to recover it. 00:37:30.125 [2024-09-29 16:45:30.577338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.125 [2024-09-29 16:45:30.577372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.125 qpair failed and we were unable to recover it. 00:37:30.125 [2024-09-29 16:45:30.577488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.125 [2024-09-29 16:45:30.577521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.125 qpair failed and we were unable to recover it. 00:37:30.125 [2024-09-29 16:45:30.577665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.125 [2024-09-29 16:45:30.577706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.125 qpair failed and we were unable to recover it. 00:37:30.125 [2024-09-29 16:45:30.577840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.125 [2024-09-29 16:45:30.577875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.125 qpair failed and we were unable to recover it. 
00:37:30.125 [2024-09-29 16:45:30.578015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.125 [2024-09-29 16:45:30.578048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.125 qpair failed and we were unable to recover it. 00:37:30.125 [2024-09-29 16:45:30.578218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.125 [2024-09-29 16:45:30.578252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.125 qpair failed and we were unable to recover it. 00:37:30.125 [2024-09-29 16:45:30.578364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.125 [2024-09-29 16:45:30.578398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.125 qpair failed and we were unable to recover it. 00:37:30.125 [2024-09-29 16:45:30.578544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.125 [2024-09-29 16:45:30.578579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.125 qpair failed and we were unable to recover it. 00:37:30.125 [2024-09-29 16:45:30.578711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.125 [2024-09-29 16:45:30.578759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.125 qpair failed and we were unable to recover it. 
00:37:30.125 [2024-09-29 16:45:30.578889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.125 [2024-09-29 16:45:30.578927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.125 qpair failed and we were unable to recover it. 00:37:30.125 [2024-09-29 16:45:30.579074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.125 [2024-09-29 16:45:30.579109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.125 qpair failed and we were unable to recover it. 00:37:30.125 [2024-09-29 16:45:30.579248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.125 [2024-09-29 16:45:30.579282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.125 qpair failed and we were unable to recover it. 00:37:30.125 [2024-09-29 16:45:30.579426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.125 [2024-09-29 16:45:30.579460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.125 qpair failed and we were unable to recover it. 00:37:30.125 [2024-09-29 16:45:30.579615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.125 [2024-09-29 16:45:30.579649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.125 qpair failed and we were unable to recover it. 
00:37:30.125 [2024-09-29 16:45:30.579782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.125 [2024-09-29 16:45:30.579829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.125 qpair failed and we were unable to recover it. 00:37:30.125 [2024-09-29 16:45:30.579949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.125 [2024-09-29 16:45:30.579984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.125 qpair failed and we were unable to recover it. 00:37:30.125 [2024-09-29 16:45:30.580130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.125 [2024-09-29 16:45:30.580164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.125 qpair failed and we were unable to recover it. 00:37:30.125 [2024-09-29 16:45:30.580274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.125 [2024-09-29 16:45:30.580308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:30.125 qpair failed and we were unable to recover it. 00:37:30.125 [2024-09-29 16:45:30.580491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.125 [2024-09-29 16:45:30.580530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.125 qpair failed and we were unable to recover it. 
00:37:30.125 [2024-09-29 16:45:30.580651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.125 [2024-09-29 16:45:30.580726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.125 qpair failed and we were unable to recover it. 00:37:30.125 [2024-09-29 16:45:30.580846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.125 [2024-09-29 16:45:30.580880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.125 qpair failed and we were unable to recover it. 00:37:30.125 [2024-09-29 16:45:30.580992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.125 [2024-09-29 16:45:30.581026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.125 qpair failed and we were unable to recover it. 00:37:30.125 [2024-09-29 16:45:30.581144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.125 [2024-09-29 16:45:30.581178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.125 qpair failed and we were unable to recover it. 00:37:30.125 [2024-09-29 16:45:30.581321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.125 [2024-09-29 16:45:30.581355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.125 qpair failed and we were unable to recover it. 
00:37:30.125 [2024-09-29 16:45:30.581521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.125 [2024-09-29 16:45:30.581569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.125 qpair failed and we were unable to recover it. 00:37:30.125 [2024-09-29 16:45:30.581726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.125 [2024-09-29 16:45:30.581774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.125 qpair failed and we were unable to recover it. 00:37:30.125 [2024-09-29 16:45:30.581947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.125 [2024-09-29 16:45:30.581984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.125 qpair failed and we were unable to recover it. 00:37:30.125 [2024-09-29 16:45:30.582101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.125 [2024-09-29 16:45:30.582136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.125 qpair failed and we were unable to recover it. 00:37:30.125 [2024-09-29 16:45:30.582281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.126 [2024-09-29 16:45:30.582315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.126 qpair failed and we were unable to recover it. 
00:37:30.126 [2024-09-29 16:45:30.582433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.126 [2024-09-29 16:45:30.582467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.126 qpair failed and we were unable to recover it. 00:37:30.126 [2024-09-29 16:45:30.582604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.126 [2024-09-29 16:45:30.582638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.126 qpair failed and we were unable to recover it. 00:37:30.126 [2024-09-29 16:45:30.582789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.126 [2024-09-29 16:45:30.582824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.126 qpair failed and we were unable to recover it. 00:37:30.126 [2024-09-29 16:45:30.582943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.126 [2024-09-29 16:45:30.582979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.126 qpair failed and we were unable to recover it. 00:37:30.126 [2024-09-29 16:45:30.583101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.126 [2024-09-29 16:45:30.583135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.126 qpair failed and we were unable to recover it. 
00:37:30.126 [2024-09-29 16:45:30.583256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.126 [2024-09-29 16:45:30.583290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.126 qpair failed and we were unable to recover it. 00:37:30.126 [2024-09-29 16:45:30.583435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.126 [2024-09-29 16:45:30.583470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.126 qpair failed and we were unable to recover it. 00:37:30.126 [2024-09-29 16:45:30.583584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.126 [2024-09-29 16:45:30.583618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.126 qpair failed and we were unable to recover it. 00:37:30.126 [2024-09-29 16:45:30.583783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.126 [2024-09-29 16:45:30.583824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.126 qpair failed and we were unable to recover it. 00:37:30.126 [2024-09-29 16:45:30.583938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.126 [2024-09-29 16:45:30.583974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.126 qpair failed and we were unable to recover it. 00:37:30.126 A controller has encountered a failure and is being reset. 
00:37:30.126 [2024-09-29 16:45:30.584131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.126 [2024-09-29 16:45:30.584182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:30.126 qpair failed and we were unable to recover it. 00:37:30.126 [2024-09-29 16:45:30.584346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.126 [2024-09-29 16:45:30.584381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.126 qpair failed and we were unable to recover it. 00:37:30.126 [2024-09-29 16:45:30.584488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.126 [2024-09-29 16:45:30.584522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.126 qpair failed and we were unable to recover it. 00:37:30.126 [2024-09-29 16:45:30.584642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.126 [2024-09-29 16:45:30.584685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.126 qpair failed and we were unable to recover it. 00:37:30.126 [2024-09-29 16:45:30.584832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.126 [2024-09-29 16:45:30.584866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.126 qpair failed and we were unable to recover it. 
00:37:30.126 [2024-09-29 16:45:30.584995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.126 [2024-09-29 16:45:30.585030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.126 qpair failed and we were unable to recover it. 00:37:30.126 [2024-09-29 16:45:30.585147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.126 [2024-09-29 16:45:30.585193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.126 qpair failed and we were unable to recover it. 00:37:30.126 [2024-09-29 16:45:30.585349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.126 [2024-09-29 16:45:30.585389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.126 qpair failed and we were unable to recover it. 00:37:30.126 [2024-09-29 16:45:30.585512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.126 [2024-09-29 16:45:30.585547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.126 qpair failed and we were unable to recover it. 00:37:30.126 [2024-09-29 16:45:30.585679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.126 [2024-09-29 16:45:30.585716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.126 qpair failed and we were unable to recover it. 
00:37:30.126 [2024-09-29 16:45:30.585843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.126 [2024-09-29 16:45:30.585877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.126 qpair failed and we were unable to recover it. 00:37:30.126 [2024-09-29 16:45:30.585992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.126 [2024-09-29 16:45:30.586027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.126 qpair failed and we were unable to recover it. 00:37:30.126 [2024-09-29 16:45:30.586167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.126 [2024-09-29 16:45:30.586201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.126 qpair failed and we were unable to recover it. 00:37:30.126 [2024-09-29 16:45:30.586323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.126 [2024-09-29 16:45:30.586358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.126 qpair failed and we were unable to recover it. 00:37:30.126 [2024-09-29 16:45:30.586474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.126 [2024-09-29 16:45:30.586507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:30.126 qpair failed and we were unable to recover it. 
00:37:30.126 [2024-09-29 16:45:30.586627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.126 [2024-09-29 16:45:30.586662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.126 qpair failed and we were unable to recover it. 00:37:30.126 [2024-09-29 16:45:30.586794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.126 [2024-09-29 16:45:30.586829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.126 qpair failed and we were unable to recover it. 00:37:30.126 [2024-09-29 16:45:30.586944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.126 [2024-09-29 16:45:30.586978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.126 qpair failed and we were unable to recover it. 00:37:30.126 [2024-09-29 16:45:30.587116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.126 [2024-09-29 16:45:30.587151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.127 qpair failed and we were unable to recover it. 00:37:30.127 [2024-09-29 16:45:30.587264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.127 [2024-09-29 16:45:30.587298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:30.127 qpair failed and we were unable to recover it. 
00:37:30.127 [2024-09-29 16:45:30.587547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:30.127 [2024-09-29 16:45:30.587601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:30.127 [2024-09-29 16:45:30.587637] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2780 is same with the state(6) to be set 00:37:30.127 [2024-09-29 16:45:30.587689] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor 00:37:30.127 [2024-09-29 16:45:30.587727] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:30.127 [2024-09-29 16:45:30.587755] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:30.127 [2024-09-29 16:45:30.587781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:30.127 Unable to reset the controller. 00:37:30.385 [2024-09-29 16:45:30.762070] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:30.385 [2024-09-29 16:45:30.762162] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:30.385 [2024-09-29 16:45:30.762186] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:30.385 [2024-09-29 16:45:30.762209] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:30.385 [2024-09-29 16:45:30.762226] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:30.385 [2024-09-29 16:45:30.762337] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:37:30.385 [2024-09-29 16:45:30.762392] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:37:30.385 [2024-09-29 16:45:30.762437] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:37:30.386 [2024-09-29 16:45:30.762444] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:37:30.952 16:45:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:30.952 16:45:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:37:30.952 16:45:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:37:30.952 16:45:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:30.952 16:45:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:30.952 16:45:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:30.952 16:45:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:30.952 16:45:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.952 16:45:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:30.952 Malloc0 00:37:30.952 16:45:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.952 16:45:31 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:37:30.952 16:45:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.952 16:45:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:30.952 [2024-09-29 16:45:31.401820] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:30.952 16:45:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.952 16:45:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:30.952 16:45:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.952 16:45:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:30.952 16:45:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.952 16:45:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:30.952 16:45:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.952 16:45:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:30.952 16:45:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.952 16:45:31 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:30.952 16:45:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.952 16:45:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:30.952 [2024-09-29 16:45:31.431922] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:30.952 16:45:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.953 16:45:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:30.953 16:45:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.953 16:45:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:30.953 16:45:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.953 16:45:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3330009 00:37:31.211 Controller properly reset. 
00:37:36.475 Initializing NVMe Controllers 00:37:36.475 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:36.475 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:36.475 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:37:36.475 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:37:36.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:37:36.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:37:36.476 Initialization complete. Launching workers. 00:37:36.476 Starting thread on core 1 00:37:36.476 Starting thread on core 2 00:37:36.476 Starting thread on core 3 00:37:36.476 Starting thread on core 0 00:37:36.476 16:45:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:37:36.476 00:37:36.476 real 0m11.577s 00:37:36.476 user 0m36.236s 00:37:36.476 sys 0m7.521s 00:37:36.476 16:45:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:36.476 16:45:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:36.476 ************************************ 00:37:36.476 END TEST nvmf_target_disconnect_tc2 00:37:36.476 ************************************ 00:37:36.476 16:45:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:37:36.476 16:45:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:37:36.476 16:45:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:37:36.476 16:45:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@512 -- # nvmfcleanup 00:37:36.476 16:45:36 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:37:36.476 16:45:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:36.476 16:45:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:37:36.476 16:45:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:36.476 16:45:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:36.476 rmmod nvme_tcp 00:37:36.476 rmmod nvme_fabrics 00:37:36.476 rmmod nvme_keyring 00:37:36.476 16:45:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:36.476 16:45:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:37:36.476 16:45:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:37:36.476 16:45:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@513 -- # '[' -n 3330422 ']' 00:37:36.476 16:45:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # killprocess 3330422 00:37:36.476 16:45:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 3330422 ']' 00:37:36.476 16:45:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 3330422 00:37:36.476 16:45:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:37:36.476 16:45:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:36.476 16:45:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3330422 00:37:36.476 16:45:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:37:36.476 16:45:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 
00:37:36.476 16:45:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3330422' 00:37:36.476 killing process with pid 3330422 00:37:36.476 16:45:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 3330422 00:37:36.476 16:45:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 3330422 00:37:37.852 16:45:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:37:37.852 16:45:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:37:37.852 16:45:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:37:37.852 16:45:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:37:37.852 16:45:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@787 -- # iptables-save 00:37:37.852 16:45:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:37:37.852 16:45:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@787 -- # iptables-restore 00:37:37.852 16:45:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:37.852 16:45:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:37.852 16:45:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:37.852 16:45:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:37.852 16:45:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:39.756 16:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:39.756 00:37:39.756 real 0m17.769s 00:37:39.756 user 1m3.999s 00:37:39.756 
sys 0m10.354s 00:37:39.756 16:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:39.756 16:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:39.756 ************************************ 00:37:39.756 END TEST nvmf_target_disconnect 00:37:39.756 ************************************ 00:37:39.756 16:45:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:37:39.756 00:37:39.756 real 7m43.185s 00:37:39.756 user 19m55.955s 00:37:39.756 sys 1m33.039s 00:37:39.756 16:45:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:39.756 16:45:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:39.756 ************************************ 00:37:39.756 END TEST nvmf_host 00:37:39.756 ************************************ 00:37:39.757 16:45:40 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:37:39.757 16:45:40 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:37:39.757 16:45:40 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:37:39.757 16:45:40 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:39.757 16:45:40 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:39.757 16:45:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:39.757 ************************************ 00:37:39.757 START TEST nvmf_target_core_interrupt_mode 00:37:39.757 ************************************ 00:37:39.757 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:37:39.757 * Looking for test storage... 
00:37:39.757 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:37:39.757 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:39.757 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lcov --version 00:37:39.757 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:37:40.017 16:45:40 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:40.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:40.017 --rc 
genhtml_branch_coverage=1 00:37:40.017 --rc genhtml_function_coverage=1 00:37:40.017 --rc genhtml_legend=1 00:37:40.017 --rc geninfo_all_blocks=1 00:37:40.017 --rc geninfo_unexecuted_blocks=1 00:37:40.017 00:37:40.017 ' 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:40.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:40.017 --rc genhtml_branch_coverage=1 00:37:40.017 --rc genhtml_function_coverage=1 00:37:40.017 --rc genhtml_legend=1 00:37:40.017 --rc geninfo_all_blocks=1 00:37:40.017 --rc geninfo_unexecuted_blocks=1 00:37:40.017 00:37:40.017 ' 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:40.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:40.017 --rc genhtml_branch_coverage=1 00:37:40.017 --rc genhtml_function_coverage=1 00:37:40.017 --rc genhtml_legend=1 00:37:40.017 --rc geninfo_all_blocks=1 00:37:40.017 --rc geninfo_unexecuted_blocks=1 00:37:40.017 00:37:40.017 ' 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:40.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:40.017 --rc genhtml_branch_coverage=1 00:37:40.017 --rc genhtml_function_coverage=1 00:37:40.017 --rc genhtml_legend=1 00:37:40.017 --rc geninfo_all_blocks=1 00:37:40.017 --rc geninfo_unexecuted_blocks=1 00:37:40.017 00:37:40.017 ' 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:40.017 
16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:40.017 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:40.018 16:45:40 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:40.018 
16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:40.018 ************************************ 00:37:40.018 START TEST nvmf_abort 00:37:40.018 ************************************ 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:37:40.018 * Looking for test storage... 
00:37:40.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:37:40.018 16:45:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:40.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:40.018 --rc genhtml_branch_coverage=1 00:37:40.018 --rc genhtml_function_coverage=1 00:37:40.018 --rc genhtml_legend=1 00:37:40.018 --rc geninfo_all_blocks=1 00:37:40.018 --rc geninfo_unexecuted_blocks=1 00:37:40.018 00:37:40.018 ' 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:40.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:40.018 --rc genhtml_branch_coverage=1 00:37:40.018 --rc genhtml_function_coverage=1 00:37:40.018 --rc genhtml_legend=1 00:37:40.018 --rc geninfo_all_blocks=1 00:37:40.018 --rc geninfo_unexecuted_blocks=1 00:37:40.018 00:37:40.018 ' 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:40.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:40.018 --rc genhtml_branch_coverage=1 00:37:40.018 --rc genhtml_function_coverage=1 00:37:40.018 --rc genhtml_legend=1 00:37:40.018 --rc geninfo_all_blocks=1 00:37:40.018 --rc geninfo_unexecuted_blocks=1 00:37:40.018 00:37:40.018 ' 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:40.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:40.018 --rc genhtml_branch_coverage=1 00:37:40.018 --rc genhtml_function_coverage=1 00:37:40.018 --rc genhtml_legend=1 00:37:40.018 --rc geninfo_all_blocks=1 00:37:40.018 --rc geninfo_unexecuted_blocks=1 00:37:40.018 00:37:40.018 ' 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:40.018 16:45:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:40.018 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:37:40.019 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:40.019 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:40.019 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:40.019 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:40.019 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:40.019 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:40.019 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:37:40.019 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:40.019 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:37:40.019 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:40.019 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:40.019 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:40.019 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:40.019 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:40.019 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:40.019 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:40.019 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:40.019 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:40.019 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:40.019 16:45:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:40.019 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:37:40.019 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:37:40.019 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:37:40.019 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:40.019 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # prepare_net_devs 00:37:40.019 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@434 -- # local -g is_hw=no 00:37:40.019 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # remove_spdk_ns 00:37:40.019 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:40.019 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:40.019 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:40.019 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:37:40.019 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:37:40.019 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:37:40.019 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:41.919 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:37:41.919 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:37:41.919 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:41.919 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:41.919 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:41.919 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:41.919 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:41.919 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:37:41.920 16:45:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:41.920 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:41.920 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:37:41.920 
16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:41.920 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:41.920 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # is_hw=yes 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:41.920 16:45:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:41.920 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:42.179 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:42.179 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:42.179 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:42.179 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:42.179 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- 
# ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:42.179 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:42.179 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:42.179 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:42.179 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:42.179 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:37:42.179 00:37:42.179 --- 10.0.0.2 ping statistics --- 00:37:42.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:42.179 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:37:42.179 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:42.179 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:42.179 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:37:42.179 00:37:42.179 --- 10.0.0.1 ping statistics --- 00:37:42.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:42.179 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:37:42.179 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:42.179 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # return 0 00:37:42.179 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:37:42.179 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:42.179 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:37:42.179 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:37:42.179 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:42.179 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:37:42.179 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:37:42.179 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:37:42.179 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:37:42.179 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:42.179 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:42.179 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # 
nvmfpid=3333359 00:37:42.179 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:37:42.179 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # waitforlisten 3333359 00:37:42.179 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 3333359 ']' 00:37:42.179 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:42.179 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:42.179 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:42.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:42.179 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:42.179 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:42.179 [2024-09-29 16:45:42.699592] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:42.179 [2024-09-29 16:45:42.702221] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:37:42.179 [2024-09-29 16:45:42.702317] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:42.437 [2024-09-29 16:45:42.843489] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:42.694 [2024-09-29 16:45:43.105840] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:42.694 [2024-09-29 16:45:43.105933] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:42.694 [2024-09-29 16:45:43.105964] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:42.694 [2024-09-29 16:45:43.105988] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:42.694 [2024-09-29 16:45:43.106010] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:42.694 [2024-09-29 16:45:43.106165] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:37:42.694 [2024-09-29 16:45:43.106255] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:42.694 [2024-09-29 16:45:43.106276] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:37:42.952 [2024-09-29 16:45:43.487801] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:42.952 [2024-09-29 16:45:43.488506] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:42.952 [2024-09-29 16:45:43.488860] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:37:42.952 [2024-09-29 16:45:43.490010] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:43.210 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:43.210 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:37:43.210 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:37:43.210 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:43.210 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:43.210 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:43.210 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:37:43.210 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:43.210 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:43.210 [2024-09-29 16:45:43.747324] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:43.210 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:43.210 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:37:43.210 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:43.210 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:37:43.468 Malloc0 00:37:43.468 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:43.468 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:43.468 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:43.468 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:43.468 Delay0 00:37:43.468 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:43.468 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:43.468 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:43.468 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:43.468 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:43.468 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:37:43.468 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:43.468 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:43.468 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:43.468 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:37:43.468 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:43.468 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:43.468 [2024-09-29 16:45:43.863506] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:43.468 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:43.468 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:43.468 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:43.469 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:43.469 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:43.469 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:37:43.726 [2024-09-29 16:45:44.050842] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:37:45.622 Initializing NVMe Controllers 00:37:45.623 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:37:45.623 controller IO queue size 128 less than required 00:37:45.623 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:37:45.623 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:37:45.623 Initialization complete. Launching workers. 
00:37:45.623 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 23174 00:37:45.623 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 23231, failed to submit 66 00:37:45.623 success 23174, unsuccessful 57, failed 0 00:37:45.623 16:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:45.623 16:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:45.623 16:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:45.623 16:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:45.623 16:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:37:45.623 16:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:37:45.623 16:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # nvmfcleanup 00:37:45.623 16:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:37:45.623 16:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:45.623 16:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:37:45.623 16:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:45.623 16:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:45.623 rmmod nvme_tcp 00:37:45.623 rmmod nvme_fabrics 00:37:45.623 rmmod nvme_keyring 00:37:45.623 16:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:45.623 16:45:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:37:45.623 16:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:37:45.623 16:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@513 -- # '[' -n 3333359 ']' 00:37:45.623 16:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # killprocess 3333359 00:37:45.623 16:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 3333359 ']' 00:37:45.623 16:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 3333359 00:37:45.623 16:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:37:45.623 16:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:45.623 16:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3333359 00:37:45.881 16:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:45.881 16:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:45.881 16:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3333359' 00:37:45.881 killing process with pid 3333359 00:37:45.881 16:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 3333359 00:37:45.881 16:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 3333359 00:37:47.255 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:37:47.255 16:45:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:37:47.255 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:37:47.255 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:37:47.255 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@787 -- # iptables-save 00:37:47.255 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:37:47.255 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@787 -- # iptables-restore 00:37:47.255 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:47.255 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:47.255 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:47.255 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:47.255 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:49.799 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:49.799 00:37:49.799 real 0m9.388s 00:37:49.799 user 0m11.843s 00:37:49.799 sys 0m3.035s 00:37:49.799 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:49.799 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:49.799 ************************************ 00:37:49.799 END TEST nvmf_abort 00:37:49.799 ************************************ 00:37:49.799 16:45:49 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:37:49.799 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:49.800 ************************************ 00:37:49.800 START TEST nvmf_ns_hotplug_stress 00:37:49.800 ************************************ 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:37:49.800 * Looking for test storage... 
00:37:49.800 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:37:49.800 16:45:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:37:49.800 16:45:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:49.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:49.800 --rc genhtml_branch_coverage=1 00:37:49.800 --rc genhtml_function_coverage=1 00:37:49.800 --rc genhtml_legend=1 00:37:49.800 --rc geninfo_all_blocks=1 00:37:49.800 --rc geninfo_unexecuted_blocks=1 00:37:49.800 00:37:49.800 ' 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:49.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:49.800 --rc genhtml_branch_coverage=1 00:37:49.800 --rc genhtml_function_coverage=1 00:37:49.800 --rc genhtml_legend=1 00:37:49.800 --rc geninfo_all_blocks=1 00:37:49.800 --rc geninfo_unexecuted_blocks=1 00:37:49.800 00:37:49.800 ' 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:49.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:49.800 --rc genhtml_branch_coverage=1 00:37:49.800 --rc genhtml_function_coverage=1 00:37:49.800 --rc genhtml_legend=1 00:37:49.800 --rc geninfo_all_blocks=1 00:37:49.800 --rc geninfo_unexecuted_blocks=1 00:37:49.800 00:37:49.800 ' 00:37:49.800 16:45:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:49.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:49.800 --rc genhtml_branch_coverage=1 00:37:49.800 --rc genhtml_function_coverage=1 00:37:49.800 --rc genhtml_legend=1 00:37:49.800 --rc geninfo_all_blocks=1 00:37:49.800 --rc geninfo_unexecuted_blocks=1 00:37:49.800 00:37:49.800 ' 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:49.800 16:45:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:49.800 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:49.801 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:49.801 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:49.801 
16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:37:49.801 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:49.801 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:37:49.801 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:49.801 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:49.801 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:49.801 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:49.801 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:49.801 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:49.801 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:49.801 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:49.801 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:49.801 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:49.801 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:49.801 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:37:49.801 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:37:49.801 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:49.801 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:37:49.801 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:37:49.801 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:37:49.801 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:49.801 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:49.801 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:49.801 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:37:49.801 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # 
gather_supported_nvmf_pci_devs 00:37:49.801 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:37:49.801 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:51.704 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:51.704 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:37:51.704 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:51.704 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:51.704 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:51.704 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:51.704 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:51.704 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:37:51.704 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:51.704 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:37:51.704 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:37:51.704 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:37:51.704 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:37:51.704 
16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:37:51.704 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:37:51.704 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:51.704 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:51.704 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:51.704 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:51.704 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:51.704 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:51.704 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:51.704 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:51.704 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:51.704 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:51.704 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:51.704 16:45:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:37:51.704 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:37:51.704 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:37:51.704 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:37:51.704 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:37:51.704 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:37:51.704 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:51.704 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:51.704 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:51.704 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:51.704 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:51.704 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:51.704 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:51.704 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:51.704 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:51.705 16:45:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:51.705 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:51.705 16:45:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:51.705 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:51.705 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 
00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # is_hw=yes 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:51.705 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:51.705 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:51.705 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:51.705 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p 
tcp --dport 4420 -j ACCEPT' 00:37:51.705 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:51.705 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:51.705 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:37:51.705 00:37:51.705 --- 10.0.0.2 ping statistics --- 00:37:51.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:51.705 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:37:51.705 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:51.705 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:51.705 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:37:51.705 00:37:51.705 --- 10.0.0.1 ping statistics --- 00:37:51.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:51.705 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:37:51.705 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:51.705 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # return 0 00:37:51.705 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:37:51.705 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:51.705 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:37:51.705 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:37:51.705 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:51.705 16:45:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:37:51.705 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:37:51.705 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:37:51.705 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:37:51.705 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:51.705 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:51.705 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # nvmfpid=3335863 00:37:51.705 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:37:51.705 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # waitforlisten 3335863 00:37:51.705 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 3335863 ']' 00:37:51.705 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:51.705 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:51.705 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:51.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:51.705 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:51.705 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:51.705 [2024-09-29 16:45:52.162128] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:51.705 [2024-09-29 16:45:52.165176] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:37:51.705 [2024-09-29 16:45:52.165286] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:51.964 [2024-09-29 16:45:52.316762] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:52.223 [2024-09-29 16:45:52.573835] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:52.223 [2024-09-29 16:45:52.573916] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:52.223 [2024-09-29 16:45:52.573948] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:52.223 [2024-09-29 16:45:52.573970] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:52.223 [2024-09-29 16:45:52.573993] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:52.223 [2024-09-29 16:45:52.574141] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:37:52.223 [2024-09-29 16:45:52.574227] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:52.223 [2024-09-29 16:45:52.574237] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:37:52.481 [2024-09-29 16:45:52.950833] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:52.481 [2024-09-29 16:45:52.951912] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:52.481 [2024-09-29 16:45:52.952719] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:52.481 [2024-09-29 16:45:52.953058] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:52.739 16:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:52.739 16:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:37:52.739 16:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:37:52.739 16:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:52.739 16:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:52.739 16:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:52.739 16:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:37:52.739 16:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:52.998 [2024-09-29 16:45:53.455349] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:52.998 16:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:53.256 16:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:53.514 [2024-09-29 16:45:54.011782] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:53.514 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:53.773 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:37:54.339 Malloc0 00:37:54.339 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:54.339 Delay0 00:37:54.339 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:54.597 16:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:37:55.164 NULL1 00:37:55.164 16:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:37:55.164 16:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3336384 00:37:55.164 16:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:37:55.164 16:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3336384 00:37:55.164 16:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:56.538 Read completed with error (sct=0, sc=11) 00:37:56.538 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:56.538 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:56.538 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:56.538 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:37:56.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:56.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:56.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:56.796 16:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:37:56.796 16:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:37:57.054 true 00:37:57.055 16:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3336384 00:37:57.055 16:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:57.989 16:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:57.989 16:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:37:57.989 16:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:37:58.246 true 00:37:58.504 16:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3336384 00:37:58.504 16:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:37:58.760 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:59.017 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:37:59.017 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:37:59.275 true 00:37:59.275 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3336384 00:37:59.275 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:59.533 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:59.790 16:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:37:59.790 16:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:38:00.048 true 00:38:00.048 16:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3336384 00:38:00.048 16:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:00.982 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:00.982 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:00.982 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:00.982 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:01.240 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:38:01.240 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:38:01.497 true 00:38:01.497 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3336384 00:38:01.497 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:01.754 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:02.012 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:38:02.012 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:38:02.270 true 00:38:02.270 16:46:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3336384 00:38:02.270 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:03.203 16:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:03.203 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:03.461 16:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:38:03.461 16:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:38:03.718 true 00:38:03.719 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3336384 00:38:03.719 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:03.976 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:04.234 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:38:04.234 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:38:04.492 true 00:38:04.492 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3336384 00:38:04.492 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:04.749 16:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:05.034 16:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:38:05.034 16:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:38:05.317 true 00:38:05.317 16:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3336384 00:38:05.317 16:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:06.249 16:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:06.506 16:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:38:06.506 16:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:38:06.762 true 00:38:06.762 16:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3336384 00:38:06.762 16:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:07.019 16:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:07.277 16:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:38:07.277 16:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:38:07.840 true 00:38:07.840 16:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3336384 00:38:07.840 16:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:07.840 16:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:08.097 16:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:38:08.097 16:46:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:38:08.353 true 00:38:08.615 16:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3336384 00:38:08.615 16:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:09.547 16:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:09.547 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:38:09.547 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:38:09.804 true 00:38:09.804 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3336384 00:38:09.804 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:10.060 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:10.625 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 
00:38:10.625 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:38:10.625 true 00:38:10.625 16:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3336384 00:38:10.625 16:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:10.882 16:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:11.139 16:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:38:11.139 16:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:38:11.396 true 00:38:11.396 16:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3336384 00:38:11.396 16:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:12.330 16:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:12.588 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:12.588 16:46:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:38:12.588 16:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:38:12.846 true 00:38:12.846 16:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3336384 00:38:12.846 16:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:13.411 16:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:13.411 16:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:38:13.411 16:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:38:13.670 true 00:38:13.670 16:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3336384 00:38:13.670 16:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:13.928 16:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:38:14.493 16:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:38:14.493 16:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:38:14.493 true 00:38:14.493 16:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3336384 00:38:14.493 16:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:15.869 16:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:15.869 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:38:15.869 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:38:16.127 true 00:38:16.127 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3336384 00:38:16.127 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:16.386 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:38:16.644 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:38:16.644 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:38:16.902 true 00:38:16.902 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3336384 00:38:16.902 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:17.160 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:17.418 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:38:17.418 16:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:38:17.676 true 00:38:17.676 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3336384 00:38:17.676 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:18.614 16:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:18.874 16:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:38:18.874 16:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:38:19.132 true 00:38:19.132 16:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3336384 00:38:19.132 16:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:19.390 16:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:19.649 16:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:38:19.649 16:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:38:19.907 true 00:38:19.907 16:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3336384 00:38:19.907 16:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:20.165 16:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:20.423 16:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:38:20.423 16:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:38:20.681 true 00:38:20.681 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3336384 00:38:20.681 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:21.616 16:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:21.874 16:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:38:21.874 16:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:38:22.133 true 00:38:22.133 16:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3336384 00:38:22.133 16:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:22.391 16:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:22.649 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:38:22.649 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:38:22.908 true 00:38:22.908 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3336384 00:38:22.908 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:23.473 16:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:23.473 16:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:38:23.473 16:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:38:23.732 true 00:38:23.732 16:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3336384 00:38:23.732 16:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:24.667 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:38:24.667 16:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:24.925 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:24.925 16:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:38:24.925 16:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:38:25.182 true 00:38:25.182 16:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3336384 00:38:25.182 16:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:25.747 Initializing NVMe Controllers 00:38:25.747 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:25.747 Controller IO queue size 128, less than required. 00:38:25.747 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:25.747 Controller IO queue size 128, less than required. 00:38:25.747 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:25.747 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:38:25.747 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:38:25.747 Initialization complete. Launching workers. 
00:38:25.747 ======================================================== 00:38:25.747 Latency(us) 00:38:25.747 Device Information : IOPS MiB/s Average min max 00:38:25.747 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 484.69 0.24 108564.91 3894.81 1017298.73 00:38:25.747 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 6799.24 3.32 18826.26 1892.83 489393.34 00:38:25.747 ======================================================== 00:38:25.747 Total : 7283.93 3.56 24797.73 1892.83 1017298.73 00:38:25.747 00:38:25.747 16:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:25.747 16:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:38:25.747 16:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:38:26.005 true 00:38:26.005 16:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3336384 00:38:26.005 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3336384) - No such process 00:38:26.005 16:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3336384 00:38:26.005 16:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:26.263 16:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:26.829 16:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:38:26.829 16:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:38:26.829 16:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:38:26.829 16:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:26.829 16:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:38:26.829 null0 00:38:26.829 16:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:26.829 16:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:26.829 16:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:38:27.087 null1 00:38:27.087 16:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:27.087 16:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:27.087 16:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:38:27.345 null2 00:38:27.345 16:46:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:27.345 16:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:27.345 16:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:38:27.603 null3 00:38:27.603 16:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:27.603 16:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:27.603 16:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:38:27.860 null4 00:38:28.118 16:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:28.118 16:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:28.118 16:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:38:28.376 null5 00:38:28.376 16:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:28.376 16:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:28.376 16:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:38:28.645 null6 00:38:28.645 16:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:28.645 16:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:28.645 16:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:38:28.904 null7 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3340268 3340270 3340273 3340275 3340278 3340281 3340283 3340286 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:28.905 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:29.164 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:29.164 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:29.164 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:38:29.164 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:29.164 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:29.164 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:29.164 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:29.164 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:29.422 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:29.422 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:29.422 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:29.422 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:29.422 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:29.422 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:29.422 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:29.422 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:29.422 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:29.422 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:29.422 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:29.422 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:29.422 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:29.422 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:29.422 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:29.422 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:38:29.422 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:29.422 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:29.422 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:29.422 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:29.422 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:29.422 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:29.422 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:29.422 16:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:29.679 16:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:29.679 16:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:29.679 16:46:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:29.679 16:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:29.679 16:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:29.679 16:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:29.679 16:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:29.679 16:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:29.937 16:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:29.937 16:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:29.937 16:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:29.937 16:46:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:29.937 16:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:29.937 16:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:29.937 16:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:29.937 16:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:29.937 16:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:29.937 16:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:29.937 16:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:29.937 16:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:29.937 16:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:29.937 16:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:29.937 16:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:29.937 16:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:29.937 16:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:29.937 16:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:29.937 16:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:29.937 16:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:29.937 16:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:29.937 16:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:29.937 16:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:29.937 16:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:30.196 16:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:30.196 16:46:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:30.196 16:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:30.196 16:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:30.196 16:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:30.453 16:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:30.453 16:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:30.453 16:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:30.711 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:30.711 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:30.711 16:46:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:30.711 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:30.711 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:30.711 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:30.711 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:30.711 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:30.711 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:30.711 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:30.711 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:30.711 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:30.711 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:30.711 16:46:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:30.711 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:30.711 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:30.711 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:30.711 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:30.711 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:30.711 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:30.711 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:30.711 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:30.711 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:30.711 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:30.969 16:46:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:30.969 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:30.969 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:30.969 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:30.969 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:30.969 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:30.969 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:30.969 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:31.227 16:46:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:31.227 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:31.227 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:31.227 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:31.227 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:31.227 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:31.227 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:31.227 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:31.227 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:31.227 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:31.227 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:31.227 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:31.227 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:31.227 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:31.227 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:31.227 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:31.227 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:31.227 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:31.227 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:31.227 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:31.227 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:31.227 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:31.227 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:31.227 16:46:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:31.485 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:31.485 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:31.485 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:31.485 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:31.485 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:31.485 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:31.485 16:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:31.485 16:46:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:31.743 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:31.743 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:31.743 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:31.743 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:31.743 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:31.743 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:31.743 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:31.743 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:31.743 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:31.743 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:31.743 16:46:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:31.743 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:31.743 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:31.743 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:31.743 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:31.743 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:31.743 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:31.743 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:31.743 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:31.743 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:31.743 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:31.743 16:46:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:31.743 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:31.743 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:32.002 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:32.002 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:32.002 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:32.002 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:32.002 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:32.002 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:32.002 16:46:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:32.002 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:32.260 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:32.260 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:32.260 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:32.260 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:32.260 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:32.260 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:32.260 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:32.260 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:32.260 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
7 nqn.2016-06.io.spdk:cnode1 null6 00:38:32.260 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:32.260 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:32.260 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:32.260 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:32.260 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:32.260 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:32.518 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:32.518 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:32.518 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:32.518 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:32.518 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:32.518 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:38:32.518 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:32.518 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:32.518 16:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:32.776 16:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:32.776 16:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:32.776 16:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:32.776 16:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:32.776 16:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:32.776 16:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:32.776 16:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:32.776 16:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:33.034 16:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:33.034 16:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:33.034 16:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:33.034 16:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:33.034 16:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:33.034 16:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:33.034 16:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:33.034 16:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:33.034 16:46:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:33.034 16:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:33.034 16:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:33.034 16:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:33.034 16:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:33.034 16:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:33.034 16:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:33.034 16:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:33.034 16:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:33.034 16:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:33.034 16:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:33.034 16:46:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:33.034 16:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:33.034 16:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:33.034 16:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:33.034 16:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:33.292 16:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:33.292 16:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:33.292 16:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:33.292 16:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:33.292 16:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:33.292 16:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:33.292 16:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:33.292 16:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:33.550 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:33.550 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:33.550 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:33.550 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:33.550 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:33.550 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:33.550 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:33.550 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:33.550 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:33.550 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:33.550 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:33.550 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:33.550 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:33.550 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:33.550 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:33.550 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:33.550 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:33.550 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 
null5 00:38:33.550 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:33.550 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:33.550 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:33.550 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:33.550 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:33.550 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:33.809 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:33.809 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:33.809 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:33.809 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:38:33.809 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:33.809 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:33.809 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:33.809 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:34.097 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:34.097 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:34.097 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:34.097 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:34.097 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:34.097 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:34.097 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:34.097 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:34.097 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:34.097 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:34.097 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:34.097 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:34.097 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:34.097 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:34.097 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:34.097 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:34.097 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:34.097 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:34.097 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:34.097 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:34.097 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:34.097 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:34.097 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:34.097 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:34.385 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:34.385 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:34.385 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:34.385 16:46:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:34.385 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:34.385 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:34.660 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:34.660 16:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:34.660 16:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:34.660 16:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:34.660 16:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:34.660 16:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:34.660 16:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:34.660 16:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:38:34.918 16:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:34.918 16:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:34.918 16:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:34.918 16:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:34.918 16:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:34.918 16:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:34.918 16:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:34.918 16:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:34.918 16:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:34.918 16:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:34.918 16:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:38:34.918 16:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:38:34.918 16:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:38:34.918 16:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:38:34.918 16:46:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:34.918 16:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:38:34.918 16:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:34.918 16:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:34.918 rmmod nvme_tcp 00:38:34.918 rmmod nvme_fabrics 00:38:34.918 rmmod nvme_keyring 00:38:34.918 16:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:34.918 16:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:38:34.918 16:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:38:34.918 16:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@513 -- # '[' -n 3335863 ']' 00:38:34.918 16:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # killprocess 3335863 00:38:34.918 16:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 3335863 ']' 00:38:34.918 16:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 3335863 00:38:34.918 16:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:38:34.918 16:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:34.918 16:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3335863 00:38:34.918 16:46:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:34.918 16:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:34.918 16:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3335863' 00:38:34.918 killing process with pid 3335863 00:38:34.918 16:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 3335863 00:38:34.918 16:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 3335863 00:38:36.293 16:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:38:36.293 16:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:38:36.293 16:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:38:36.293 16:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:38:36.293 16:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-save 00:38:36.293 16:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:38:36.293 16:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-restore 00:38:36.293 16:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:36.293 16:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:36.293 16:46:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:36.293 16:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:36.293 16:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:38.826 16:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:38.826 00:38:38.826 real 0m49.009s 00:38:38.826 user 3m18.332s 00:38:38.826 sys 0m22.772s 00:38:38.826 16:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:38.826 16:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:38.826 ************************************ 00:38:38.826 END TEST nvmf_ns_hotplug_stress 00:38:38.826 ************************************ 00:38:38.826 16:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:38:38.826 16:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:38.826 16:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:38.826 16:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:38.826 ************************************ 00:38:38.826 START TEST nvmf_delete_subsystem 00:38:38.826 ************************************ 00:38:38.826 16:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:38:38.826 * Looking for test storage... 00:38:38.826 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:38.826 16:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:38.826 16:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:38:38.826 16:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:38.826 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:38.826 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:38.826 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:38.826 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:38.826 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:38:38.826 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:38:38.826 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:38:38.826 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:38:38.826 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:38:38.826 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:38:38.826 
16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:38:38.826 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:38.826 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:38:38.826 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:38:38.826 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:38.826 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:38.826 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:38:38.826 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:38:38.826 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:38.826 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:38:38.826 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:38:38.826 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:38:38.826 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:38:38.826 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:38.826 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:38:38.826 16:46:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:38:38.826 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:38.826 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:38.826 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:38.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:38.827 --rc genhtml_branch_coverage=1 00:38:38.827 --rc genhtml_function_coverage=1 00:38:38.827 --rc genhtml_legend=1 00:38:38.827 --rc geninfo_all_blocks=1 00:38:38.827 --rc geninfo_unexecuted_blocks=1 00:38:38.827 00:38:38.827 ' 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:38.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:38.827 --rc genhtml_branch_coverage=1 00:38:38.827 --rc genhtml_function_coverage=1 00:38:38.827 --rc genhtml_legend=1 00:38:38.827 --rc geninfo_all_blocks=1 00:38:38.827 --rc geninfo_unexecuted_blocks=1 00:38:38.827 00:38:38.827 ' 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:38.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:38.827 --rc genhtml_branch_coverage=1 00:38:38.827 --rc genhtml_function_coverage=1 00:38:38.827 --rc genhtml_legend=1 00:38:38.827 --rc geninfo_all_blocks=1 00:38:38.827 --rc 
geninfo_unexecuted_blocks=1 00:38:38.827 00:38:38.827 ' 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:38.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:38.827 --rc genhtml_branch_coverage=1 00:38:38.827 --rc genhtml_function_coverage=1 00:38:38.827 --rc genhtml_legend=1 00:38:38.827 --rc geninfo_all_blocks=1 00:38:38.827 --rc geninfo_unexecuted_blocks=1 00:38:38.827 00:38:38.827 ' 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:38.827 
16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:38.827 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:38:38.828 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:38:38.828 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:38:38.828 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:38.828 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:38.828 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:38.828 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:38:38.828 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:38:38.828 16:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:38:38.828 16:46:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@345 -- # [[ 
tcp == rdma ]] 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:40.730 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:40.730 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:38:40.730 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:40.731 16:46:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:40.731 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:40.731 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:38:40.731 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:38:40.731 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:40.731 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:38:40.731 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:40.731 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:38:40.731 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:38:40.731 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:40.731 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:40.731 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:40.731 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:38:40.731 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:38:40.731 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # is_hw=yes 00:38:40.731 16:46:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:38:40.731 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:38:40.731 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:38:40.731 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:40.731 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:40.731 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:40.731 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:40.731 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:40.731 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:40.731 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:40.731 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:40.731 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:40.731 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:40.731 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:40.731 16:46:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:40.731 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:40.731 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:40.731 16:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:40.731 16:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:40.731 16:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:40.731 16:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:40.731 16:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:40.731 16:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:40.731 16:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:40.731 16:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:40.731 16:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:40.731 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:40.731 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:38:40.731 00:38:40.731 --- 10.0.0.2 ping statistics --- 00:38:40.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:40.731 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:38:40.731 16:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:40.731 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:40.731 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:38:40.731 00:38:40.731 --- 10.0.0.1 ping statistics --- 00:38:40.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:40.731 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:38:40.731 16:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:40.731 16:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # return 0 00:38:40.731 16:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:38:40.731 16:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:40.731 16:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:38:40.731 16:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:38:40.731 16:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:40.731 16:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:38:40.731 16:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:38:40.731 
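The trace above shows `nvmf_tcp_init` splitting the two `ice` ports into a namespace-isolated target/initiator topology and verifying it with pings. A condensed sketch of the equivalent commands, using the interface names (`cvl_0_0`/`cvl_0_1`) and 10.0.0.0/24 addressing seen in this run; it requires root and the same renamed NICs, so it is illustrative rather than runnable elsewhere:

```shell
# Sketch of the netns topology built in the trace above.
# Assumes the two PCI net devices were already renamed to cvl_0_0 / cvl_0_1.
ip netns add cvl_0_0_ns_spdk                  # target side gets its own netns
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator stays in the default ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP
ping -c 1 10.0.0.2                            # sanity-check both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

With this in place, the target app is launched under `ip netns exec cvl_0_0_ns_spdk` (as the `NVMF_TARGET_NS_CMD` prefix in the trace shows), so target and initiator traffic traverse the physical links rather than loopback.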
16:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:38:40.731 16:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:38:40.731 16:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:40.731 16:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:40.731 16:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # nvmfpid=3343274 00:38:40.731 16:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:38:40.731 16:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # waitforlisten 3343274 00:38:40.731 16:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 3343274 ']' 00:38:40.731 16:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:40.731 16:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:40.731 16:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:40.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:40.731 16:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:40.731 16:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:40.731 [2024-09-29 16:46:41.219814] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:40.731 [2024-09-29 16:46:41.222269] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:38:40.731 [2024-09-29 16:46:41.222372] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:40.990 [2024-09-29 16:46:41.360919] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:41.248 [2024-09-29 16:46:41.617941] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:41.248 [2024-09-29 16:46:41.618028] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:41.248 [2024-09-29 16:46:41.618057] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:41.248 [2024-09-29 16:46:41.618078] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:41.248 [2024-09-29 16:46:41.618100] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:41.248 [2024-09-29 16:46:41.618219] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:41.248 [2024-09-29 16:46:41.618229] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:41.506 [2024-09-29 16:46:41.999318] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:38:41.506 [2024-09-29 16:46:42.000046] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:41.506 [2024-09-29 16:46:42.000394] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:41.765 16:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:41.765 16:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:38:41.765 16:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:38:41.765 16:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:41.765 16:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:41.765 16:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:41.765 16:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:41.765 16:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:41.765 16:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:41.765 [2024-09-29 16:46:42.251264] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:41.765 16:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:41.765 16:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # 
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:41.765 16:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:41.765 16:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:41.765 16:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:41.765 16:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:41.765 16:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:41.765 16:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:41.765 [2024-09-29 16:46:42.271555] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:41.765 16:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:41.765 16:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:38:41.765 16:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:41.765 16:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:41.765 NULL1 00:38:41.765 16:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:41.765 16:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 
1000000 -t 1000000 -w 1000000 -n 1000000 00:38:41.765 16:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:41.765 16:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:41.765 Delay0 00:38:41.765 16:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:41.765 16:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:41.765 16:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:41.765 16:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:41.765 16:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:41.765 16:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3343425 00:38:41.765 16:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:38:41.765 16:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:38:42.023 [2024-09-29 16:46:42.392779] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
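The RPC calls traced above set up the device under test: a null bdev wrapped in a delay bdev with 1-second (1000000 µs) latencies, exposed as a namespace of `cnode1`, with `spdk_nvme_perf` driving queue-depth-128 I/O against it so that `nvmf_delete_subsystem` fires while commands are still in flight. A condensed replay using `scripts/rpc.py`, assuming a running `nvmf_tgt` on the default `/var/tmp/spdk.sock` (not runnable outside an SPDK test host):

```shell
# Replay of the RPC sequence from the trace, flags as in this run.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_null_create NULL1 1000 512          # 1000 MiB backing bdev, 512 B blocks
rpc.py bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000  # ~1 s average/p99 latencies
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Drive I/O against the slow namespace, then delete the subsystem mid-flight:
spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
sleep 2
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
```

The delay bdev guarantees a deep backlog of outstanding commands, which is why the deletion that follows produces the burst of aborted completions seen below.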
00:38:43.919 16:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:43.919 16:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:43.919 16:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:44.177 Write completed with error (sct=0, sc=8) 00:38:44.177 Read completed with error (sct=0, sc=8) 00:38:44.177 Read completed with error (sct=0, sc=8) 00:38:44.177 Read completed with error (sct=0, sc=8) 00:38:44.177 starting I/O failed: -6 00:38:44.177 Read completed with error (sct=0, sc=8) 00:38:44.177 Read completed with error (sct=0, sc=8) 00:38:44.177 Read completed with error (sct=0, sc=8) 00:38:44.177 Read completed with error (sct=0, sc=8) 00:38:44.177 starting I/O failed: -6 00:38:44.177 Read completed with error (sct=0, sc=8) 00:38:44.177 Read completed with error (sct=0, sc=8) 00:38:44.177 Read completed with error (sct=0, sc=8) 00:38:44.177 Write completed with error (sct=0, sc=8) 00:38:44.177 starting I/O failed: -6 00:38:44.177 Read completed with error (sct=0, sc=8) 00:38:44.177 Read completed with error (sct=0, sc=8) 00:38:44.177 Read completed with error (sct=0, sc=8) 00:38:44.177 Read completed with error (sct=0, sc=8) 00:38:44.177 starting I/O failed: -6 00:38:44.177 Read completed with error (sct=0, sc=8) 00:38:44.177 Read completed with error (sct=0, sc=8) 00:38:44.177 Write completed with error (sct=0, sc=8) 00:38:44.177 Write completed with error (sct=0, sc=8) 00:38:44.177 starting I/O failed: -6 00:38:44.177 Read completed with error (sct=0, sc=8) 00:38:44.177 Read completed with error (sct=0, sc=8) 00:38:44.177 Write completed with error (sct=0, sc=8) 00:38:44.177 Write completed with error (sct=0, sc=8) 00:38:44.177 starting I/O failed: -6 00:38:44.177 Write completed with error (sct=0, 
sc=8)
00:38:44.177 Read completed with error (sct=0, sc=8)
00:38:44.177 Write completed with error (sct=0, sc=8)
00:38:44.177 starting I/O failed: -6
[... several hundred repeated "Read completed with error (sct=0, sc=8)", "Write completed with error (sct=0, sc=8)", and "starting I/O failed: -6" lines trimmed; only the distinct messages interleaved among them are kept below ...]
00:38:44.178 [2024-09-29 16:46:44.529412] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016600 is same with the state(6) to be set
00:38:45.112 [2024-09-29 16:46:45.492955] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015c00 is same with the state(6) to be set
00:38:45.112 [2024-09-29 16:46:45.528577] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020380 is same with the state(6) to be set
00:38:45.112 [2024-09-29 16:46:45.530216] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016880 is same with the state(6) to be set
00:38:45.112 [2024-09-29 16:46:45.530708] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016b00 is same with the state(6) to be set
00:38:45.112 [2024-09-29 16:46:45.531177] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016380 is same with the state(6) to be set
00:38:45.112 Initializing NVMe Controllers
00:38:45.112 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:38:45.112 Controller IO queue size 128, less than required.
00:38:45.112 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:45.112 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:38:45.112 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:38:45.112 Initialization complete. Launching workers. 00:38:45.112 ======================================================== 00:38:45.112 Latency(us) 00:38:45.112 Device Information : IOPS MiB/s Average min max 00:38:45.112 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 194.90 0.10 945682.80 3468.54 1017175.24 00:38:45.112 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 155.82 0.08 874342.06 672.49 1014660.96 00:38:45.112 ======================================================== 00:38:45.112 Total : 350.72 0.17 913986.98 672.49 1017175.24 00:38:45.112 00:38:45.113 16:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:45.113 16:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:38:45.113 [2024-09-29 16:46:45.535840] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000015c00 (9): Bad file descriptor 00:38:45.113 16:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3343425 00:38:45.113 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:38:45.113 16:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:38:45.680 16:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:38:45.680 16:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 
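The Total row in the perf summary above is the IOPS-weighted mean of the two per-core averages; this can be checked directly from the printed figures (a sanity check on the table, not part of the test scripts):

```shell
# Recompute the Total average latency as the IOPS-weighted mean of the
# per-core averages, using the figures printed in the summary above.
awk 'BEGIN {
    iops2 = 194.90; avg2 = 945682.80   # "from core 2" row
    iops3 = 155.82; avg3 = 874342.06   # "from core 3" row
    printf "%.2f us\n", (iops2 * avg2 + iops3 * avg3) / (iops2 + iops3)
}'
# Agrees with the printed Total average (913986.98 us) to within
# rounding of the displayed inputs.
```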
-- # kill -0 3343425 00:38:45.680 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3343425) - No such process 00:38:45.680 16:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3343425 00:38:45.680 16:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:38:45.680 16:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3343425 00:38:45.680 16:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:38:45.680 16:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:45.680 16:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:38:45.680 16:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:45.680 16:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3343425 00:38:45.680 16:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:38:45.680 16:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:45.680 16:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:45.681 16:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:45.681 16:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:45.681 16:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:45.681 16:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:45.681 16:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:45.681 16:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:45.681 16:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:45.681 16:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:45.681 [2024-09-29 16:46:46.055586] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:45.681 16:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:45.681 16:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:45.681 16:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:45.681 16:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:45.681 16:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:45.681 16:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3343823 00:38:45.681 16:46:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:38:45.681 16:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3343823 00:38:45.681 16:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:38:45.681 16:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:45.681 [2024-09-29 16:46:46.162506] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:38:46.246 16:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:46.246 16:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3343823 00:38:46.246 16:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:46.810 16:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:46.810 16:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3343823 00:38:46.810 16:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:47.068 16:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:47.068 16:46:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3343823 00:38:47.068 16:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:47.632 16:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:47.632 16:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3343823 00:38:47.632 16:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:48.197 16:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:48.197 16:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3343823 00:38:48.197 16:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:48.761 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:48.761 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3343823 00:38:48.761 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:49.018 Initializing NVMe Controllers 00:38:49.018 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:49.018 Controller IO queue size 128, less than required. 00:38:49.018 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:38:49.018 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:38:49.018 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:38:49.018 Initialization complete. Launching workers. 00:38:49.018 ======================================================== 00:38:49.018 Latency(us) 00:38:49.018 Device Information : IOPS MiB/s Average min max 00:38:49.018 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005791.06 1000399.09 1013612.57 00:38:49.018 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005923.79 1000460.02 1042983.09 00:38:49.018 ======================================================== 00:38:49.018 Total : 256.00 0.12 1005857.43 1000399.09 1042983.09 00:38:49.018 00:38:49.275 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:49.275 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3343823 00:38:49.275 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3343823) - No such process 00:38:49.275 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3343823 00:38:49.275 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:38:49.275 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:38:49.275 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # nvmfcleanup 00:38:49.275 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:38:49.275 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:49.275 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:38:49.275 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:49.275 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:49.275 rmmod nvme_tcp 00:38:49.275 rmmod nvme_fabrics 00:38:49.275 rmmod nvme_keyring 00:38:49.275 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:49.275 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:38:49.275 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:38:49.275 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@513 -- # '[' -n 3343274 ']' 00:38:49.275 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # killprocess 3343274 00:38:49.275 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 3343274 ']' 00:38:49.275 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 3343274 00:38:49.275 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:38:49.275 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:49.275 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3343274 00:38:49.275 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:38:49.275 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:49.275 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3343274' 00:38:49.275 killing process with pid 3343274 00:38:49.275 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 3343274 00:38:49.275 16:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 3343274 00:38:50.649 16:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:38:50.649 16:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:38:50.649 16:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:38:50.649 16:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:38:50.649 16:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-save 00:38:50.649 16:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:38:50.649 16:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-restore 00:38:50.649 16:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:50.649 16:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:50.649 16:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:38:50.649 16:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:50.649 16:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:52.552 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:52.552 00:38:52.552 real 0m14.192s 00:38:52.552 user 0m26.651s 00:38:52.552 sys 0m3.925s 00:38:52.552 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:52.552 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:52.552 ************************************ 00:38:52.552 END TEST nvmf_delete_subsystem 00:38:52.552 ************************************ 00:38:52.552 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:38:52.552 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:52.552 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:52.552 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:52.810 ************************************ 00:38:52.810 START TEST nvmf_host_management 00:38:52.810 ************************************ 00:38:52.810 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:38:52.810 * Looking for test storage... 
00:38:52.811 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:38:52.811 16:46:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:52.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:52.811 --rc genhtml_branch_coverage=1 00:38:52.811 --rc genhtml_function_coverage=1 00:38:52.811 --rc genhtml_legend=1 00:38:52.811 --rc geninfo_all_blocks=1 00:38:52.811 --rc geninfo_unexecuted_blocks=1 00:38:52.811 00:38:52.811 ' 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:52.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:52.811 --rc genhtml_branch_coverage=1 00:38:52.811 --rc genhtml_function_coverage=1 00:38:52.811 --rc genhtml_legend=1 00:38:52.811 --rc geninfo_all_blocks=1 00:38:52.811 --rc geninfo_unexecuted_blocks=1 00:38:52.811 00:38:52.811 ' 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:52.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:52.811 --rc genhtml_branch_coverage=1 00:38:52.811 --rc genhtml_function_coverage=1 00:38:52.811 --rc genhtml_legend=1 00:38:52.811 --rc geninfo_all_blocks=1 00:38:52.811 --rc geninfo_unexecuted_blocks=1 00:38:52.811 00:38:52.811 ' 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:52.811 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:52.811 --rc genhtml_branch_coverage=1 00:38:52.811 --rc genhtml_function_coverage=1 00:38:52.811 --rc genhtml_legend=1 00:38:52.811 --rc geninfo_all_blocks=1 00:38:52.811 --rc geninfo_unexecuted_blocks=1 00:38:52.811 00:38:52.811 ' 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:52.811 16:46:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:52.811 
16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:52.811 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:52.812 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:52.812 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:52.812 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:52.812 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:52.812 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:38:52.812 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:52.812 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:52.812 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:52.812 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:52.812 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:38:52.812 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:38:52.812 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:52.812 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:38:52.812 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:38:52.812 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:38:52.812 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:52.812 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:52.812 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:52.812 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:38:52.812 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:38:52.812 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:38:52.812 16:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:38:55.340 
16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:55.340 16:46:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:55.340 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:38:55.340 16:46:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:55.340 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:38:55.340 16:46:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:55.340 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:55.340 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:55.340 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:38:55.341 16:46:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # is_hw=yes 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:55.341 
16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:55.341 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:55.341 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:38:55.341 00:38:55.341 --- 10.0.0.2 ping statistics --- 00:38:55.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:55.341 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:55.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:55.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:38:55.341 00:38:55.341 --- 10.0.0.1 ping statistics --- 00:38:55.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:55.341 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # return 0 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # 
'[' tcp == tcp ']' 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=3346373 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 3346373 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3346373 ']' 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:55.341 16:46:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:55.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:55.341 16:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:55.341 [2024-09-29 16:46:55.578707] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:55.341 [2024-09-29 16:46:55.581213] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:38:55.341 [2024-09-29 16:46:55.581310] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:55.341 [2024-09-29 16:46:55.723074] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:55.599 [2024-09-29 16:46:55.984913] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:55.599 [2024-09-29 16:46:55.985014] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:55.599 [2024-09-29 16:46:55.985043] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:55.599 [2024-09-29 16:46:55.985066] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:55.599 [2024-09-29 16:46:55.985089] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:55.599 [2024-09-29 16:46:55.985242] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:38:55.599 [2024-09-29 16:46:55.985337] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:38:55.599 [2024-09-29 16:46:55.985382] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:55.599 [2024-09-29 16:46:55.985394] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:38:55.856 [2024-09-29 16:46:56.352200] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:55.856 [2024-09-29 16:46:56.353264] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:55.856 [2024-09-29 16:46:56.354429] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:55.856 [2024-09-29 16:46:56.355177] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:55.856 [2024-09-29 16:46:56.355459] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:38:56.114 16:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:56.114 16:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:38:56.114 16:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:38:56.114 16:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:56.114 16:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:56.114 16:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:56.114 16:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:56.114 16:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:56.114 16:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:56.114 [2024-09-29 16:46:56.558480] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:56.114 16:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:56.114 16:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:38:56.114 16:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:56.114 16:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:56.114 16:46:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:38:56.114 16:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:38:56.114 16:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:38:56.114 16:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:56.114 16:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:56.114 Malloc0 00:38:56.373 [2024-09-29 16:46:56.682594] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:56.373 16:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:56.373 16:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:38:56.373 16:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:56.373 16:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:56.373 16:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3346596 00:38:56.373 16:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3346596 /var/tmp/bdevperf.sock 00:38:56.373 16:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3346596 ']' 00:38:56.373 16:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:38:56.373 16:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:38:56.373 16:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:38:56.373 16:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:56.373 16:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:38:56.373 16:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:56.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
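The target-side bring-up traced above (host_management.sh@16-30: `nvmfappstart`, `nvmf_create_transport -t tcp -o -u 8192`, a `Malloc0` bdev, and a listener on 10.0.0.2:4420) follows roughly this RPC sequence. This is a hedged sketch, not the script itself: the Malloc geometry and serial number are illustrative assumptions, and `rpc_cmd` (which in the real suite wraps `scripts/rpc.py` against `/var/tmp/spdk.sock`) is stubbed to echo its arguments.

```shell
# Placeholder for the autotest rpc_cmd wrapper; here it only echoes the call.
rpc_cmd() { echo "rpc: $*"; }

# Sketch of the target setup implied by the log above. Malloc size/block size
# (64 MiB / 512 B) and the SPDK0 serial are assumptions for illustration.
setup_target_sketch() {
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
}
```

The "TCP Target Listening on 10.0.0.2 port 4420" notice in the log corresponds to the final `nvmf_subsystem_add_listener` step.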
00:38:56.373 16:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:56.373 16:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:38:56.373 16:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:56.373 16:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:38:56.373 16:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:38:56.373 { 00:38:56.373 "params": { 00:38:56.373 "name": "Nvme$subsystem", 00:38:56.373 "trtype": "$TEST_TRANSPORT", 00:38:56.373 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:56.373 "adrfam": "ipv4", 00:38:56.373 "trsvcid": "$NVMF_PORT", 00:38:56.373 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:56.373 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:56.373 "hdgst": ${hdgst:-false}, 00:38:56.373 "ddgst": ${ddgst:-false} 00:38:56.373 }, 00:38:56.373 "method": "bdev_nvme_attach_controller" 00:38:56.373 } 00:38:56.373 EOF 00:38:56.373 )") 00:38:56.373 16:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:38:56.373 16:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 
00:38:56.373 16:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:38:56.373 16:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:38:56.373 "params": { 00:38:56.373 "name": "Nvme0", 00:38:56.373 "trtype": "tcp", 00:38:56.373 "traddr": "10.0.0.2", 00:38:56.373 "adrfam": "ipv4", 00:38:56.373 "trsvcid": "4420", 00:38:56.373 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:56.373 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:56.373 "hdgst": false, 00:38:56.373 "ddgst": false 00:38:56.373 }, 00:38:56.373 "method": "bdev_nvme_attach_controller" 00:38:56.373 }' 00:38:56.373 [2024-09-29 16:46:56.795405] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:38:56.373 [2024-09-29 16:46:56.795543] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3346596 ] 00:38:56.373 [2024-09-29 16:46:56.924398] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:56.630 [2024-09-29 16:46:57.160459] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:57.195 Running I/O for 10 seconds... 
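The `gen_nvmf_target_json` expansion visible above (nvmf/common.sh@556-582) builds the bdevperf `--json` config by emitting one `bdev_nvme_attach_controller` stanza per subsystem index from a heredoc, then joining the stanzas with `IFS=,` before piping through `jq`. A minimal sketch of that pattern, with the fixed traddr/trsvcid values mirroring the log and the final `jq` normalization step omitted:

```shell
# Emit one attach_controller JSON stanza per subsystem index given as an
# argument (default: 0), joined with commas — the shape seen in the log.
gen_nvmf_target_json_sketch() {
    local subsystem config=()
    for subsystem in "${@:-0}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    local IFS=,
    printf '%s\n' "${config[*]}"
}
```

Note the `local IFS=,` trick: `"${config[*]}"` joins the array elements with the first character of IFS, which is what produces the comma-separated list `jq` later wraps.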
00:38:57.454 16:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:57.454 16:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:38:57.454 16:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:38:57.454 16:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.454 16:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:57.454 16:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.454 16:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:57.454 16:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:38:57.454 16:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:38:57.454 16:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:38:57.454 16:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:38:57.454 16:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:38:57.454 16:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:38:57.454 16:46:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:38:57.454 16:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:38:57.454 16:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:38:57.454 16:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.454 16:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:57.454 16:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.454 16:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=181 00:38:57.454 16:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 181 -ge 100 ']' 00:38:57.454 16:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:38:57.454 16:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:38:57.454 16:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:38:57.454 16:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:38:57.454 16:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.454 16:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:57.454 
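The `waitforio` loop traced above (host_management.sh@52-64) polls `bdev_get_iostat` over the bdevperf RPC socket and succeeds once `num_read_ops` crosses the threshold — here the log reads 181 >= 100, so `ret=0` and the loop breaks on the first probe. A sketch of that loop, assuming the same `rpc_cmd`/`jq` pipeline as the log; the sleep interval between retries is an assumption:

```shell
# Poll read-op count for a bdev until it passes 100, up to 10 attempts.
# rpc_cmd is the autotest wrapper around scripts/rpc.py (not defined here).
waitforio_sketch() {
    local sock="$1" bdev="$2" ret=1 i count
    for ((i = 10; i != 0; i--)); do
        count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        if [ "$count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25    # assumed back-off; the real helper's pacing may differ
    done
    return $ret
}
```

Once this returns 0, the test proceeds to `nvmf_subsystem_remove_host`, which is what triggers the long run of `ABORTED - SQ DELETION` completions that follows in the log.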
[2024-09-29 16:46:57.818461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:38:57.454 [2024-09-29 16:46:57.818524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.454 [2024-09-29 16:46:57.818554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:38:57.454 [2024-09-29 16:46:57.818576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.454 [2024-09-29 16:46:57.818597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:38:57.454 [2024-09-29 16:46:57.818617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.454 [2024-09-29 16:46:57.818639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:38:57.454 [2024-09-29 16:46:57.818669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.454 [2024-09-29 16:46:57.818700] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:38:57.454 [2024-09-29 16:46:57.818825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.454 [2024-09-29 16:46:57.818856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.454 [2024-09-29 16:46:57.818893] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.454 [2024-09-29 16:46:57.818932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.454 [2024-09-29 16:46:57.818959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.454 [2024-09-29 16:46:57.818989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.454 [2024-09-29 16:46:57.819012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.454 [2024-09-29 16:46:57.819033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.454 [2024-09-29 16:46:57.819056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.454 [2024-09-29 16:46:57.819077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.454 [2024-09-29 16:46:57.819100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.454 [2024-09-29 16:46:57.819120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.454 [2024-09-29 16:46:57.819143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.454 [2024-09-29 16:46:57.819164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.454 [2024-09-29 16:46:57.819197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.454 [2024-09-29 16:46:57.819218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.454 [2024-09-29 16:46:57.819241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.454 [2024-09-29 16:46:57.819262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.455 [2024-09-29 16:46:57.819285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.455 [2024-09-29 16:46:57.819306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.455 [2024-09-29 16:46:57.819329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.455 [2024-09-29 16:46:57.819349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.455 [2024-09-29 16:46:57.819372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.455 [2024-09-29 16:46:57.819393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.455 [2024-09-29 16:46:57.819415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:38:57.455 [2024-09-29 16:46:57.819436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.455 [2024-09-29 16:46:57.819459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.455 [2024-09-29 16:46:57.819480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.455 [2024-09-29 16:46:57.819503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.455 [2024-09-29 16:46:57.819524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.455 [2024-09-29 16:46:57.819547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.455 [2024-09-29 16:46:57.819568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.455 [2024-09-29 16:46:57.819590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.455 [2024-09-29 16:46:57.819610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.455 [2024-09-29 16:46:57.819633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.455 [2024-09-29 16:46:57.819654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.455 [2024-09-29 16:46:57.819697] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.455 [2024-09-29 16:46:57.819720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.455 [2024-09-29 16:46:57.819743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.455 [2024-09-29 16:46:57.819769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.455 [2024-09-29 16:46:57.819793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.455 [2024-09-29 16:46:57.819814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.455 [2024-09-29 16:46:57.819836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.455 [2024-09-29 16:46:57.819857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.455 [2024-09-29 16:46:57.819880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.455 [2024-09-29 16:46:57.819900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.455 [2024-09-29 16:46:57.819922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.455 [2024-09-29 16:46:57.819942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.455 [2024-09-29 16:46:57.819965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.455 [2024-09-29 16:46:57.819996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.455 [2024-09-29 16:46:57.820019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.455 [2024-09-29 16:46:57.820039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.455 [2024-09-29 16:46:57.820066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.455 [2024-09-29 16:46:57.820087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.455 [2024-09-29 16:46:57.820110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.455 [2024-09-29 16:46:57.820130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.455 [2024-09-29 16:46:57.820152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.455 [2024-09-29 16:46:57.820173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.455 [2024-09-29 16:46:57.820196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:38:57.455 [2024-09-29 16:46:57.820227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.455 [2024-09-29 16:46:57.820249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.455 [2024-09-29 16:46:57.820269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.455 [2024-09-29 16:46:57.820296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.455 [2024-09-29 16:46:57.820316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.455 [2024-09-29 16:46:57.820344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.455 [2024-09-29 16:46:57.820365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.455 [2024-09-29 16:46:57.820387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.455 [2024-09-29 16:46:57.820408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.455 [2024-09-29 16:46:57.820440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.455 [2024-09-29 16:46:57.820461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.455 [2024-09-29 16:46:57.820483] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.455 [2024-09-29 16:46:57.820513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.455 [2024-09-29 16:46:57.820536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.455 [2024-09-29 16:46:57.820556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.455 [2024-09-29 16:46:57.820579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.455 [2024-09-29 16:46:57.820599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.455 [2024-09-29 16:46:57.820622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.455 [2024-09-29 16:46:57.820642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.455 [2024-09-29 16:46:57.820669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.455 [2024-09-29 16:46:57.820699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.455 [2024-09-29 16:46:57.820725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.455 [2024-09-29 16:46:57.820746] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.455 [2024-09-29 16:46:57.820769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.456 [2024-09-29 16:46:57.820789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.456 [2024-09-29 16:46:57.820812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.456 [2024-09-29 16:46:57.820833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.456 [2024-09-29 16:46:57.820855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.456 [2024-09-29 16:46:57.820876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.456 [2024-09-29 16:46:57.820898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.456 [2024-09-29 16:46:57.820923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.456 [2024-09-29 16:46:57.820947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.456 [2024-09-29 16:46:57.820967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.456 [2024-09-29 16:46:57.820990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.456 [2024-09-29 16:46:57.821011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.456 [2024-09-29 16:46:57.821038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.456 [2024-09-29 16:46:57.821059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.456 [2024-09-29 16:46:57.821092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.456 [2024-09-29 16:46:57.821112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.456 [2024-09-29 16:46:57.821135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.456 [2024-09-29 16:46:57.821155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.456 [2024-09-29 16:46:57.821178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.456 [2024-09-29 16:46:57.821199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.456 [2024-09-29 16:46:57.821221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.456 [2024-09-29 16:46:57.821242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.456 [2024-09-29 
16:46:57.821264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.456 [2024-09-29 16:46:57.821295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.456 [2024-09-29 16:46:57.821318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.456 [2024-09-29 16:46:57.821345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.456 [2024-09-29 16:46:57.821368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.456 [2024-09-29 16:46:57.821388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.456 [2024-09-29 16:46:57.821411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.456 [2024-09-29 16:46:57.821432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.456 [2024-09-29 16:46:57.821455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.456 [2024-09-29 16:46:57.821484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.456 [2024-09-29 16:46:57.821512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.456 [2024-09-29 16:46:57.821544] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.456 [2024-09-29 16:46:57.821568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.456 [2024-09-29 16:46:57.821588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.456 [2024-09-29 16:46:57.821611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.456 [2024-09-29 16:46:57.821631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.456 [2024-09-29 16:46:57.821654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.456 [2024-09-29 16:46:57.821682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.456 [2024-09-29 16:46:57.821718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.456 [2024-09-29 16:46:57.821739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.456 [2024-09-29 16:46:57.821762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.456 [2024-09-29 16:46:57.821782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.456 [2024-09-29 16:46:57.821804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 
nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.456 [2024-09-29 16:46:57.821824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.456 [2024-09-29 16:46:57.822125] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f2f00 was disconnected and freed. reset controller. 00:38:57.456 16:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.456 16:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:38:57.456 16:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.456 [2024-09-29 16:46:57.823455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:38:57.456 16:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:57.456 task offset: 30976 on job bdev=Nvme0n1 fails 00:38:57.456 00:38:57.456 Latency(us) 00:38:57.456 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:57.456 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:38:57.456 Job: Nvme0n1 ended in about 0.19 seconds with error 00:38:57.456 Verification LBA range: start 0x0 length 0x400 00:38:57.456 Nvme0n1 : 0.19 988.97 61.81 329.66 0.00 45880.70 4271.98 45438.29 00:38:57.456 =================================================================================================================== 00:38:57.456 Total : 988.97 61.81 329.66 0.00 45880.70 4271.98 45438.29 00:38:57.456 [2024-09-29 16:46:57.828402] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:57.456 [2024-09-29 16:46:57.828478] 
nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:38:57.456 16:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.456 16:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:38:57.456 [2024-09-29 16:46:57.920883] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:38:58.389 16:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3346596 00:38:58.389 16:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:38:58.389 16:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:38:58.389 16:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:38:58.389 16:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:38:58.389 16:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:38:58.389 16:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:38:58.389 16:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:38:58.389 { 00:38:58.389 "params": { 00:38:58.389 "name": "Nvme$subsystem", 00:38:58.389 "trtype": "$TEST_TRANSPORT", 00:38:58.389 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:38:58.389 "adrfam": "ipv4", 00:38:58.389 "trsvcid": "$NVMF_PORT", 00:38:58.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:58.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:58.389 "hdgst": ${hdgst:-false}, 00:38:58.389 "ddgst": ${ddgst:-false} 00:38:58.389 }, 00:38:58.389 "method": "bdev_nvme_attach_controller" 00:38:58.389 } 00:38:58.389 EOF 00:38:58.389 )") 00:38:58.389 16:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:38:58.389 16:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:38:58.389 16:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:38:58.389 16:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:38:58.389 "params": { 00:38:58.389 "name": "Nvme0", 00:38:58.389 "trtype": "tcp", 00:38:58.389 "traddr": "10.0.0.2", 00:38:58.389 "adrfam": "ipv4", 00:38:58.389 "trsvcid": "4420", 00:38:58.389 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:58.389 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:58.389 "hdgst": false, 00:38:58.389 "ddgst": false 00:38:58.389 }, 00:38:58.389 "method": "bdev_nvme_attach_controller" 00:38:58.389 }' 00:38:58.389 [2024-09-29 16:46:58.923526] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:38:58.389 [2024-09-29 16:46:58.923701] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3346790 ] 00:38:58.646 [2024-09-29 16:46:59.056818] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:58.904 [2024-09-29 16:46:59.294406] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:59.469 Running I/O for 1 seconds... 
00:39:00.402 1344.00 IOPS, 84.00 MiB/s 00:39:00.402 Latency(us) 00:39:00.402 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:00.402 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:00.402 Verification LBA range: start 0x0 length 0x400 00:39:00.402 Nvme0n1 : 1.03 1365.50 85.34 0.00 0.00 46078.84 7281.78 40777.96 00:39:00.402 =================================================================================================================== 00:39:00.402 Total : 1365.50 85.34 0.00 0.00 46078.84 7281.78 40777.96 00:39:01.336 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 3346596 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:39:01.336 16:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:39:01.336 16:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:39:01.336 16:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:39:01.336 16:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:39:01.336 16:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:39:01.336 16:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup 00:39:01.336 16:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:39:01.336 16:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:01.336 16:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:39:01.336 16:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:01.336 16:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:01.336 rmmod nvme_tcp 00:39:01.336 rmmod nvme_fabrics 00:39:01.336 rmmod nvme_keyring 00:39:01.336 16:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:01.336 16:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:39:01.336 16:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:39:01.336 16:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 3346373 ']' 00:39:01.336 16:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 3346373 00:39:01.336 16:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 3346373 ']' 00:39:01.336 16:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 3346373 00:39:01.336 16:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:39:01.336 16:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:01.336 16:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3346373 00:39:01.594 16:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # 
process_name=reactor_1 00:39:01.594 16:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:39:01.594 16:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3346373' 00:39:01.594 killing process with pid 3346373 00:39:01.594 16:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 3346373 00:39:01.594 16:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 3346373 00:39:02.966 [2024-09-29 16:47:03.366401] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:39:02.966 16:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:39:02.966 16:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:39:02.966 16:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:39:02.966 16:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:39:02.966 16:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-save 00:39:02.966 16:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:39:02.966 16:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-restore 00:39:02.966 16:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:02.966 16:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:02.966 16:47:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:02.966 16:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:02.966 16:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:39:05.499 00:39:05.499 real 0m12.376s 00:39:05.499 user 0m27.310s 00:39:05.499 sys 0m4.814s 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:05.499 ************************************ 00:39:05.499 END TEST nvmf_host_management 00:39:05.499 ************************************ 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:05.499 ************************************ 00:39:05.499 START TEST nvmf_lvol 00:39:05.499 ************************************ 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:39:05.499 * Looking for test storage... 00:39:05.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:39:05.499 16:47:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:05.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:05.499 --rc genhtml_branch_coverage=1 00:39:05.499 --rc genhtml_function_coverage=1 00:39:05.499 --rc genhtml_legend=1 00:39:05.499 --rc geninfo_all_blocks=1 00:39:05.499 --rc geninfo_unexecuted_blocks=1 00:39:05.499 00:39:05.499 ' 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:05.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:05.499 --rc genhtml_branch_coverage=1 00:39:05.499 --rc genhtml_function_coverage=1 00:39:05.499 --rc genhtml_legend=1 00:39:05.499 --rc geninfo_all_blocks=1 00:39:05.499 --rc geninfo_unexecuted_blocks=1 00:39:05.499 00:39:05.499 ' 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:05.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:05.499 --rc genhtml_branch_coverage=1 00:39:05.499 --rc genhtml_function_coverage=1 00:39:05.499 --rc genhtml_legend=1 00:39:05.499 --rc geninfo_all_blocks=1 00:39:05.499 --rc geninfo_unexecuted_blocks=1 00:39:05.499 00:39:05.499 ' 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:05.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:05.499 --rc genhtml_branch_coverage=1 00:39:05.499 --rc genhtml_function_coverage=1 00:39:05.499 --rc genhtml_legend=1 00:39:05.499 --rc geninfo_all_blocks=1 00:39:05.499 --rc 
geninfo_unexecuted_blocks=1 00:39:05.499 00:39:05.499 ' 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:05.499 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:05.500 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:05.500 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:05.500 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:39:05.500 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:05.500 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:39:05.500 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:05.500 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:05.500 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:05.500 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:05.500 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:05.500 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:05.500 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:05.500 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:05.500 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:05.500 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:05.500 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:05.500 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:05.500 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:39:05.500 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:39:05.500 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:05.500 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:39:05.500 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:39:05.500 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:05.500 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:39:05.500 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:39:05.500 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:39:05.500 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:05.500 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:05.500 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:05.500 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:39:05.500 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:39:05.500 
16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:39:05.500 16:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:39:07.402 16:47:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:07.402 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:07.402 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:07.402 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@407 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:07.402 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:07.403 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # is_hw=yes 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:07.403 16:47:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:07.403 
16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:07.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:07.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:39:07.403 00:39:07.403 --- 10.0.0.2 ping statistics --- 00:39:07.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:07.403 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:07.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:07.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:39:07.403 00:39:07.403 --- 10.0.0.1 ping statistics --- 00:39:07.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:07.403 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # return 0 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=3349216 
00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 3349216 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 3349216 ']' 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:07.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:07.403 16:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:07.661 [2024-09-29 16:47:07.968806] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:07.661 [2024-09-29 16:47:07.971487] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:39:07.661 [2024-09-29 16:47:07.971593] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:07.661 [2024-09-29 16:47:08.114154] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:07.919 [2024-09-29 16:47:08.364370] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:07.919 [2024-09-29 16:47:08.364447] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:07.919 [2024-09-29 16:47:08.364476] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:07.919 [2024-09-29 16:47:08.364498] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:07.919 [2024-09-29 16:47:08.364519] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:07.919 [2024-09-29 16:47:08.364692] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:39:07.919 [2024-09-29 16:47:08.364739] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:39:07.919 [2024-09-29 16:47:08.364748] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:39:08.177 [2024-09-29 16:47:08.736247] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:08.177 [2024-09-29 16:47:08.737334] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:08.177 [2024-09-29 16:47:08.738139] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:39:08.177 [2024-09-29 16:47:08.738492] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:08.435 16:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:08.435 16:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:39:08.435 16:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:39:08.435 16:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:08.435 16:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:08.435 16:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:08.435 16:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:08.701 [2024-09-29 16:47:09.225871] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:08.701 16:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:09.346 16:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:39:09.346 16:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:09.604 16:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:39:09.604 16:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:39:09.862 16:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:39:10.120 16:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=24329c4a-3d7c-4c3c-a06b-2d0ddda34ce3 00:39:10.120 16:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 24329c4a-3d7c-4c3c-a06b-2d0ddda34ce3 lvol 20 00:39:10.378 16:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=5bc4d0c5-33c4-4fd9-a7d5-8dbf2af76285 00:39:10.378 16:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:10.636 16:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5bc4d0c5-33c4-4fd9-a7d5-8dbf2af76285 00:39:10.893 16:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:11.151 [2024-09-29 16:47:11.646073] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:11.151 16:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:11.408 
16:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3349770 00:39:11.408 16:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:39:11.408 16:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:39:12.784 16:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 5bc4d0c5-33c4-4fd9-a7d5-8dbf2af76285 MY_SNAPSHOT 00:39:12.784 16:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d3898463-3992-4e66-a037-6713e71931b5 00:39:12.784 16:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 5bc4d0c5-33c4-4fd9-a7d5-8dbf2af76285 30 00:39:13.042 16:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone d3898463-3992-4e66-a037-6713e71931b5 MY_CLONE 00:39:13.608 16:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=2d7a9b61-b8d8-486d-9d25-b9d5caf30b4c 00:39:13.608 16:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 2d7a9b61-b8d8-486d-9d25-b9d5caf30b4c 00:39:14.173 16:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3349770 00:39:22.283 Initializing NVMe Controllers 00:39:22.283 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:39:22.283 
Controller IO queue size 128, less than required. 00:39:22.283 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:22.283 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:39:22.283 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:39:22.283 Initialization complete. Launching workers. 00:39:22.283 ======================================================== 00:39:22.283 Latency(us) 00:39:22.283 Device Information : IOPS MiB/s Average min max 00:39:22.283 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8113.53 31.69 15781.80 347.00 184407.22 00:39:22.283 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8271.02 32.31 15485.07 5660.71 157922.48 00:39:22.283 ======================================================== 00:39:22.283 Total : 16384.55 64.00 15632.01 347.00 184407.22 00:39:22.283 00:39:22.284 16:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:22.284 16:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5bc4d0c5-33c4-4fd9-a7d5-8dbf2af76285 00:39:22.541 16:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 24329c4a-3d7c-4c3c-a06b-2d0ddda34ce3 00:39:22.799 16:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:39:22.799 16:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:39:22.799 16:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:39:22.799 16:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup 00:39:22.799 16:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:39:22.799 16:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:22.799 16:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:39:22.799 16:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:22.799 16:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:22.799 rmmod nvme_tcp 00:39:22.799 rmmod nvme_fabrics 00:39:22.799 rmmod nvme_keyring 00:39:22.799 16:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:22.799 16:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:39:22.799 16:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:39:22.799 16:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 3349216 ']' 00:39:22.799 16:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 3349216 00:39:22.799 16:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 3349216 ']' 00:39:22.799 16:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 3349216 00:39:22.799 16:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:39:22.799 16:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:22.799 16:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 3349216 00:39:22.799 16:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:22.799 16:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:22.799 16:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3349216' 00:39:22.799 killing process with pid 3349216 00:39:22.799 16:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 3349216 00:39:22.799 16:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 3349216 00:39:24.699 16:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:39:24.699 16:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:39:24.699 16:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:39:24.699 16:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:39:24.699 16:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-save 00:39:24.699 16:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:39:24.699 16:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-restore 00:39:24.699 16:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:24.699 16:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:24.699 16:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:24.699 16:47:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:24.699 16:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:26.603 16:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:26.603 00:39:26.603 real 0m21.373s 00:39:26.603 user 0m58.059s 00:39:26.603 sys 0m8.161s 00:39:26.603 16:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:26.603 16:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:26.603 ************************************ 00:39:26.603 END TEST nvmf_lvol 00:39:26.603 ************************************ 00:39:26.603 16:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:39:26.603 16:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:26.603 16:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:26.603 16:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:26.603 ************************************ 00:39:26.603 START TEST nvmf_lvs_grow 00:39:26.603 ************************************ 00:39:26.603 16:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:39:26.603 * Looking for test storage... 
00:39:26.603 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:26.603 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:26.603 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:39:26.603 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:26.603 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:26.603 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:26.603 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:26.603 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:26.603 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:39:26.603 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:39:26.603 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:39:26.603 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:39:26.603 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:39:26.603 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:39:26.603 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:39:26.603 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:26.603 16:47:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:39:26.603 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:39:26.603 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:26.603 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:26.603 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:39:26.603 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:39:26.603 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:26.603 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:39:26.603 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:39:26.603 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:39:26.603 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:39:26.603 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:26.603 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:39:26.603 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:39:26.603 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:26.603 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:26.603 16:47:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:39:26.603 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:26.603 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:26.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:26.603 --rc genhtml_branch_coverage=1 00:39:26.603 --rc genhtml_function_coverage=1 00:39:26.603 --rc genhtml_legend=1 00:39:26.603 --rc geninfo_all_blocks=1 00:39:26.603 --rc geninfo_unexecuted_blocks=1 00:39:26.603 00:39:26.603 ' 00:39:26.603 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:26.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:26.603 --rc genhtml_branch_coverage=1 00:39:26.603 --rc genhtml_function_coverage=1 00:39:26.603 --rc genhtml_legend=1 00:39:26.603 --rc geninfo_all_blocks=1 00:39:26.603 --rc geninfo_unexecuted_blocks=1 00:39:26.603 00:39:26.604 ' 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:26.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:26.604 --rc genhtml_branch_coverage=1 00:39:26.604 --rc genhtml_function_coverage=1 00:39:26.604 --rc genhtml_legend=1 00:39:26.604 --rc geninfo_all_blocks=1 00:39:26.604 --rc geninfo_unexecuted_blocks=1 00:39:26.604 00:39:26.604 ' 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:26.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:26.604 --rc genhtml_branch_coverage=1 00:39:26.604 --rc genhtml_function_coverage=1 00:39:26.604 --rc genhtml_legend=1 00:39:26.604 --rc geninfo_all_blocks=1 00:39:26.604 --rc 
geninfo_unexecuted_blocks=1 00:39:26.604 00:39:26.604 ' 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:26.604 16:47:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:26.604 16:47:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:26.604 16:47:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:39:26.604 16:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:29.137 
16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:29.137 16:47:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:39:29.137 16:47:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:29.137 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:29.137 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:29.137 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:29.137 16:47:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:29.137 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # is_hw=yes 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:29.137 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:29.138 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:29.138 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 
-- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:29.138 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:29.138 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:29.138 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:29.138 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:29.138 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:29.138 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:29.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:29.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:39:29.138 00:39:29.138 --- 10.0.0.2 ping statistics --- 00:39:29.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:29.138 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:39:29.138 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:29.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:29.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:39:29.138 00:39:29.138 --- 10.0.0.1 ping statistics --- 00:39:29.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:29.138 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:39:29.138 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:29.138 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # return 0 00:39:29.138 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:39:29.138 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:29.138 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:39:29.138 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:39:29.138 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:29.138 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:39:29.138 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:39:29.138 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:39:29.138 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:39:29.138 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:29.138 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:29.138 16:47:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=3353155 00:39:29.138 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:39:29.138 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 3353155 00:39:29.138 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 3353155 ']' 00:39:29.138 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:29.138 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:29.138 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:29.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:29.138 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:29.138 16:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:29.138 [2024-09-29 16:47:29.362335] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:29.138 [2024-09-29 16:47:29.364935] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:39:29.138 [2024-09-29 16:47:29.365055] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:29.138 [2024-09-29 16:47:29.506590] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:29.396 [2024-09-29 16:47:29.768198] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:29.396 [2024-09-29 16:47:29.768275] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:29.396 [2024-09-29 16:47:29.768304] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:29.396 [2024-09-29 16:47:29.768327] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:29.396 [2024-09-29 16:47:29.768349] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:29.396 [2024-09-29 16:47:29.768400] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:39:29.654 [2024-09-29 16:47:30.144724] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:29.654 [2024-09-29 16:47:30.145162] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:39:29.912 16:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:29.912 16:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:39:29.912 16:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:39:29.912 16:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:29.912 16:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:29.912 16:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:29.912 16:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:30.171 [2024-09-29 16:47:30.561445] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:30.171 16:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:39:30.171 16:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:39:30.171 16:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:30.171 16:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:30.171 ************************************ 00:39:30.171 START TEST lvs_grow_clean 00:39:30.171 ************************************ 00:39:30.171 16:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:39:30.171 16:47:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:39:30.171 16:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:39:30.171 16:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:39:30.171 16:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:39:30.171 16:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:39:30.171 16:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:39:30.171 16:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:30.171 16:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:30.171 16:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:30.430 16:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:39:30.430 16:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:39:30.688 16:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=5203609e-5b91-4749-b033-7d93f886ae2c 00:39:30.688 16:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5203609e-5b91-4749-b033-7d93f886ae2c 00:39:30.688 16:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:39:30.946 16:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:39:30.946 16:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:39:30.946 16:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5203609e-5b91-4749-b033-7d93f886ae2c lvol 150 00:39:31.512 16:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=a8db39b8-54b2-4a81-834a-453854ee7df2 00:39:31.512 16:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:31.512 16:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:39:31.512 [2024-09-29 16:47:32.045303] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:39:31.512 [2024-09-29 16:47:32.045441] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:39:31.512 true 00:39:31.512 16:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5203609e-5b91-4749-b033-7d93f886ae2c 00:39:31.512 16:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:39:31.771 16:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:39:31.771 16:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:32.337 16:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a8db39b8-54b2-4a81-834a-453854ee7df2 00:39:32.337 16:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:32.596 [2024-09-29 16:47:33.137818] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:32.596 16:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:33.161 16:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3353719 00:39:33.161 16:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:39:33.161 16:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:33.161 16:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3353719 /var/tmp/bdevperf.sock 00:39:33.161 16:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 3353719 ']' 00:39:33.161 16:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:33.161 16:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:33.161 16:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:33.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:39:33.161 16:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:33.161 16:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:39:33.161 [2024-09-29 16:47:33.508351] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:39:33.161 [2024-09-29 16:47:33.508501] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3353719 ] 00:39:33.161 [2024-09-29 16:47:33.636360] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:33.419 [2024-09-29 16:47:33.880974] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:39:33.986 16:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:33.986 16:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:39:33.986 16:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:39:34.553 Nvme0n1 00:39:34.553 16:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:39:34.553 [ 00:39:34.553 { 00:39:34.553 "name": "Nvme0n1", 00:39:34.553 "aliases": [ 00:39:34.553 "a8db39b8-54b2-4a81-834a-453854ee7df2" 00:39:34.553 ], 00:39:34.553 "product_name": "NVMe disk", 00:39:34.553 
"block_size": 4096, 00:39:34.553 "num_blocks": 38912, 00:39:34.553 "uuid": "a8db39b8-54b2-4a81-834a-453854ee7df2", 00:39:34.553 "numa_id": 0, 00:39:34.553 "assigned_rate_limits": { 00:39:34.553 "rw_ios_per_sec": 0, 00:39:34.553 "rw_mbytes_per_sec": 0, 00:39:34.553 "r_mbytes_per_sec": 0, 00:39:34.553 "w_mbytes_per_sec": 0 00:39:34.553 }, 00:39:34.553 "claimed": false, 00:39:34.553 "zoned": false, 00:39:34.553 "supported_io_types": { 00:39:34.553 "read": true, 00:39:34.553 "write": true, 00:39:34.553 "unmap": true, 00:39:34.553 "flush": true, 00:39:34.553 "reset": true, 00:39:34.553 "nvme_admin": true, 00:39:34.553 "nvme_io": true, 00:39:34.553 "nvme_io_md": false, 00:39:34.553 "write_zeroes": true, 00:39:34.553 "zcopy": false, 00:39:34.553 "get_zone_info": false, 00:39:34.553 "zone_management": false, 00:39:34.553 "zone_append": false, 00:39:34.553 "compare": true, 00:39:34.553 "compare_and_write": true, 00:39:34.553 "abort": true, 00:39:34.553 "seek_hole": false, 00:39:34.553 "seek_data": false, 00:39:34.553 "copy": true, 00:39:34.553 "nvme_iov_md": false 00:39:34.553 }, 00:39:34.553 "memory_domains": [ 00:39:34.553 { 00:39:34.553 "dma_device_id": "system", 00:39:34.553 "dma_device_type": 1 00:39:34.553 } 00:39:34.553 ], 00:39:34.553 "driver_specific": { 00:39:34.553 "nvme": [ 00:39:34.553 { 00:39:34.553 "trid": { 00:39:34.553 "trtype": "TCP", 00:39:34.553 "adrfam": "IPv4", 00:39:34.553 "traddr": "10.0.0.2", 00:39:34.553 "trsvcid": "4420", 00:39:34.553 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:39:34.553 }, 00:39:34.553 "ctrlr_data": { 00:39:34.553 "cntlid": 1, 00:39:34.553 "vendor_id": "0x8086", 00:39:34.553 "model_number": "SPDK bdev Controller", 00:39:34.553 "serial_number": "SPDK0", 00:39:34.553 "firmware_revision": "25.01", 00:39:34.554 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:34.554 "oacs": { 00:39:34.554 "security": 0, 00:39:34.554 "format": 0, 00:39:34.554 "firmware": 0, 00:39:34.554 "ns_manage": 0 00:39:34.554 }, 00:39:34.554 "multi_ctrlr": true, 
00:39:34.554 "ana_reporting": false 00:39:34.554 }, 00:39:34.554 "vs": { 00:39:34.554 "nvme_version": "1.3" 00:39:34.554 }, 00:39:34.554 "ns_data": { 00:39:34.554 "id": 1, 00:39:34.554 "can_share": true 00:39:34.554 } 00:39:34.554 } 00:39:34.554 ], 00:39:34.554 "mp_policy": "active_passive" 00:39:34.554 } 00:39:34.554 } 00:39:34.554 ] 00:39:34.554 16:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3353859 00:39:34.554 16:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:39:34.554 16:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:34.812 Running I/O for 10 seconds... 00:39:35.746 Latency(us) 00:39:35.746 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:35.746 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:35.746 Nvme0n1 : 1.00 10509.00 41.05 0.00 0.00 0.00 0.00 0.00 00:39:35.746 =================================================================================================================== 00:39:35.746 Total : 10509.00 41.05 0.00 0.00 0.00 0.00 0.00 00:39:35.746 00:39:36.681 16:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5203609e-5b91-4749-b033-7d93f886ae2c 00:39:36.681 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:36.681 Nvme0n1 : 2.00 10572.50 41.30 0.00 0.00 0.00 0.00 0.00 00:39:36.681 =================================================================================================================== 00:39:36.681 Total : 10572.50 41.30 0.00 0.00 0.00 0.00 0.00 
00:39:36.681 00:39:36.939 true 00:39:36.939 16:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5203609e-5b91-4749-b033-7d93f886ae2c 00:39:36.939 16:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:39:37.197 16:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:39:37.197 16:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:39:37.197 16:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3353859 00:39:37.764 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:37.764 Nvme0n1 : 3.00 10607.33 41.43 0.00 0.00 0.00 0.00 0.00 00:39:37.764 =================================================================================================================== 00:39:37.764 Total : 10607.33 41.43 0.00 0.00 0.00 0.00 0.00 00:39:37.764 00:39:38.697 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:38.697 Nvme0n1 : 4.00 10659.25 41.64 0.00 0.00 0.00 0.00 0.00 00:39:38.697 =================================================================================================================== 00:39:38.697 Total : 10659.25 41.64 0.00 0.00 0.00 0.00 0.00 00:39:38.697 00:39:40.070 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:40.070 Nvme0n1 : 5.00 10719.00 41.87 0.00 0.00 0.00 0.00 0.00 00:39:40.070 =================================================================================================================== 00:39:40.070 Total : 10719.00 41.87 0.00 0.00 0.00 0.00 0.00 00:39:40.070 00:39:41.006 Job: Nvme0n1 (Core 
Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:41.006 Nvme0n1 : 6.00 10817.67 42.26 0.00 0.00 0.00 0.00 0.00 00:39:41.006 =================================================================================================================== 00:39:41.006 Total : 10817.67 42.26 0.00 0.00 0.00 0.00 0.00 00:39:41.006 00:39:41.941 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:41.941 Nvme0n1 : 7.00 10826.43 42.29 0.00 0.00 0.00 0.00 0.00 00:39:41.941 =================================================================================================================== 00:39:41.941 Total : 10826.43 42.29 0.00 0.00 0.00 0.00 0.00 00:39:41.941 00:39:42.875 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:42.875 Nvme0n1 : 8.00 10841.12 42.35 0.00 0.00 0.00 0.00 0.00 00:39:42.875 =================================================================================================================== 00:39:42.875 Total : 10841.12 42.35 0.00 0.00 0.00 0.00 0.00 00:39:42.875 00:39:43.809 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:43.809 Nvme0n1 : 9.00 10852.56 42.39 0.00 0.00 0.00 0.00 0.00 00:39:43.809 =================================================================================================================== 00:39:43.809 Total : 10852.56 42.39 0.00 0.00 0.00 0.00 0.00 00:39:43.809 00:39:44.740 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:44.740 Nvme0n1 : 10.00 10865.80 42.44 0.00 0.00 0.00 0.00 0.00 00:39:44.740 =================================================================================================================== 00:39:44.740 Total : 10865.80 42.44 0.00 0.00 0.00 0.00 0.00 00:39:44.740 00:39:44.740 00:39:44.740 Latency(us) 00:39:44.740 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:44.740 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:44.740 
Nvme0n1 : 10.01 10871.00 42.46 0.00 0.00 11767.76 6747.78 26796.94 00:39:44.740 =================================================================================================================== 00:39:44.740 Total : 10871.00 42.46 0.00 0.00 11767.76 6747.78 26796.94 00:39:44.740 { 00:39:44.740 "results": [ 00:39:44.740 { 00:39:44.740 "job": "Nvme0n1", 00:39:44.740 "core_mask": "0x2", 00:39:44.740 "workload": "randwrite", 00:39:44.740 "status": "finished", 00:39:44.740 "queue_depth": 128, 00:39:44.740 "io_size": 4096, 00:39:44.740 "runtime": 10.006994, 00:39:44.740 "iops": 10870.99682482072, 00:39:44.740 "mibps": 42.46483134695594, 00:39:44.740 "io_failed": 0, 00:39:44.740 "io_timeout": 0, 00:39:44.740 "avg_latency_us": 11767.763478715604, 00:39:44.740 "min_latency_us": 6747.780740740741, 00:39:44.740 "max_latency_us": 26796.942222222224 00:39:44.740 } 00:39:44.740 ], 00:39:44.740 "core_count": 1 00:39:44.740 } 00:39:44.740 16:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3353719 00:39:44.740 16:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 3353719 ']' 00:39:44.740 16:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 3353719 00:39:44.740 16:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:39:44.740 16:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:44.740 16:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3353719 00:39:44.740 16:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:39:44.740 16:47:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:39:44.740 16:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3353719' 00:39:44.740 killing process with pid 3353719 00:39:44.740 16:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 3353719 00:39:44.740 Received shutdown signal, test time was about 10.000000 seconds 00:39:44.740 00:39:44.740 Latency(us) 00:39:44.740 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:44.740 =================================================================================================================== 00:39:44.740 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:44.740 16:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 3353719 00:39:46.156 16:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:46.156 16:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:46.441 16:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5203609e-5b91-4749-b033-7d93f886ae2c 00:39:46.441 16:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:39:46.699 16:47:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:39:46.699 16:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:39:46.700 16:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:46.957 [2024-09-29 16:47:47.465326] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:39:46.957 16:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5203609e-5b91-4749-b033-7d93f886ae2c 00:39:46.957 16:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:39:46.957 16:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5203609e-5b91-4749-b033-7d93f886ae2c 00:39:46.957 16:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:46.957 16:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:46.957 16:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:46.957 16:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- 
# case "$(type -t "$arg")" in 00:39:46.957 16:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:46.957 16:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:46.957 16:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:46.957 16:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:39:46.957 16:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5203609e-5b91-4749-b033-7d93f886ae2c 00:39:47.215 request: 00:39:47.215 { 00:39:47.215 "uuid": "5203609e-5b91-4749-b033-7d93f886ae2c", 00:39:47.215 "method": "bdev_lvol_get_lvstores", 00:39:47.215 "req_id": 1 00:39:47.215 } 00:39:47.215 Got JSON-RPC error response 00:39:47.215 response: 00:39:47.215 { 00:39:47.215 "code": -19, 00:39:47.215 "message": "No such device" 00:39:47.215 } 00:39:47.473 16:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:39:47.473 16:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:47.473 16:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:47.473 16:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:47.473 16:47:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:47.731 aio_bdev 00:39:47.731 16:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a8db39b8-54b2-4a81-834a-453854ee7df2 00:39:47.731 16:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=a8db39b8-54b2-4a81-834a-453854ee7df2 00:39:47.731 16:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:39:47.731 16:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:39:47.731 16:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:39:47.731 16:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:39:47.731 16:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:39:47.990 16:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a8db39b8-54b2-4a81-834a-453854ee7df2 -t 2000 00:39:48.248 [ 00:39:48.248 { 00:39:48.248 "name": "a8db39b8-54b2-4a81-834a-453854ee7df2", 00:39:48.248 "aliases": [ 00:39:48.248 "lvs/lvol" 00:39:48.248 ], 00:39:48.248 "product_name": "Logical Volume", 00:39:48.248 "block_size": 4096, 00:39:48.248 "num_blocks": 38912, 00:39:48.248 
"uuid": "a8db39b8-54b2-4a81-834a-453854ee7df2", 00:39:48.248 "assigned_rate_limits": { 00:39:48.248 "rw_ios_per_sec": 0, 00:39:48.248 "rw_mbytes_per_sec": 0, 00:39:48.248 "r_mbytes_per_sec": 0, 00:39:48.248 "w_mbytes_per_sec": 0 00:39:48.248 }, 00:39:48.248 "claimed": false, 00:39:48.248 "zoned": false, 00:39:48.248 "supported_io_types": { 00:39:48.248 "read": true, 00:39:48.248 "write": true, 00:39:48.248 "unmap": true, 00:39:48.248 "flush": false, 00:39:48.248 "reset": true, 00:39:48.248 "nvme_admin": false, 00:39:48.248 "nvme_io": false, 00:39:48.248 "nvme_io_md": false, 00:39:48.248 "write_zeroes": true, 00:39:48.248 "zcopy": false, 00:39:48.248 "get_zone_info": false, 00:39:48.248 "zone_management": false, 00:39:48.248 "zone_append": false, 00:39:48.248 "compare": false, 00:39:48.248 "compare_and_write": false, 00:39:48.248 "abort": false, 00:39:48.248 "seek_hole": true, 00:39:48.248 "seek_data": true, 00:39:48.248 "copy": false, 00:39:48.248 "nvme_iov_md": false 00:39:48.248 }, 00:39:48.248 "driver_specific": { 00:39:48.248 "lvol": { 00:39:48.248 "lvol_store_uuid": "5203609e-5b91-4749-b033-7d93f886ae2c", 00:39:48.248 "base_bdev": "aio_bdev", 00:39:48.248 "thin_provision": false, 00:39:48.248 "num_allocated_clusters": 38, 00:39:48.248 "snapshot": false, 00:39:48.248 "clone": false, 00:39:48.248 "esnap_clone": false 00:39:48.248 } 00:39:48.248 } 00:39:48.248 } 00:39:48.248 ] 00:39:48.248 16:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:39:48.248 16:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5203609e-5b91-4749-b033-7d93f886ae2c 00:39:48.248 16:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:39:48.506 16:47:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:39:48.506 16:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5203609e-5b91-4749-b033-7d93f886ae2c 00:39:48.506 16:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:39:48.763 16:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:39:48.763 16:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a8db39b8-54b2-4a81-834a-453854ee7df2 00:39:49.020 16:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5203609e-5b91-4749-b033-7d93f886ae2c 00:39:49.278 16:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:49.537 16:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:49.537 00:39:49.537 real 0m19.441s 00:39:49.537 user 0m19.224s 00:39:49.537 sys 0m1.895s 00:39:49.537 16:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:49.537 16:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # 
set +x 00:39:49.537 ************************************ 00:39:49.537 END TEST lvs_grow_clean 00:39:49.537 ************************************ 00:39:49.537 16:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:39:49.537 16:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:39:49.537 16:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:49.537 16:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:49.796 ************************************ 00:39:49.796 START TEST lvs_grow_dirty 00:39:49.796 ************************************ 00:39:49.796 16:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:39:49.796 16:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:39:49.796 16:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:39:49.796 16:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:39:49.796 16:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:39:49.796 16:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:39:49.796 16:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:39:49.796 16:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:49.796 16:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:49.796 16:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:50.054 16:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:39:50.054 16:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:39:50.312 16:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=fc089b89-815a-471c-a1de-6912074ef088 00:39:50.312 16:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc089b89-815a-471c-a1de-6912074ef088 00:39:50.312 16:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:39:50.570 16:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:39:50.570 16:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:39:50.570 16:47:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fc089b89-815a-471c-a1de-6912074ef088 lvol 150 00:39:50.829 16:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=b7d76e60-0e79-40a6-8ae3-423f2e7e2dc8 00:39:50.829 16:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:50.829 16:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:39:51.087 [2024-09-29 16:47:51.521271] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:39:51.087 [2024-09-29 16:47:51.521400] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:39:51.087 true 00:39:51.087 16:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc089b89-815a-471c-a1de-6912074ef088 00:39:51.087 16:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:39:51.345 16:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:39:51.345 16:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:51.603 16:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b7d76e60-0e79-40a6-8ae3-423f2e7e2dc8 00:39:51.862 16:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:52.121 [2024-09-29 16:47:52.609588] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:52.121 16:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:52.380 16:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3356009 00:39:52.380 16:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:39:52.380 16:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:52.380 16:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3356009 /var/tmp/bdevperf.sock 00:39:52.380 16:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3356009 ']' 00:39:52.380 16:47:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:52.380 16:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:52.380 16:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:52.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:52.380 16:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:52.380 16:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:52.639 [2024-09-29 16:47:52.976563] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:39:52.639 [2024-09-29 16:47:52.976713] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3356009 ] 00:39:52.639 [2024-09-29 16:47:53.104737] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:52.898 [2024-09-29 16:47:53.347559] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:39:53.464 16:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:53.464 16:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:39:53.464 16:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:39:54.030 Nvme0n1 00:39:54.030 16:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:39:54.288 [ 00:39:54.288 { 00:39:54.288 "name": "Nvme0n1", 00:39:54.288 "aliases": [ 00:39:54.288 "b7d76e60-0e79-40a6-8ae3-423f2e7e2dc8" 00:39:54.288 ], 00:39:54.288 "product_name": "NVMe disk", 00:39:54.288 "block_size": 4096, 00:39:54.288 "num_blocks": 38912, 00:39:54.288 "uuid": "b7d76e60-0e79-40a6-8ae3-423f2e7e2dc8", 00:39:54.288 "numa_id": 0, 00:39:54.288 "assigned_rate_limits": { 00:39:54.288 "rw_ios_per_sec": 0, 00:39:54.288 "rw_mbytes_per_sec": 0, 00:39:54.288 "r_mbytes_per_sec": 0, 00:39:54.288 "w_mbytes_per_sec": 0 00:39:54.288 }, 00:39:54.288 "claimed": false, 00:39:54.288 "zoned": false, 
00:39:54.288 "supported_io_types": { 00:39:54.288 "read": true, 00:39:54.288 "write": true, 00:39:54.288 "unmap": true, 00:39:54.288 "flush": true, 00:39:54.288 "reset": true, 00:39:54.288 "nvme_admin": true, 00:39:54.288 "nvme_io": true, 00:39:54.288 "nvme_io_md": false, 00:39:54.288 "write_zeroes": true, 00:39:54.288 "zcopy": false, 00:39:54.288 "get_zone_info": false, 00:39:54.288 "zone_management": false, 00:39:54.288 "zone_append": false, 00:39:54.288 "compare": true, 00:39:54.288 "compare_and_write": true, 00:39:54.288 "abort": true, 00:39:54.288 "seek_hole": false, 00:39:54.288 "seek_data": false, 00:39:54.288 "copy": true, 00:39:54.288 "nvme_iov_md": false 00:39:54.288 }, 00:39:54.288 "memory_domains": [ 00:39:54.288 { 00:39:54.288 "dma_device_id": "system", 00:39:54.288 "dma_device_type": 1 00:39:54.288 } 00:39:54.288 ], 00:39:54.288 "driver_specific": { 00:39:54.288 "nvme": [ 00:39:54.288 { 00:39:54.288 "trid": { 00:39:54.288 "trtype": "TCP", 00:39:54.288 "adrfam": "IPv4", 00:39:54.288 "traddr": "10.0.0.2", 00:39:54.288 "trsvcid": "4420", 00:39:54.288 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:39:54.288 }, 00:39:54.288 "ctrlr_data": { 00:39:54.288 "cntlid": 1, 00:39:54.288 "vendor_id": "0x8086", 00:39:54.288 "model_number": "SPDK bdev Controller", 00:39:54.288 "serial_number": "SPDK0", 00:39:54.288 "firmware_revision": "25.01", 00:39:54.288 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:54.288 "oacs": { 00:39:54.288 "security": 0, 00:39:54.288 "format": 0, 00:39:54.289 "firmware": 0, 00:39:54.289 "ns_manage": 0 00:39:54.289 }, 00:39:54.289 "multi_ctrlr": true, 00:39:54.289 "ana_reporting": false 00:39:54.289 }, 00:39:54.289 "vs": { 00:39:54.289 "nvme_version": "1.3" 00:39:54.289 }, 00:39:54.289 "ns_data": { 00:39:54.289 "id": 1, 00:39:54.289 "can_share": true 00:39:54.289 } 00:39:54.289 } 00:39:54.289 ], 00:39:54.289 "mp_policy": "active_passive" 00:39:54.289 } 00:39:54.289 } 00:39:54.289 ] 00:39:54.289 16:47:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3356251 00:39:54.289 16:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:54.289 16:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:39:54.547 Running I/O for 10 seconds... 00:39:55.483 Latency(us) 00:39:55.483 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:55.483 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:55.483 Nvme0n1 : 1.00 11082.00 43.29 0.00 0.00 0.00 0.00 0.00 00:39:55.483 =================================================================================================================== 00:39:55.483 Total : 11082.00 43.29 0.00 0.00 0.00 0.00 0.00 00:39:55.483 00:39:56.419 16:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fc089b89-815a-471c-a1de-6912074ef088 00:39:56.419 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:56.419 Nvme0n1 : 2.00 10922.50 42.67 0.00 0.00 0.00 0.00 0.00 00:39:56.419 =================================================================================================================== 00:39:56.419 Total : 10922.50 42.67 0.00 0.00 0.00 0.00 0.00 00:39:56.419 00:39:56.677 true 00:39:56.677 16:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc089b89-815a-471c-a1de-6912074ef088 00:39:56.677 16:47:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:39:56.936 16:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:39:56.936 16:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:39:56.936 16:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3356251 00:39:57.502 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:57.502 Nvme0n1 : 3.00 10868.33 42.45 0.00 0.00 0.00 0.00 0.00 00:39:57.502 =================================================================================================================== 00:39:57.502 Total : 10868.33 42.45 0.00 0.00 0.00 0.00 0.00 00:39:57.502 00:39:58.438 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:58.438 Nvme0n1 : 4.00 10872.25 42.47 0.00 0.00 0.00 0.00 0.00 00:39:58.438 =================================================================================================================== 00:39:58.438 Total : 10872.25 42.47 0.00 0.00 0.00 0.00 0.00 00:39:58.438 00:39:59.811 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:59.811 Nvme0n1 : 5.00 10863.40 42.44 0.00 0.00 0.00 0.00 0.00 00:39:59.811 =================================================================================================================== 00:39:59.811 Total : 10863.40 42.44 0.00 0.00 0.00 0.00 0.00 00:39:59.811 00:40:00.746 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:00.746 Nvme0n1 : 6.00 10835.33 42.33 0.00 0.00 0.00 0.00 0.00 00:40:00.746 =================================================================================================================== 00:40:00.746 Total : 10835.33 42.33 0.00 0.00 0.00 
0.00 0.00 00:40:00.746 00:40:01.680 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:01.680 Nvme0n1 : 7.00 10833.86 42.32 0.00 0.00 0.00 0.00 0.00 00:40:01.680 =================================================================================================================== 00:40:01.680 Total : 10833.86 42.32 0.00 0.00 0.00 0.00 0.00 00:40:01.680 00:40:02.615 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:02.615 Nvme0n1 : 8.00 10856.75 42.41 0.00 0.00 0.00 0.00 0.00 00:40:02.615 =================================================================================================================== 00:40:02.615 Total : 10856.75 42.41 0.00 0.00 0.00 0.00 0.00 00:40:02.615 00:40:03.550 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:03.550 Nvme0n1 : 9.00 10867.67 42.45 0.00 0.00 0.00 0.00 0.00 00:40:03.550 =================================================================================================================== 00:40:03.550 Total : 10867.67 42.45 0.00 0.00 0.00 0.00 0.00 00:40:03.550 00:40:04.486 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:04.486 Nvme0n1 : 10.00 10936.90 42.72 0.00 0.00 0.00 0.00 0.00 00:40:04.486 =================================================================================================================== 00:40:04.486 Total : 10936.90 42.72 0.00 0.00 0.00 0.00 0.00 00:40:04.486 00:40:04.486 00:40:04.486 Latency(us) 00:40:04.486 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:04.486 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:04.486 Nvme0n1 : 10.01 10937.70 42.73 0.00 0.00 11695.47 6747.78 24855.13 00:40:04.486 =================================================================================================================== 00:40:04.486 Total : 10937.70 42.73 0.00 0.00 11695.47 6747.78 24855.13 00:40:04.486 { 00:40:04.486 "results": 
[ 00:40:04.486 { 00:40:04.486 "job": "Nvme0n1", 00:40:04.486 "core_mask": "0x2", 00:40:04.486 "workload": "randwrite", 00:40:04.486 "status": "finished", 00:40:04.486 "queue_depth": 128, 00:40:04.486 "io_size": 4096, 00:40:04.486 "runtime": 10.005119, 00:40:04.486 "iops": 10937.700990862777, 00:40:04.486 "mibps": 42.72539449555772, 00:40:04.486 "io_failed": 0, 00:40:04.486 "io_timeout": 0, 00:40:04.486 "avg_latency_us": 11695.468370371047, 00:40:04.486 "min_latency_us": 6747.780740740741, 00:40:04.486 "max_latency_us": 24855.134814814814 00:40:04.486 } 00:40:04.486 ], 00:40:04.486 "core_count": 1 00:40:04.486 } 00:40:04.486 16:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3356009 00:40:04.486 16:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 3356009 ']' 00:40:04.486 16:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 3356009 00:40:04.486 16:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:40:04.486 16:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:04.486 16:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3356009 00:40:04.486 16:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:40:04.486 16:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:40:04.486 16:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3356009' 
00:40:04.486 killing process with pid 3356009 00:40:04.486 16:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 3356009 00:40:04.486 Received shutdown signal, test time was about 10.000000 seconds 00:40:04.486 00:40:04.486 Latency(us) 00:40:04.486 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:04.486 =================================================================================================================== 00:40:04.486 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:04.486 16:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 3356009 00:40:05.862 16:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:05.862 16:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:06.429 16:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc089b89-815a-471c-a1de-6912074ef088 00:40:06.429 16:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:40:06.429 16:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:40:06.429 16:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:40:06.429 16:48:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3353155 00:40:06.429 16:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3353155 00:40:06.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3353155 Killed "${NVMF_APP[@]}" "$@" 00:40:06.687 16:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:40:06.687 16:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:40:06.687 16:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:40:06.687 16:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:06.687 16:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:06.687 16:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=3357712 00:40:06.687 16:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:40:06.687 16:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 3357712 00:40:06.687 16:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3357712 ']' 00:40:06.687 16:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:06.687 16:48:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:06.687 16:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:06.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:06.687 16:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:06.687 16:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:06.687 [2024-09-29 16:48:07.096288] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:06.687 [2024-09-29 16:48:07.098888] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:40:06.687 [2024-09-29 16:48:07.098981] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:06.687 [2024-09-29 16:48:07.239172] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:06.943 [2024-09-29 16:48:07.474375] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:06.943 [2024-09-29 16:48:07.474448] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:06.943 [2024-09-29 16:48:07.474489] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:06.943 [2024-09-29 16:48:07.474507] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:40:06.943 [2024-09-29 16:48:07.474525] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:06.944 [2024-09-29 16:48:07.474575] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:40:07.507 [2024-09-29 16:48:07.827806] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:07.507 [2024-09-29 16:48:07.828231] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:07.763 16:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:07.763 16:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:40:07.763 16:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:40:07.763 16:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:07.763 16:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:07.763 16:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:07.763 16:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:08.021 [2024-09-29 16:48:08.404876] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:40:08.021 [2024-09-29 16:48:08.405123] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 
00:40:08.021 [2024-09-29 16:48:08.405208] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:40:08.021 16:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:40:08.021 16:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev b7d76e60-0e79-40a6-8ae3-423f2e7e2dc8 00:40:08.021 16:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=b7d76e60-0e79-40a6-8ae3-423f2e7e2dc8 00:40:08.021 16:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:40:08.021 16:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:40:08.021 16:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:40:08.021 16:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:40:08.021 16:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:08.278 16:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b7d76e60-0e79-40a6-8ae3-423f2e7e2dc8 -t 2000 00:40:08.536 [ 00:40:08.536 { 00:40:08.536 "name": "b7d76e60-0e79-40a6-8ae3-423f2e7e2dc8", 00:40:08.536 "aliases": [ 00:40:08.536 "lvs/lvol" 00:40:08.536 ], 00:40:08.536 "product_name": "Logical Volume", 00:40:08.536 "block_size": 4096, 00:40:08.536 "num_blocks": 38912, 00:40:08.536 "uuid": "b7d76e60-0e79-40a6-8ae3-423f2e7e2dc8", 
00:40:08.536 "assigned_rate_limits": { 00:40:08.536 "rw_ios_per_sec": 0, 00:40:08.536 "rw_mbytes_per_sec": 0, 00:40:08.536 "r_mbytes_per_sec": 0, 00:40:08.536 "w_mbytes_per_sec": 0 00:40:08.536 }, 00:40:08.536 "claimed": false, 00:40:08.536 "zoned": false, 00:40:08.536 "supported_io_types": { 00:40:08.536 "read": true, 00:40:08.536 "write": true, 00:40:08.536 "unmap": true, 00:40:08.536 "flush": false, 00:40:08.536 "reset": true, 00:40:08.536 "nvme_admin": false, 00:40:08.536 "nvme_io": false, 00:40:08.536 "nvme_io_md": false, 00:40:08.536 "write_zeroes": true, 00:40:08.536 "zcopy": false, 00:40:08.536 "get_zone_info": false, 00:40:08.536 "zone_management": false, 00:40:08.536 "zone_append": false, 00:40:08.536 "compare": false, 00:40:08.536 "compare_and_write": false, 00:40:08.536 "abort": false, 00:40:08.536 "seek_hole": true, 00:40:08.536 "seek_data": true, 00:40:08.536 "copy": false, 00:40:08.536 "nvme_iov_md": false 00:40:08.536 }, 00:40:08.536 "driver_specific": { 00:40:08.536 "lvol": { 00:40:08.536 "lvol_store_uuid": "fc089b89-815a-471c-a1de-6912074ef088", 00:40:08.536 "base_bdev": "aio_bdev", 00:40:08.536 "thin_provision": false, 00:40:08.536 "num_allocated_clusters": 38, 00:40:08.536 "snapshot": false, 00:40:08.536 "clone": false, 00:40:08.536 "esnap_clone": false 00:40:08.536 } 00:40:08.536 } 00:40:08.536 } 00:40:08.536 ] 00:40:08.536 16:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:40:08.536 16:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc089b89-815a-471c-a1de-6912074ef088 00:40:08.536 16:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:40:08.793 16:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:40:08.794 16:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc089b89-815a-471c-a1de-6912074ef088 00:40:08.794 16:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:40:09.051 16:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:40:09.051 16:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:09.309 [2024-09-29 16:48:09.835484] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:40:09.309 16:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc089b89-815a-471c-a1de-6912074ef088 00:40:09.309 16:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:40:09.309 16:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc089b89-815a-471c-a1de-6912074ef088 00:40:09.309 16:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:09.309 16:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:40:09.309 16:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:09.309 16:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:09.309 16:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:09.309 16:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:09.309 16:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:09.309 16:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:40:09.309 16:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc089b89-815a-471c-a1de-6912074ef088 00:40:09.878 request: 00:40:09.878 { 00:40:09.878 "uuid": "fc089b89-815a-471c-a1de-6912074ef088", 00:40:09.878 "method": "bdev_lvol_get_lvstores", 00:40:09.878 "req_id": 1 00:40:09.878 } 00:40:09.878 Got JSON-RPC error response 00:40:09.878 response: 00:40:09.878 { 00:40:09.878 "code": -19, 00:40:09.878 "message": "No such device" 00:40:09.878 } 00:40:09.878 16:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:40:09.878 16:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:40:09.878 16:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:40:09.878 16:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:40:09.878 16:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:09.878 aio_bdev 00:40:09.878 16:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b7d76e60-0e79-40a6-8ae3-423f2e7e2dc8 00:40:09.878 16:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=b7d76e60-0e79-40a6-8ae3-423f2e7e2dc8 00:40:09.878 16:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:40:09.878 16:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:40:09.878 16:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:40:09.878 16:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:40:09.878 16:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:10.444 16:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b7d76e60-0e79-40a6-8ae3-423f2e7e2dc8 -t 2000 00:40:10.444 [ 00:40:10.444 { 00:40:10.444 "name": "b7d76e60-0e79-40a6-8ae3-423f2e7e2dc8", 00:40:10.444 "aliases": [ 00:40:10.444 "lvs/lvol" 00:40:10.444 ], 00:40:10.444 "product_name": "Logical Volume", 00:40:10.444 "block_size": 4096, 00:40:10.444 "num_blocks": 38912, 00:40:10.444 "uuid": "b7d76e60-0e79-40a6-8ae3-423f2e7e2dc8", 00:40:10.444 "assigned_rate_limits": { 00:40:10.444 "rw_ios_per_sec": 0, 00:40:10.444 "rw_mbytes_per_sec": 0, 00:40:10.444 "r_mbytes_per_sec": 0, 00:40:10.444 "w_mbytes_per_sec": 0 00:40:10.444 }, 00:40:10.444 "claimed": false, 00:40:10.444 "zoned": false, 00:40:10.444 "supported_io_types": { 00:40:10.444 "read": true, 00:40:10.444 "write": true, 00:40:10.444 "unmap": true, 00:40:10.444 "flush": false, 00:40:10.444 "reset": true, 00:40:10.444 "nvme_admin": false, 00:40:10.444 "nvme_io": false, 00:40:10.444 "nvme_io_md": false, 00:40:10.444 "write_zeroes": true, 00:40:10.444 "zcopy": false, 00:40:10.444 "get_zone_info": false, 00:40:10.444 "zone_management": false, 00:40:10.444 "zone_append": false, 00:40:10.444 "compare": false, 00:40:10.444 "compare_and_write": false, 00:40:10.444 "abort": false, 00:40:10.444 "seek_hole": true, 00:40:10.444 "seek_data": true, 00:40:10.444 "copy": false, 00:40:10.444 "nvme_iov_md": false 00:40:10.444 }, 00:40:10.444 "driver_specific": { 00:40:10.444 "lvol": { 00:40:10.444 "lvol_store_uuid": "fc089b89-815a-471c-a1de-6912074ef088", 00:40:10.444 "base_bdev": "aio_bdev", 00:40:10.444 "thin_provision": false, 00:40:10.444 "num_allocated_clusters": 38, 00:40:10.444 "snapshot": false, 00:40:10.444 "clone": false, 00:40:10.444 "esnap_clone": false 00:40:10.444 } 00:40:10.444 } 00:40:10.444 } 00:40:10.444 ] 00:40:10.444 16:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:40:10.444 16:48:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc089b89-815a-471c-a1de-6912074ef088 00:40:10.444 16:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:40:10.702 16:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:40:10.702 16:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:40:10.702 16:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc089b89-815a-471c-a1de-6912074ef088 00:40:11.267 16:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:40:11.267 16:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b7d76e60-0e79-40a6-8ae3-423f2e7e2dc8 00:40:11.267 16:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fc089b89-815a-471c-a1de-6912074ef088 00:40:11.832 16:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:11.832 16:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:12.090 00:40:12.090 real 0m22.303s 00:40:12.090 user 0m39.885s 00:40:12.090 sys 0m4.656s 00:40:12.090 16:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:12.090 16:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:12.090 ************************************ 00:40:12.090 END TEST lvs_grow_dirty 00:40:12.090 ************************************ 00:40:12.090 16:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:40:12.090 16:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:40:12.090 16:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:40:12.090 16:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:40:12.090 16:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:40:12.090 16:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:40:12.090 16:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:40:12.090 16:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:40:12.090 16:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:40:12.090 nvmf_trace.0 00:40:12.090 16:48:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:40:12.090 16:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:40:12.090 16:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:40:12.090 16:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:40:12.090 16:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:12.090 16:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:40:12.090 16:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:12.090 16:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:12.090 rmmod nvme_tcp 00:40:12.090 rmmod nvme_fabrics 00:40:12.090 rmmod nvme_keyring 00:40:12.090 16:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:12.090 16:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:40:12.090 16:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:40:12.090 16:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 3357712 ']' 00:40:12.090 16:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 3357712 00:40:12.090 16:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 3357712 ']' 00:40:12.090 16:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 3357712 00:40:12.090 16:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
common/autotest_common.sh@955 -- # uname 00:40:12.090 16:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:12.090 16:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3357712 00:40:12.090 16:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:12.090 16:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:12.090 16:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3357712' 00:40:12.090 killing process with pid 3357712 00:40:12.090 16:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 3357712 00:40:12.090 16:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 3357712 00:40:13.464 16:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:40:13.464 16:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:40:13.464 16:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:40:13.464 16:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:40:13.464 16:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-save 00:40:13.464 16:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:40:13.464 16:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-restore 00:40:13.464 16:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # 
[[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:13.464 16:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:13.464 16:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:13.464 16:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:13.464 16:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:15.995 16:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:15.995 00:40:15.995 real 0m48.963s 00:40:15.995 user 1m2.523s 00:40:15.995 sys 0m8.669s 00:40:15.995 16:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:15.995 16:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:15.995 ************************************ 00:40:15.995 END TEST nvmf_lvs_grow 00:40:15.995 ************************************ 00:40:15.995 16:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:40:15.995 16:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:40:15.995 16:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:15.995 16:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:15.995 ************************************ 00:40:15.995 START TEST nvmf_bdev_io_wait 00:40:15.995 ************************************ 00:40:15.995 16:48:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:40:15.995 * Looking for test storage... 00:40:15.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@340 -- # ver1_l=2 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@366 -- # ver2[v]=2 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:15.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:15.995 --rc genhtml_branch_coverage=1 00:40:15.995 --rc genhtml_function_coverage=1 00:40:15.995 --rc genhtml_legend=1 00:40:15.995 --rc geninfo_all_blocks=1 00:40:15.995 --rc geninfo_unexecuted_blocks=1 00:40:15.995 00:40:15.995 ' 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:15.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:15.995 --rc genhtml_branch_coverage=1 00:40:15.995 --rc genhtml_function_coverage=1 00:40:15.995 --rc genhtml_legend=1 00:40:15.995 --rc geninfo_all_blocks=1 00:40:15.995 --rc geninfo_unexecuted_blocks=1 00:40:15.995 00:40:15.995 ' 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:15.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:15.995 --rc genhtml_branch_coverage=1 00:40:15.995 --rc genhtml_function_coverage=1 00:40:15.995 --rc genhtml_legend=1 00:40:15.995 --rc geninfo_all_blocks=1 00:40:15.995 --rc geninfo_unexecuted_blocks=1 00:40:15.995 00:40:15.995 ' 00:40:15.995 16:48:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:15.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:15.995 --rc genhtml_branch_coverage=1 00:40:15.995 --rc genhtml_function_coverage=1 00:40:15.995 --rc genhtml_legend=1 00:40:15.995 --rc geninfo_all_blocks=1 00:40:15.995 --rc geninfo_unexecuted_blocks=1 00:40:15.995 00:40:15.995 ' 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:15.995 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:15.996 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:15.996 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:15.996 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:15.996 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:15.996 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:40:15.996 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:15.996 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:15.996 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:15.996 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:15.996 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:15.996 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:15.996 16:48:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:40:15.996 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:15.996 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:40:15.996 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:15.996 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:15.996 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:15.996 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:15.996 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:15.996 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:15.996 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:15.996 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:15.996 16:48:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:15.996 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:15.996 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:15.996 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:15.996 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:40:15.996 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:40:15.996 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:15.996 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:40:15.996 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:40:15.996 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:40:15.996 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:15.996 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:15.996 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:15.996 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:40:15.996 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:40:15.996 16:48:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:40:15.996 16:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:40:17.904 16:48:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:17.904 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:17.904 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice 
== unknown ]] 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net 
devices under 0000:0a:00.0: cvl_0_0' 00:40:17.904 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:17.904 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:17.904 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # is_hw=yes 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:17.905 16:48:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:17.905 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:40:17.905 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.353 ms 00:40:17.905 00:40:17.905 --- 10.0.0.2 ping statistics --- 00:40:17.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:17.905 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:17.905 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:17.905 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:40:17.905 00:40:17.905 --- 10.0.0.1 ping statistics --- 00:40:17.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:17.905 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # return 0 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:40:17.905 16:48:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=3360999 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 3360999 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 3360999 ']' 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:17.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:17.905 16:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:18.221 [2024-09-29 16:48:18.508818] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:18.221 [2024-09-29 16:48:18.511570] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:40:18.221 [2024-09-29 16:48:18.511685] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:18.221 [2024-09-29 16:48:18.650915] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:18.481 [2024-09-29 16:48:18.907628] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:18.481 [2024-09-29 16:48:18.907703] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:18.481 [2024-09-29 16:48:18.907734] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:18.481 [2024-09-29 16:48:18.907756] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:18.481 [2024-09-29 16:48:18.907777] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:40:18.481 [2024-09-29 16:48:18.907900] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:40:18.481 [2024-09-29 16:48:18.907983] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:40:18.481 [2024-09-29 16:48:18.908031] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:40:18.481 [2024-09-29 16:48:18.908039] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:40:18.481 [2024-09-29 16:48:18.908729] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:19.046 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:19.046 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:40:19.046 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:40:19.046 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:19.046 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:19.046 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:19.046 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:40:19.046 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:19.046 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:19.046 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:19.046 16:48:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:40:19.046 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:19.046 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:19.305 [2024-09-29 16:48:19.751832] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:19.305 [2024-09-29 16:48:19.752998] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:19.305 [2024-09-29 16:48:19.754118] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:19.305 [2024-09-29 16:48:19.755238] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:40:19.305 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:19.305 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:19.305 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:19.305 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:19.305 [2024-09-29 16:48:19.761044] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:19.305 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:19.305 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:19.305 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:19.305 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:19.563 Malloc0 00:40:19.563 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:19.563 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:19.563 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:19.563 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:19.563 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:19.564 16:48:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:19.564 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:19.564 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:19.564 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:19.564 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:19.564 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:19.564 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:19.564 [2024-09-29 16:48:19.893307] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:19.564 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:19.564 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3361207 00:40:19.564 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3361210 00:40:19.564 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:40:19.564 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:40:19.564 16:48:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:40:19.564 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:40:19.564 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:40:19.564 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:40:19.564 { 00:40:19.564 "params": { 00:40:19.564 "name": "Nvme$subsystem", 00:40:19.564 "trtype": "$TEST_TRANSPORT", 00:40:19.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:19.564 "adrfam": "ipv4", 00:40:19.564 "trsvcid": "$NVMF_PORT", 00:40:19.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:19.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:19.564 "hdgst": ${hdgst:-false}, 00:40:19.564 "ddgst": ${ddgst:-false} 00:40:19.564 }, 00:40:19.564 "method": "bdev_nvme_attach_controller" 00:40:19.564 } 00:40:19.564 EOF 00:40:19.564 )") 00:40:19.564 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:40:19.564 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:40:19.564 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3361213 00:40:19.564 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:40:19.564 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:40:19.564 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:40:19.564 16:48:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:40:19.564 { 00:40:19.564 "params": { 00:40:19.564 "name": "Nvme$subsystem", 00:40:19.564 "trtype": "$TEST_TRANSPORT", 00:40:19.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:19.564 "adrfam": "ipv4", 00:40:19.564 "trsvcid": "$NVMF_PORT", 00:40:19.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:19.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:19.564 "hdgst": ${hdgst:-false}, 00:40:19.564 "ddgst": ${ddgst:-false} 00:40:19.564 }, 00:40:19.564 "method": "bdev_nvme_attach_controller" 00:40:19.564 } 00:40:19.564 EOF 00:40:19.564 )") 00:40:19.564 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:40:19.564 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:40:19.564 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3361217 00:40:19.564 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:40:19.564 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:40:19.564 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:40:19.564 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:40:19.564 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:40:19.564 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:40:19.564 { 00:40:19.564 "params": { 00:40:19.564 "name": 
"Nvme$subsystem", 00:40:19.564 "trtype": "$TEST_TRANSPORT", 00:40:19.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:19.564 "adrfam": "ipv4", 00:40:19.564 "trsvcid": "$NVMF_PORT", 00:40:19.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:19.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:19.564 "hdgst": ${hdgst:-false}, 00:40:19.564 "ddgst": ${ddgst:-false} 00:40:19.564 }, 00:40:19.564 "method": "bdev_nvme_attach_controller" 00:40:19.564 } 00:40:19.564 EOF 00:40:19.564 )") 00:40:19.564 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:40:19.564 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:40:19.564 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:40:19.564 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:40:19.564 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:40:19.564 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:40:19.564 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:40:19.564 { 00:40:19.564 "params": { 00:40:19.564 "name": "Nvme$subsystem", 00:40:19.564 "trtype": "$TEST_TRANSPORT", 00:40:19.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:19.564 "adrfam": "ipv4", 00:40:19.564 "trsvcid": "$NVMF_PORT", 00:40:19.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:19.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:19.564 "hdgst": ${hdgst:-false}, 00:40:19.564 "ddgst": ${ddgst:-false} 00:40:19.564 }, 00:40:19.564 "method": 
"bdev_nvme_attach_controller" 00:40:19.564 } 00:40:19.564 EOF 00:40:19.564 )") 00:40:19.564 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:40:19.564 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3361207 00:40:19.564 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:40:19.564 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:40:19.564 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:40:19.564 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:40:19.564 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:40:19.564 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:40:19.564 "params": { 00:40:19.564 "name": "Nvme1", 00:40:19.564 "trtype": "tcp", 00:40:19.564 "traddr": "10.0.0.2", 00:40:19.564 "adrfam": "ipv4", 00:40:19.564 "trsvcid": "4420", 00:40:19.564 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:19.564 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:19.564 "hdgst": false, 00:40:19.564 "ddgst": false 00:40:19.564 }, 00:40:19.564 "method": "bdev_nvme_attach_controller" 00:40:19.564 }' 00:40:19.565 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:40:19.565 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
00:40:19.565 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:40:19.565 "params": { 00:40:19.565 "name": "Nvme1", 00:40:19.565 "trtype": "tcp", 00:40:19.565 "traddr": "10.0.0.2", 00:40:19.565 "adrfam": "ipv4", 00:40:19.565 "trsvcid": "4420", 00:40:19.565 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:19.565 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:19.565 "hdgst": false, 00:40:19.565 "ddgst": false 00:40:19.565 }, 00:40:19.565 "method": "bdev_nvme_attach_controller" 00:40:19.565 }' 00:40:19.565 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:40:19.565 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:40:19.565 "params": { 00:40:19.565 "name": "Nvme1", 00:40:19.565 "trtype": "tcp", 00:40:19.565 "traddr": "10.0.0.2", 00:40:19.565 "adrfam": "ipv4", 00:40:19.565 "trsvcid": "4420", 00:40:19.565 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:19.565 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:19.565 "hdgst": false, 00:40:19.565 "ddgst": false 00:40:19.565 }, 00:40:19.565 "method": "bdev_nvme_attach_controller" 00:40:19.565 }' 00:40:19.565 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:40:19.565 16:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:40:19.565 "params": { 00:40:19.565 "name": "Nvme1", 00:40:19.565 "trtype": "tcp", 00:40:19.565 "traddr": "10.0.0.2", 00:40:19.565 "adrfam": "ipv4", 00:40:19.565 "trsvcid": "4420", 00:40:19.565 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:19.565 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:19.565 "hdgst": false, 00:40:19.565 "ddgst": false 00:40:19.565 }, 00:40:19.565 "method": "bdev_nvme_attach_controller" 00:40:19.565 }' 00:40:19.565 [2024-09-29 16:48:19.984114] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 
initialization... 00:40:19.565 [2024-09-29 16:48:19.984114] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:40:19.565 [2024-09-29 16:48:19.984116] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:40:19.565 [2024-09-29 16:48:19.984177] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:40:19.565 [2024-09-29 16:48:19.984261] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:40:19.565 [2024-09-29 16:48:19.984261] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:40:19.565 [2024-09-29 16:48:19.984262] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:40:19.565 [2024-09-29 16:48:19.984317] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:40:19.823 [2024-09-29 16:48:20.230602] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:19.823 [2024-09-29 16:48:20.332763] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:20.081 [2024-09-29 16:48:20.445343] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:20.081 [2024-09-29 16:48:20.461216] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:40:20.081 [2024-09-29 16:48:20.517242] app.c: 
917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:20.081 [2024-09-29 16:48:20.559552] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:40:20.339 [2024-09-29 16:48:20.671456] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:40:20.339 [2024-09-29 16:48:20.733202] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:40:20.597 Running I/O for 1 seconds... 00:40:20.597 Running I/O for 1 seconds... 00:40:20.855 Running I/O for 1 seconds... 00:40:20.855 Running I/O for 1 seconds... 00:40:21.420 116320.00 IOPS, 454.38 MiB/s 00:40:21.420 Latency(us) 00:40:21.420 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:21.420 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:40:21.420 Nvme1n1 : 1.00 116037.56 453.27 0.00 0.00 1097.22 485.45 2342.31 00:40:21.420 =================================================================================================================== 00:40:21.420 Total : 116037.56 453.27 0.00 0.00 1097.22 485.45 2342.31 00:40:21.678 7937.00 IOPS, 31.00 MiB/s 00:40:21.678 Latency(us) 00:40:21.678 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:21.678 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:40:21.678 Nvme1n1 : 1.01 7975.86 31.16 0.00 0.00 15952.77 6068.15 20777.34 00:40:21.678 =================================================================================================================== 00:40:21.678 Total : 7975.86 31.16 0.00 0.00 15952.77 6068.15 20777.34 00:40:21.936 6046.00 IOPS, 23.62 MiB/s 00:40:21.936 Latency(us) 00:40:21.936 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:21.936 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:40:21.936 Nvme1n1 : 1.01 6105.04 23.85 0.00 0.00 20828.99 7524.50 29903.83 00:40:21.936 
=================================================================================================================== 00:40:21.936 Total : 6105.04 23.85 0.00 0.00 20828.99 7524.50 29903.83 00:40:21.936 6834.00 IOPS, 26.70 MiB/s 00:40:21.936 Latency(us) 00:40:21.936 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:21.936 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:40:21.936 Nvme1n1 : 1.01 6901.92 26.96 0.00 0.00 18446.73 7767.23 26796.94 00:40:21.936 =================================================================================================================== 00:40:21.936 Total : 6901.92 26.96 0.00 0.00 18446.73 7767.23 26796.94 00:40:22.869 16:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3361210 00:40:22.869 16:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3361213 00:40:22.869 16:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3361217 00:40:22.869 16:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:22.869 16:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:22.869 16:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:23.127 16:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:23.127 16:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:40:23.127 16:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:40:23.127 16:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@512 -- # nvmfcleanup 00:40:23.127 16:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:40:23.127 16:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:23.127 16:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:40:23.127 16:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:23.127 16:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:23.127 rmmod nvme_tcp 00:40:23.127 rmmod nvme_fabrics 00:40:23.127 rmmod nvme_keyring 00:40:23.127 16:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:23.127 16:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:40:23.127 16:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:40:23.127 16:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 3360999 ']' 00:40:23.127 16:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 3360999 00:40:23.127 16:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 3360999 ']' 00:40:23.127 16:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 3360999 00:40:23.127 16:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:40:23.127 16:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:23.127 16:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3360999 00:40:23.127 16:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:23.127 16:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:23.127 16:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3360999' 00:40:23.127 killing process with pid 3360999 00:40:23.127 16:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 3360999 00:40:23.127 16:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 3360999 00:40:24.503 16:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:40:24.503 16:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:40:24.503 16:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:40:24.503 16:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:40:24.503 16:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-save 00:40:24.503 16:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-restore 00:40:24.503 16:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:40:24.503 16:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:24.503 16:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:24.503 16:48:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:24.503 16:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:24.503 16:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:26.406 16:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:26.406 00:40:26.406 real 0m10.846s 00:40:26.406 user 0m26.461s 00:40:26.406 sys 0m5.564s 00:40:26.406 16:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:26.406 16:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:26.406 ************************************ 00:40:26.406 END TEST nvmf_bdev_io_wait 00:40:26.406 ************************************ 00:40:26.406 16:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:40:26.406 16:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:40:26.406 16:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:26.406 16:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:26.406 ************************************ 00:40:26.406 START TEST nvmf_queue_depth 00:40:26.406 ************************************ 00:40:26.406 16:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 
00:40:26.406 * Looking for test storage... 00:40:26.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:26.406 16:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:26.406 16:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:40:26.406 16:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:26.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:26.665 --rc genhtml_branch_coverage=1 00:40:26.665 --rc genhtml_function_coverage=1 00:40:26.665 --rc genhtml_legend=1 00:40:26.665 --rc geninfo_all_blocks=1 00:40:26.665 --rc geninfo_unexecuted_blocks=1 00:40:26.665 00:40:26.665 ' 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:26.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:26.665 --rc genhtml_branch_coverage=1 00:40:26.665 --rc genhtml_function_coverage=1 00:40:26.665 --rc genhtml_legend=1 00:40:26.665 --rc geninfo_all_blocks=1 00:40:26.665 --rc geninfo_unexecuted_blocks=1 00:40:26.665 00:40:26.665 ' 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:26.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:26.665 --rc genhtml_branch_coverage=1 00:40:26.665 --rc genhtml_function_coverage=1 00:40:26.665 --rc genhtml_legend=1 00:40:26.665 --rc geninfo_all_blocks=1 00:40:26.665 --rc geninfo_unexecuted_blocks=1 00:40:26.665 00:40:26.665 ' 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:26.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:26.665 --rc genhtml_branch_coverage=1 00:40:26.665 --rc genhtml_function_coverage=1 00:40:26.665 
--rc genhtml_legend=1 00:40:26.665 --rc geninfo_all_blocks=1 00:40:26.665 --rc geninfo_unexecuted_blocks=1 00:40:26.665 00:40:26.665 ' 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:26.665 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:26.666 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:26.666 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:26.666 16:48:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:40:26.666 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:26.666 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:40:26.666 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:26.666 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:26.666 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:26.666 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:26.666 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:26.666 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:26.666 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:26.666 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:26.666 16:48:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:26.666 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:26.666 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:40:26.666 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:40:26.666 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:40:26.666 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:40:26.666 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:40:26.666 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:26.666 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:40:26.666 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:40:26.666 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:40:26.666 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:26.666 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:26.666 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:26.666 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:40:26.666 16:48:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:40:26.666 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:40:26.666 16:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:28.569 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:28.569 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:40:28.569 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:28.569 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:28.569 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:40:28.570 
16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
pci_devs+=("${e810[@]}") 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:28.570 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:28.570 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:28.570 16:48:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:28.570 16:48:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:28.570 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:28.570 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # is_hw=yes 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ yes == 
yes ]] 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:28.570 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:28.829 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:28.829 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:28.829 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:28.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:40:28.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:40:28.829 00:40:28.829 --- 10.0.0.2 ping statistics --- 00:40:28.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:28.829 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:40:28.829 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:28.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:28.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:40:28.829 00:40:28.829 --- 10.0.0.1 ping statistics --- 00:40:28.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:28.829 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:40:28.829 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:28.829 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # return 0 00:40:28.829 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:40:28.829 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:28.829 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:40:28.829 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:40:28.829 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:28.829 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:40:28.829 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:40:28.829 16:48:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:40:28.829 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:40:28.829 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:28.829 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:28.829 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=3363763 00:40:28.829 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:40:28.829 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 3363763 00:40:28.829 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3363763 ']' 00:40:28.829 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:28.829 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:28.829 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:28.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:40:28.829 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:28.829 16:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:28.829 [2024-09-29 16:48:29.253422] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:28.829 [2024-09-29 16:48:29.255777] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:40:28.829 [2024-09-29 16:48:29.255880] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:29.087 [2024-09-29 16:48:29.395472] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:29.345 [2024-09-29 16:48:29.652591] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:29.345 [2024-09-29 16:48:29.652667] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:29.345 [2024-09-29 16:48:29.652707] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:29.345 [2024-09-29 16:48:29.652728] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:29.345 [2024-09-29 16:48:29.652750] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:29.345 [2024-09-29 16:48:29.652806] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:40:29.603 [2024-09-29 16:48:30.029129] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:29.603 [2024-09-29 16:48:30.029557] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:40:29.861 16:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:29.861 16:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:40:29.861 16:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:40:29.861 16:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:29.861 16:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:29.861 16:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:29.861 16:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:29.861 16:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:29.861 16:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:29.861 [2024-09-29 16:48:30.333787] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:29.861 16:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:29.861 16:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:29.861 16:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:29.861 16:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:29.861 Malloc0 00:40:29.861 16:48:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:29.861 16:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:29.861 16:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:29.861 16:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:30.119 16:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:30.119 16:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:30.119 16:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:30.119 16:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:30.119 16:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:30.119 16:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:30.120 16:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:30.120 16:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:30.120 [2024-09-29 16:48:30.441993] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:30.120 16:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:30.120 
16:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3363917 00:40:30.120 16:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:30.120 16:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:40:30.120 16:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3363917 /var/tmp/bdevperf.sock 00:40:30.120 16:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3363917 ']' 00:40:30.120 16:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:30.120 16:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:30.120 16:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:30.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:40:30.120 16:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:30.120 16:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:30.120 [2024-09-29 16:48:30.528645] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:40:30.120 [2024-09-29 16:48:30.528795] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3363917 ] 00:40:30.120 [2024-09-29 16:48:30.658747] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:30.378 [2024-09-29 16:48:30.887575] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:40:31.314 16:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:31.314 16:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:40:31.314 16:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:40:31.314 16:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:31.314 16:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:31.314 NVMe0n1 00:40:31.314 16:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:31.314 16:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:31.314 Running I/O for 10 seconds... 
00:40:41.596 5721.00 IOPS, 22.35 MiB/s 5955.50 IOPS, 23.26 MiB/s 5963.00 IOPS, 23.29 MiB/s 6003.75 IOPS, 23.45 MiB/s 6018.00 IOPS, 23.51 MiB/s 6075.00 IOPS, 23.73 MiB/s 6143.71 IOPS, 24.00 MiB/s 6144.75 IOPS, 24.00 MiB/s 6142.00 IOPS, 23.99 MiB/s 6136.90 IOPS, 23.97 MiB/s 00:40:41.596 Latency(us) 00:40:41.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:41.596 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:40:41.596 Verification LBA range: start 0x0 length 0x4000 00:40:41.596 NVMe0n1 : 10.18 6119.87 23.91 0.00 0.00 165659.75 27185.30 99032.18 00:40:41.596 =================================================================================================================== 00:40:41.596 Total : 6119.87 23.91 0.00 0.00 165659.75 27185.30 99032.18 00:40:41.596 { 00:40:41.596 "results": [ 00:40:41.596 { 00:40:41.596 "job": "NVMe0n1", 00:40:41.596 "core_mask": "0x1", 00:40:41.596 "workload": "verify", 00:40:41.596 "status": "finished", 00:40:41.596 "verify_range": { 00:40:41.596 "start": 0, 00:40:41.596 "length": 16384 00:40:41.596 }, 00:40:41.596 "queue_depth": 1024, 00:40:41.596 "io_size": 4096, 00:40:41.596 "runtime": 10.18371, 00:40:41.596 "iops": 6119.8718345278885, 00:40:41.596 "mibps": 23.905749353624564, 00:40:41.596 "io_failed": 0, 00:40:41.596 "io_timeout": 0, 00:40:41.596 "avg_latency_us": 165659.75470291273, 00:40:41.596 "min_latency_us": 27185.303703703703, 00:40:41.596 "max_latency_us": 99032.17777777778 00:40:41.596 } 00:40:41.596 ], 00:40:41.596 "core_count": 1 00:40:41.596 } 00:40:41.596 16:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3363917 00:40:41.596 16:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3363917 ']' 00:40:41.596 16:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3363917 00:40:41.596 16:48:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:40:41.596 16:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:41.596 16:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3363917 00:40:41.596 16:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:41.596 16:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:41.596 16:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3363917' 00:40:41.596 killing process with pid 3363917 00:40:41.596 16:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3363917 00:40:41.596 Received shutdown signal, test time was about 10.000000 seconds 00:40:41.596 00:40:41.596 Latency(us) 00:40:41.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:41.596 =================================================================================================================== 00:40:41.596 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:41.596 16:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3363917 00:40:42.970 16:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:40:42.970 16:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:40:42.970 16:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup 00:40:42.970 16:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@121 -- # sync 00:40:42.970 16:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:42.970 16:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:40:42.970 16:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:42.970 16:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:42.970 rmmod nvme_tcp 00:40:42.970 rmmod nvme_fabrics 00:40:42.970 rmmod nvme_keyring 00:40:42.970 16:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:42.970 16:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:40:42.970 16:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:40:42.970 16:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 3363763 ']' 00:40:42.970 16:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # killprocess 3363763 00:40:42.970 16:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3363763 ']' 00:40:42.970 16:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3363763 00:40:42.970 16:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:40:42.970 16:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:42.970 16:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3363763 00:40:42.971 16:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth 
-- common/autotest_common.sh@956 -- # process_name=reactor_1 00:40:42.971 16:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:40:42.971 16:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3363763' 00:40:42.971 killing process with pid 3363763 00:40:42.971 16:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3363763 00:40:42.971 16:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3363763 00:40:44.345 16:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:40:44.345 16:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:40:44.345 16:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:40:44.345 16:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:40:44.345 16:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-save 00:40:44.345 16:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:40:44.345 16:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-restore 00:40:44.345 16:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:44.345 16:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:44.345 16:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:44.345 16:48:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:44.345 16:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:46.244 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:46.244 00:40:46.244 real 0m19.865s 00:40:46.244 user 0m27.683s 00:40:46.244 sys 0m3.769s 00:40:46.244 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:46.244 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:46.244 ************************************ 00:40:46.244 END TEST nvmf_queue_depth 00:40:46.244 ************************************ 00:40:46.244 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:40:46.244 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:40:46.244 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:46.244 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:46.503 ************************************ 00:40:46.503 START TEST nvmf_target_multipath 00:40:46.503 ************************************ 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:40:46.503 * Looking for test storage... 
00:40:46.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:40:46.503 16:48:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:40:46.503 16:48:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:46.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:46.503 --rc genhtml_branch_coverage=1 00:40:46.503 --rc genhtml_function_coverage=1 00:40:46.503 --rc genhtml_legend=1 00:40:46.503 --rc geninfo_all_blocks=1 00:40:46.503 --rc geninfo_unexecuted_blocks=1 00:40:46.503 00:40:46.503 ' 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:46.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:46.503 --rc genhtml_branch_coverage=1 00:40:46.503 --rc genhtml_function_coverage=1 00:40:46.503 --rc genhtml_legend=1 00:40:46.503 --rc geninfo_all_blocks=1 00:40:46.503 --rc geninfo_unexecuted_blocks=1 00:40:46.503 00:40:46.503 ' 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:46.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:46.503 --rc genhtml_branch_coverage=1 00:40:46.503 --rc genhtml_function_coverage=1 00:40:46.503 --rc genhtml_legend=1 00:40:46.503 --rc geninfo_all_blocks=1 00:40:46.503 --rc geninfo_unexecuted_blocks=1 00:40:46.503 00:40:46.503 ' 00:40:46.503 16:48:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:46.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:46.503 --rc genhtml_branch_coverage=1 00:40:46.503 --rc genhtml_function_coverage=1 00:40:46.503 --rc genhtml_legend=1 00:40:46.503 --rc geninfo_all_blocks=1 00:40:46.503 --rc geninfo_unexecuted_blocks=1 00:40:46.503 00:40:46.503 ' 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:46.503 16:48:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:46.503 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:46.503 
16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:40:46.504 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:46.504 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:40:46.504 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:46.504 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:46.504 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:46.504 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:46.504 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:46.504 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:46.504 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:46.504 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:46.504 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:46.504 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:46.504 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:46.504 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:46.504 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:40:46.504 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:46.504 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:40:46.504 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:40:46.504 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:46.504 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:40:46.504 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:40:46.504 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:40:46.504 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:46.504 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:40:46.504 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:46.504 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:40:46.504 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:40:46.504 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:40:46.504 16:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:40:48.404 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:48.404 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:40:48.404 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:48.404 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:48.404 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:48.404 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:48.404 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:48.404 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:40:48.404 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:48.404 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 
00:40:48.404 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:40:48.404 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:40:48.404 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:40:48.404 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:40:48.404 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:40:48.404 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:48.404 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:48.404 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:48.404 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:48.404 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:48.404 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:48.404 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:48.404 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:48.404 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@339 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:48.404 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:48.404 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:48.405 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:40:48.405 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:40:48.405 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:40:48.405 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:40:48.405 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:40:48.405 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:40:48.405 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:48.405 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:48.405 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:48.405 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:48.405 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:48.405 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:48.405 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:48.405 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:48.405 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:48.405 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:48.405 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:48.664 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:48.664 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:48.664 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:48.664 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:48.664 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:48.664 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:40:48.664 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:40:48.664 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:40:48.664 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:48.664 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:48.664 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@412 
-- # [[ tcp == tcp ]] 00:40:48.664 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:48.664 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:48.664 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:48.664 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:48.664 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:48.664 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:48.664 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:48.664 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:48.664 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:48.664 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:48.664 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:48.664 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:48.664 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:48.664 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:48.664 16:48:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:48.664 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:48.664 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:48.664 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:40:48.664 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # is_hw=yes 00:40:48.664 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:40:48.664 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:40:48.664 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:40:48.664 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:48.664 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:48.664 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:48.664 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:48.664 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:48.664 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:48.664 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:48.664 16:48:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:48.664 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:48.664 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:48.664 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:48.664 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:48.664 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:48.664 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:48.664 16:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:48.664 16:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:48.664 16:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:48.664 16:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:48.664 16:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:48.664 16:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:48.664 16:48:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:48.664 16:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:48.664 16:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:48.664 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:48.664 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:40:48.664 00:40:48.664 --- 10.0.0.2 ping statistics --- 00:40:48.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:48.664 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:40:48.664 16:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:48.664 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:48.664 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:40:48.664 00:40:48.664 --- 10.0.0.1 ping statistics --- 00:40:48.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:48.664 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:40:48.664 16:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:48.664 16:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # return 0 00:40:48.664 16:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:40:48.664 16:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:48.664 16:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:40:48.664 16:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:40:48.664 16:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:48.664 16:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:40:48.664 16:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:40:48.664 16:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:40:48.664 16:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:40:48.664 only one NIC for nvmf test 00:40:48.664 16:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:40:48.664 16:48:49 
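The `nvmf_tcp_init` sequence traced above (flush the two NICs, create a namespace, move the target NIC into it, assign the 10.0.0.x addresses, bring links up, ping both ways) reduces to a short run of `ip` commands. A minimal dry-run sketch, using the interface names and addresses shown in this log; `run` only prints each command so the sketch is safe without root — swap it for real execution (as root) to apply:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init steps from nvmf/common.sh.
# Names/addresses mirror the log above; nothing here touches the system.
set -euo pipefail

TARGET_IF=cvl_0_0          # NIC handed to the SPDK target
INITIATOR_IF=cvl_0_1       # NIC left in the root namespace
TARGET_NS=cvl_0_0_ns_spdk  # network namespace for the target side
INITIATOR_IP=10.0.0.1
TARGET_IP=10.0.0.2

run() { echo "+ $*"; }     # replace body with "$@" to execute for real

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$TARGET_NS"
run ip link set "$TARGET_IF" netns "$TARGET_NS"
run ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
run ip netns exec "$TARGET_NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$TARGET_NS" ip link set "$TARGET_IF" up
run ip netns exec "$TARGET_NS" ip link set lo up
run ping -c 1 "$TARGET_IP"  # connectivity check, as in the log
```

Putting the target NIC in its own namespace is what lets a single host act as both NVMe-oF target and initiator over real hardware, which is why the log later prefixes target-side commands with `ip netns exec cvl_0_0_ns_spdk`.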
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:40:48.664 16:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:40:48.664 16:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:48.664 16:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:40:48.664 16:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:48.664 16:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:48.664 rmmod nvme_tcp 00:40:48.664 rmmod nvme_fabrics 00:40:48.665 rmmod nvme_keyring 00:40:48.665 16:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:48.665 16:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:40:48.665 16:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:40:48.665 16:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:40:48.665 16:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:40:48.665 16:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:40:48.665 16:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:40:48.665 16:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:40:48.665 16:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:40:48.665 16:48:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:40:48.665 16:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:40:48.665 16:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:48.665 16:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:48.665 16:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:48.665 16:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:48.665 16:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:51.198 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:51.198 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:40:51.198 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:40:51.198 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:40:51.198 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:51.199 
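The `iptr` teardown traced above relies on the rule having been installed with an `-m comment --comment 'SPDK_NVMF:...'` tag: cleanup is then just `iptables-save | grep -v SPDK_NVMF | iptables-restore`, which sweeps every SPDK-added rule in one pass without tracking them individually. A self-contained illustration of the filtering step — the ruleset below is illustrative, not captured from the test machine, and the final `iptables-restore` (which requires root) is left out:

```shell
# Tag-and-sweep firewall cleanup, as used by nvmf/common.sh's iptr.
# Sample iptables-save output (illustrative):
rules='# Generated by iptables-save
*filter
:INPUT ACCEPT [0:0]
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF: test rule"
-A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT
COMMIT'

# grep -v drops every tagged rule; piping the result to iptables-restore
# (as root) would reinstall only the untagged ones.
kept=$(printf '%s\n' "$rules" | grep -v SPDK_NVMF)
printf '%s\n' "$kept"
```

The same pattern generalizes to any tool that needs to add temporary firewall rules and guarantee their removal regardless of how many were added.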
16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:51.199 00:40:51.199 real 0m4.436s 00:40:51.199 user 0m0.916s 00:40:51.199 sys 0m1.527s 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:40:51.199 ************************************ 00:40:51.199 END TEST nvmf_target_multipath 00:40:51.199 ************************************ 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:51.199 ************************************ 00:40:51.199 START TEST nvmf_zcopy 00:40:51.199 ************************************ 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:40:51.199 * Looking for test storage... 
00:40:51.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:40:51.199 16:48:51 
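The `lt 1.15 2` trace above shows how `scripts/common.sh` compares dotted versions: split both strings on `.`, `-`, or `:` into arrays, then compare field by field, treating a missing field as 0. A minimal re-creation of that numeric path (the real `cmp_versions` also dispatches on other operators via the `case "$op"` seen in the trace, and this sketch assumes purely numeric fields):

```shell
# lt A B -> exit 0 (true) when version A is strictly less than B,
# mirroring the field-by-field comparison traced from scripts/common.sh.
lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # absent field counts as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1  # equal versions are not "less than"
}
```

Note that this is why `1.15 < 2` holds here while a plain string comparison would get it wrong (`"1.15" > "1."`); the same field-wise logic also gives `1.2 < 1.10`.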
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:51.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:51.199 --rc genhtml_branch_coverage=1 00:40:51.199 --rc genhtml_function_coverage=1 00:40:51.199 --rc genhtml_legend=1 00:40:51.199 --rc geninfo_all_blocks=1 00:40:51.199 --rc geninfo_unexecuted_blocks=1 00:40:51.199 00:40:51.199 ' 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:51.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:51.199 --rc genhtml_branch_coverage=1 00:40:51.199 --rc genhtml_function_coverage=1 00:40:51.199 --rc genhtml_legend=1 00:40:51.199 --rc geninfo_all_blocks=1 00:40:51.199 --rc geninfo_unexecuted_blocks=1 00:40:51.199 00:40:51.199 ' 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:51.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:51.199 --rc genhtml_branch_coverage=1 00:40:51.199 --rc genhtml_function_coverage=1 00:40:51.199 --rc genhtml_legend=1 00:40:51.199 --rc geninfo_all_blocks=1 00:40:51.199 --rc geninfo_unexecuted_blocks=1 00:40:51.199 00:40:51.199 ' 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:51.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:51.199 --rc genhtml_branch_coverage=1 00:40:51.199 --rc genhtml_function_coverage=1 00:40:51.199 --rc genhtml_legend=1 00:40:51.199 --rc geninfo_all_blocks=1 00:40:51.199 --rc geninfo_unexecuted_blocks=1 00:40:51.199 00:40:51.199 ' 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:51.199 16:48:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:51.199 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:51.200 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:51.200 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:51.200 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:40:51.200 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:51.200 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:51.200 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:51.200 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:51.200 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:51.200 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:51.200 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:40:51.200 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:51.200 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:40:51.200 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:51.200 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:51.200 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:51.200 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:51.200 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:51.200 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:51.200 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:51.200 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:51.200 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:51.200 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:51.200 16:48:51 
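The enormous `PATH` echoed above is an artifact of `paths/export.sh` being sourced once per test and prepending its directories each time, so the same `/opt/...` entries pile up. An order-preserving dedup such as the following awk one-liner would collapse it back down (shown against a short demo value, not the machine's real `PATH`):

```shell
# Order-preserving PATH deduplication: split on ":", keep first occurrence.
dedup_path() {
    printf '%s' "$1" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'
}

PATH_DEMO=/opt/go/bin:/usr/bin:/opt/go/bin:/usr/local/bin:/usr/bin
dedup_path "$PATH_DEMO"   # -> /opt/go/bin:/usr/bin:/usr/local/bin
```

Harmless functionally (lookup stops at the first match anyway), but deduplicating keeps logs like this one readable.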
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:40:51.200 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:40:51.200 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:51.200 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:40:51.200 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:40:51.200 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:40:51.200 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:51.200 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:51.200 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:51.200 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:40:51.200 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:40:51.200 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:40:51.200 16:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:53.162 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:53.162 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:40:53.162 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:53.162 
16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:53.162 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:53.162 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:53.162 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:53.162 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:40:53.162 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:53.162 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:40:53.162 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:40:53.162 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:40:53.162 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:40:53.162 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:40:53.162 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:40:53.162 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:53.162 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:53.162 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:53.162 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:53.162 16:48:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:53.162 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:53.162 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:53.162 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:53.162 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:53.162 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:53.162 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:53.162 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:40:53.162 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 
00:40:53.163 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:53.163 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:40:53.163 16:48:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:53.163 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 
00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:53.163 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # is_hw=yes 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:53.163 16:48:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:53.163 16:48:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:53.163 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:53.163 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:40:53.163 00:40:53.163 --- 10.0.0.2 ping statistics --- 00:40:53.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:53.163 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:53.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:53.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:40:53.163 00:40:53.163 --- 10.0.0.1 ping statistics --- 00:40:53.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:53.163 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # return 0 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@489 
-- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # nvmfpid=3369355 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 3369355 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 3369355 ']' 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:53.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:53.163 16:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:53.163 [2024-09-29 16:48:53.615302] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:53.163 [2024-09-29 16:48:53.617896] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:40:53.164 [2024-09-29 16:48:53.617998] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:53.446 [2024-09-29 16:48:53.760428] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:53.704 [2024-09-29 16:48:54.017944] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:53.704 [2024-09-29 16:48:54.018029] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:53.704 [2024-09-29 16:48:54.018058] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:53.704 [2024-09-29 16:48:54.018081] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:53.704 [2024-09-29 16:48:54.018103] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:53.705 [2024-09-29 16:48:54.018163] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:40:53.962 [2024-09-29 16:48:54.390644] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:53.962 [2024-09-29 16:48:54.391092] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:40:54.221 16:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:54.221 16:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:40:54.221 16:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:40:54.221 16:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:54.221 16:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:54.221 16:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:54.221 16:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:40:54.221 16:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:40:54.221 16:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.221 16:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:54.221 [2024-09-29 16:48:54.611192] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:54.221 16:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.221 16:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:40:54.221 16:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.221 16:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:54.221 
16:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.221 16:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:54.221 16:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.221 16:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:54.221 [2024-09-29 16:48:54.643421] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:54.221 16:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.221 16:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:54.221 16:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.221 16:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:54.221 16:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.221 16:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:40:54.221 16:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.221 16:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:54.221 malloc0 00:40:54.221 16:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.221 16:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:40:54.221 16:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.221 16:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:54.221 16:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.221 16:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:40:54.221 16:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:40:54.221 16:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:40:54.221 16:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:40:54.221 16:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:40:54.221 16:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:40:54.221 { 00:40:54.221 "params": { 00:40:54.221 "name": "Nvme$subsystem", 00:40:54.221 "trtype": "$TEST_TRANSPORT", 00:40:54.221 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:54.221 "adrfam": "ipv4", 00:40:54.221 "trsvcid": "$NVMF_PORT", 00:40:54.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:54.221 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:54.221 "hdgst": ${hdgst:-false}, 00:40:54.221 "ddgst": ${ddgst:-false} 00:40:54.221 }, 00:40:54.221 "method": "bdev_nvme_attach_controller" 00:40:54.221 } 00:40:54.221 EOF 00:40:54.221 )") 00:40:54.221 16:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:40:54.221 16:48:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 00:40:54.221 16:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:40:54.221 16:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:40:54.221 "params": { 00:40:54.221 "name": "Nvme1", 00:40:54.221 "trtype": "tcp", 00:40:54.221 "traddr": "10.0.0.2", 00:40:54.221 "adrfam": "ipv4", 00:40:54.221 "trsvcid": "4420", 00:40:54.221 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:54.221 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:54.221 "hdgst": false, 00:40:54.221 "ddgst": false 00:40:54.221 }, 00:40:54.222 "method": "bdev_nvme_attach_controller" 00:40:54.222 }' 00:40:54.480 [2024-09-29 16:48:54.792849] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:40:54.480 [2024-09-29 16:48:54.792971] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3369515 ] 00:40:54.480 [2024-09-29 16:48:54.923635] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:54.738 [2024-09-29 16:48:55.182781] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:40:55.305 Running I/O for 10 seconds... 
00:41:05.602 3836.00 IOPS, 29.97 MiB/s 3884.00 IOPS, 30.34 MiB/s 3907.00 IOPS, 30.52 MiB/s 3916.25 IOPS, 30.60 MiB/s 3905.40 IOPS, 30.51 MiB/s 3904.67 IOPS, 30.51 MiB/s 3911.29 IOPS, 30.56 MiB/s 3915.25 IOPS, 30.59 MiB/s 3918.44 IOPS, 30.61 MiB/s 3914.20 IOPS, 30.58 MiB/s 00:41:05.602 Latency(us) 00:41:05.602 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:05.602 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:41:05.602 Verification LBA range: start 0x0 length 0x1000 00:41:05.602 Nvme1n1 : 10.06 3900.28 30.47 0.00 0.00 32592.53 2087.44 43496.49 00:41:05.602 =================================================================================================================== 00:41:05.602 Total : 3900.28 30.47 0.00 0.00 32592.53 2087.44 43496.49 00:41:06.538 16:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3370886 00:41:06.538 16:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:41:06.538 16:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:06.538 16:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:41:06.538 16:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:41:06.538 16:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:41:06.538 16:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:41:06.538 16:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:41:06.538 16:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 
00:41:06.538 { 00:41:06.538 "params": { 00:41:06.538 "name": "Nvme$subsystem", 00:41:06.538 "trtype": "$TEST_TRANSPORT", 00:41:06.538 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:06.538 "adrfam": "ipv4", 00:41:06.538 "trsvcid": "$NVMF_PORT", 00:41:06.538 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:06.538 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:06.538 "hdgst": ${hdgst:-false}, 00:41:06.538 "ddgst": ${ddgst:-false} 00:41:06.538 }, 00:41:06.538 "method": "bdev_nvme_attach_controller" 00:41:06.538 } 00:41:06.538 EOF 00:41:06.538 )") 00:41:06.538 16:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:41:06.538 [2024-09-29 16:49:06.927185] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.538 [2024-09-29 16:49:06.927253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.538 16:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 
00:41:06.538 16:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:41:06.538 16:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:41:06.538 "params": { 00:41:06.538 "name": "Nvme1", 00:41:06.538 "trtype": "tcp", 00:41:06.538 "traddr": "10.0.0.2", 00:41:06.538 "adrfam": "ipv4", 00:41:06.538 "trsvcid": "4420", 00:41:06.538 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:06.538 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:06.538 "hdgst": false, 00:41:06.538 "ddgst": false 00:41:06.538 }, 00:41:06.538 "method": "bdev_nvme_attach_controller" 00:41:06.538 }' 00:41:06.538 [2024-09-29 16:49:06.935024] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.538 [2024-09-29 16:49:06.935060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.538 [2024-09-29 16:49:06.943063] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.538 [2024-09-29 16:49:06.943096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.539 [2024-09-29 16:49:06.951056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.539 [2024-09-29 16:49:06.951087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.539 [2024-09-29 16:49:06.959018] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.539 [2024-09-29 16:49:06.959049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.539 [2024-09-29 16:49:06.967094] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.539 [2024-09-29 16:49:06.967128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.539 [2024-09-29 16:49:06.975055] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:41:06.539 [2024-09-29 16:49:06.975083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.539 [2024-09-29 16:49:06.983040] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.539 [2024-09-29 16:49:06.983067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.539 [2024-09-29 16:49:06.991045] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.539 [2024-09-29 16:49:06.991072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.539 [2024-09-29 16:49:06.999015] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.539 [2024-09-29 16:49:06.999058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.539 [2024-09-29 16:49:07.007056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.539 [2024-09-29 16:49:07.007083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.539 [2024-09-29 16:49:07.007850] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:41:06.539 [2024-09-29 16:49:07.007972] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3370886 ] 00:41:06.539 [2024-09-29 16:49:07.015045] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.539 [2024-09-29 16:49:07.015073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.539 [2024-09-29 16:49:07.023005] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.539 [2024-09-29 16:49:07.023046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.539 [2024-09-29 16:49:07.031049] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.539 [2024-09-29 16:49:07.031076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.539 [2024-09-29 16:49:07.039059] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.539 [2024-09-29 16:49:07.039086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.539 [2024-09-29 16:49:07.047014] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.539 [2024-09-29 16:49:07.047057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.539 [2024-09-29 16:49:07.055029] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.539 [2024-09-29 16:49:07.055055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.539 [2024-09-29 16:49:07.063046] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.539 [2024-09-29 16:49:07.063073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:41:06.539 [2024-09-29 16:49:07.071049] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.539 [2024-09-29 16:49:07.071076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.539 [2024-09-29 16:49:07.079046] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.539 [2024-09-29 16:49:07.079072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.539 [2024-09-29 16:49:07.087015] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.539 [2024-09-29 16:49:07.087066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.539 [2024-09-29 16:49:07.095054] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.539 [2024-09-29 16:49:07.095088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.797 [2024-09-29 16:49:07.103064] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.797 [2024-09-29 16:49:07.103111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.797 [2024-09-29 16:49:07.111038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.797 [2024-09-29 16:49:07.111075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.797 [2024-09-29 16:49:07.119055] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.797 [2024-09-29 16:49:07.119089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.797 [2024-09-29 16:49:07.127033] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.797 [2024-09-29 16:49:07.127066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.797 [2024-09-29 16:49:07.135054] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.797 [2024-09-29 16:49:07.135087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.797 [2024-09-29 16:49:07.143049] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.797 [2024-09-29 16:49:07.143083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.797 [2024-09-29 16:49:07.148862] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:06.797 [2024-09-29 16:49:07.151012] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.797 [2024-09-29 16:49:07.151059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.797 [2024-09-29 16:49:07.159115] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.797 [2024-09-29 16:49:07.159159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.797 [2024-09-29 16:49:07.167133] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.797 [2024-09-29 16:49:07.167186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.797 [2024-09-29 16:49:07.175011] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.798 [2024-09-29 16:49:07.175067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.798 [2024-09-29 16:49:07.183052] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.798 [2024-09-29 16:49:07.183085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.798 [2024-09-29 16:49:07.191006] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.798 [2024-09-29 16:49:07.191051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:41:06.798 [2024-09-29 16:49:07.199044] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.798 [2024-09-29 16:49:07.199077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.798 [2024-09-29 16:49:07.207051] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.798 [2024-09-29 16:49:07.207084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.798 [2024-09-29 16:49:07.215012] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.798 [2024-09-29 16:49:07.215059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.798 [2024-09-29 16:49:07.223044] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.798 [2024-09-29 16:49:07.223076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.798 [2024-09-29 16:49:07.231051] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.798 [2024-09-29 16:49:07.231084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.798 [2024-09-29 16:49:07.239013] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.798 [2024-09-29 16:49:07.239045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.798 [2024-09-29 16:49:07.247045] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.798 [2024-09-29 16:49:07.247079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.798 [2024-09-29 16:49:07.255044] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.798 [2024-09-29 16:49:07.255077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.798 [2024-09-29 16:49:07.263048] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.798 [2024-09-29 16:49:07.263081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.798 [2024-09-29 16:49:07.271043] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.798 [2024-09-29 16:49:07.271076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.798 [2024-09-29 16:49:07.279014] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.798 [2024-09-29 16:49:07.279046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.798 [2024-09-29 16:49:07.287102] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.798 [2024-09-29 16:49:07.287152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.798 [2024-09-29 16:49:07.295068] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.798 [2024-09-29 16:49:07.295107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.798 [2024-09-29 16:49:07.303014] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.798 [2024-09-29 16:49:07.303062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.798 [2024-09-29 16:49:07.311055] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.798 [2024-09-29 16:49:07.311088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.798 [2024-09-29 16:49:07.319030] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.798 [2024-09-29 16:49:07.319062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.798 [2024-09-29 16:49:07.327048] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:06.798 [2024-09-29 16:49:07.327081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.798 [2024-09-29 16:49:07.335047] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.798 [2024-09-29 16:49:07.335081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.798 [2024-09-29 16:49:07.343003] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.798 [2024-09-29 16:49:07.343046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.798 [2024-09-29 16:49:07.351058] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.798 [2024-09-29 16:49:07.351091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.798 [2024-09-29 16:49:07.359060] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.798 [2024-09-29 16:49:07.359098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.057 [2024-09-29 16:49:07.367012] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.057 [2024-09-29 16:49:07.367062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.057 [2024-09-29 16:49:07.375051] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.057 [2024-09-29 16:49:07.375086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.057 [2024-09-29 16:49:07.383007] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.057 [2024-09-29 16:49:07.383053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.057 [2024-09-29 16:49:07.391045] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.057 
[2024-09-29 16:49:07.391079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.057 [2024-09-29 16:49:07.399036] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.057 [2024-09-29 16:49:07.399069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.057 [2024-09-29 16:49:07.406850] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:41:07.057 [2024-09-29 16:49:07.407021] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.057 [2024-09-29 16:49:07.407068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.057 [2024-09-29 16:49:07.415048] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.057 [2024-09-29 16:49:07.415082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.057 [2024-09-29 16:49:07.423099] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.057 [2024-09-29 16:49:07.423142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.057 [2024-09-29 16:49:07.431109] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.057 [2024-09-29 16:49:07.431161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.057 [2024-09-29 16:49:07.439062] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.057 [2024-09-29 16:49:07.439096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.057 [2024-09-29 16:49:07.447050] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.057 [2024-09-29 16:49:07.447084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.057 [2024-09-29 16:49:07.455054] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:07.057 [2024-09-29 16:49:07.455087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.057 [2024-09-29 16:49:07.463038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.057 [2024-09-29 16:49:07.463071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.057 [2024-09-29 16:49:07.471020] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.057 [2024-09-29 16:49:07.471059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.057 [2024-09-29 16:49:07.479047] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.057 [2024-09-29 16:49:07.479080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.057 [2024-09-29 16:49:07.487041] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.057 [2024-09-29 16:49:07.487073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.057 [2024-09-29 16:49:07.495036] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.057 [2024-09-29 16:49:07.495076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.057 [2024-09-29 16:49:07.503132] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.057 [2024-09-29 16:49:07.503183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.057 [2024-09-29 16:49:07.511121] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.057 [2024-09-29 16:49:07.511173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.057 [2024-09-29 16:49:07.519133] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.057 
[2024-09-29 16:49:07.519184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.057 [2024-09-29 16:49:07.527092] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.057 [2024-09-29 16:49:07.527131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.057 [2024-09-29 16:49:07.535014] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.057 [2024-09-29 16:49:07.535071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.057 [2024-09-29 16:49:07.543078] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.057 [2024-09-29 16:49:07.543112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.057 [2024-09-29 16:49:07.551060] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.057 [2024-09-29 16:49:07.551093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.057 [2024-09-29 16:49:07.559011] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.057 [2024-09-29 16:49:07.559044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.057 [2024-09-29 16:49:07.567039] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.057 [2024-09-29 16:49:07.567073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.057 [2024-09-29 16:49:07.575004] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.057 [2024-09-29 16:49:07.575047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.057 [2024-09-29 16:49:07.583041] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.057 [2024-09-29 16:49:07.583074] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.057 [2024-09-29 16:49:07.591042] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.057 [2024-09-29 16:49:07.591075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.057 [2024-09-29 16:49:07.599011] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.057 [2024-09-29 16:49:07.599054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.057 [2024-09-29 16:49:07.607036] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.057 [2024-09-29 16:49:07.607069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.057 [2024-09-29 16:49:07.615095] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.057 [2024-09-29 16:49:07.615141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.316 [2024-09-29 16:49:07.623038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.316 [2024-09-29 16:49:07.623083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.316 [2024-09-29 16:49:07.631048] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.316 [2024-09-29 16:49:07.631083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.316 [2024-09-29 16:49:07.639042] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.316 [2024-09-29 16:49:07.639076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.316 [2024-09-29 16:49:07.647050] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.316 [2024-09-29 16:49:07.647086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:41:07.316 [2024-09-29 16:49:07.655106] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.316 [2024-09-29 16:49:07.655159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.316 [2024-09-29 16:49:07.663094] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.316 [2024-09-29 16:49:07.663149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.316 [2024-09-29 16:49:07.671096] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.316 [2024-09-29 16:49:07.671134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.316 [2024-09-29 16:49:07.679040] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.316 [2024-09-29 16:49:07.679074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.316 [2024-09-29 16:49:07.687005] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.316 [2024-09-29 16:49:07.687050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.316 [2024-09-29 16:49:07.695075] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.316 [2024-09-29 16:49:07.695109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.316 [2024-09-29 16:49:07.703016] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.316 [2024-09-29 16:49:07.703049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.316 [2024-09-29 16:49:07.711042] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.316 [2024-09-29 16:49:07.711076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.316 [2024-09-29 16:49:07.719046] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.316 [2024-09-29 16:49:07.719079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.316 [2024-09-29 16:49:07.727021] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.316 [2024-09-29 16:49:07.727052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.316 [2024-09-29 16:49:07.735060] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.316 [2024-09-29 16:49:07.735093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.316 [2024-09-29 16:49:07.743043] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.316 [2024-09-29 16:49:07.743076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.316 [2024-09-29 16:49:07.751019] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.316 [2024-09-29 16:49:07.751051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.316 [2024-09-29 16:49:07.759044] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.316 [2024-09-29 16:49:07.759077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.316 [2024-09-29 16:49:07.767016] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.316 [2024-09-29 16:49:07.767051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.316 [2024-09-29 16:49:07.775885] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.316 [2024-09-29 16:49:07.775924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.316 [2024-09-29 16:49:07.783049] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:07.316 [2024-09-29 16:49:07.783086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.316 [2024-09-29 16:49:07.791008] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.316 [2024-09-29 16:49:07.791059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.317 [2024-09-29 16:49:07.799048] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.317 [2024-09-29 16:49:07.799085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.317 [2024-09-29 16:49:07.807046] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.317 [2024-09-29 16:49:07.807082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.317 [2024-09-29 16:49:07.815021] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.317 [2024-09-29 16:49:07.815054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.317 [2024-09-29 16:49:07.823081] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.317 [2024-09-29 16:49:07.823114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.317 [2024-09-29 16:49:07.831007] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.317 [2024-09-29 16:49:07.831054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.317 [2024-09-29 16:49:07.839044] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.317 [2024-09-29 16:49:07.839078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.317 [2024-09-29 16:49:07.847043] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.317 
[2024-09-29 16:49:07.847080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.317 [2024-09-29 16:49:07.855016] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.317 [2024-09-29 16:49:07.855053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.317 [2024-09-29 16:49:07.863073] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.317 [2024-09-29 16:49:07.863110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.317 [2024-09-29 16:49:07.871738] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.317 [2024-09-29 16:49:07.871771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.575 [2024-09-29 16:49:07.879060] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.575 [2024-09-29 16:49:07.879099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.575 [2024-09-29 16:49:07.887058] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.576 [2024-09-29 16:49:07.887095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.576 Running I/O for 5 seconds... 
00:41:07.576 [2024-09-29 16:49:07.904117] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.576 [2024-09-29 16:49:07.904159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.576 [2024-09-29 16:49:07.919214] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.576 [2024-09-29 16:49:07.919256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.576 [2024-09-29 16:49:07.935456] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.576 [2024-09-29 16:49:07.935495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.576 [2024-09-29 16:49:07.950404] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.576 [2024-09-29 16:49:07.950443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.576 [2024-09-29 16:49:07.965747] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.576 [2024-09-29 16:49:07.965782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.576 [2024-09-29 16:49:07.982249] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.576 [2024-09-29 16:49:07.982289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.576 [2024-09-29 16:49:07.998616] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.576 [2024-09-29 16:49:07.998667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.576 [2024-09-29 16:49:08.014644] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.576 [2024-09-29 16:49:08.014720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.576 [2024-09-29 16:49:08.031138] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.576 [2024-09-29 16:49:08.031177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.576 [2024-09-29 16:49:08.047622] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.576 [2024-09-29 16:49:08.047661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.576 [2024-09-29 16:49:08.064579] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.576 [2024-09-29 16:49:08.064619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.576 [2024-09-29 16:49:08.080768] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.576 [2024-09-29 16:49:08.080801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.576 [2024-09-29 16:49:08.096205] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.576 [2024-09-29 16:49:08.096245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.576 [2024-09-29 16:49:08.112386] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.576 [2024-09-29 16:49:08.112426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.576 [2024-09-29 16:49:08.128876] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.576 [2024-09-29 16:49:08.128912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.834 [2024-09-29 16:49:08.145809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.834 [2024-09-29 16:49:08.145843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.834 [2024-09-29 16:49:08.161561] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:07.834 [2024-09-29 16:49:08.161601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.834 [2024-09-29 16:49:08.177344] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.834 [2024-09-29 16:49:08.177383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.834 [2024-09-29 16:49:08.192784] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.834 [2024-09-29 16:49:08.192818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.834 [2024-09-29 16:49:08.207991] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.834 [2024-09-29 16:49:08.208041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.834 [2024-09-29 16:49:08.223337] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.834 [2024-09-29 16:49:08.223376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.834 [2024-09-29 16:49:08.237878] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.834 [2024-09-29 16:49:08.237912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.834 [2024-09-29 16:49:08.253492] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.834 [2024-09-29 16:49:08.253531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.834 [2024-09-29 16:49:08.269217] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.834 [2024-09-29 16:49:08.269256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.834 [2024-09-29 16:49:08.284616] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.834 
[2024-09-29 16:49:08.284655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.834 [2024-09-29 16:49:08.299617] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.834 [2024-09-29 16:49:08.299655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.834 [2024-09-29 16:49:08.315721] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.834 [2024-09-29 16:49:08.315767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.834 [2024-09-29 16:49:08.330135] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.834 [2024-09-29 16:49:08.330175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.834 [2024-09-29 16:49:08.347287] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.834 [2024-09-29 16:49:08.347325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.834 [2024-09-29 16:49:08.365650] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.834 [2024-09-29 16:49:08.365699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.834 [2024-09-29 16:49:08.379641] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:07.834 [2024-09-29 16:49:08.379691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.093 [2024-09-29 16:49:08.398206] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.093 [2024-09-29 16:49:08.398246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.093 [2024-09-29 16:49:08.414207] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.093 [2024-09-29 16:49:08.414247] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.093 [2024-09-29 16:49:08.429803] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.093 [2024-09-29 16:49:08.429838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.093 [2024-09-29 16:49:08.444728] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.093 [2024-09-29 16:49:08.444764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.093 [2024-09-29 16:49:08.459587] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.093 [2024-09-29 16:49:08.459626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.093 [2024-09-29 16:49:08.475494] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.093 [2024-09-29 16:49:08.475533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.093 [2024-09-29 16:49:08.489889] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.093 [2024-09-29 16:49:08.489923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.093 [2024-09-29 16:49:08.507915] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.093 [2024-09-29 16:49:08.507948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.093 [2024-09-29 16:49:08.523947] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.093 [2024-09-29 16:49:08.524007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:08.093 [2024-09-29 16:49:08.540427] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.093 [2024-09-29 16:49:08.540466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:41:08.093 [2024-09-29 16:49:08.556526] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:08.093 [2024-09-29 16:49:08.556565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the same pair of *ERROR* messages repeats at roughly 15 ms intervals through 2024-09-29 16:49:11.24, interleaved with periodic I/O throughput statistics:]
00:41:08.352 7970.00 IOPS, 62.27 MiB/s
00:41:09.394 7970.00 IOPS, 62.27 MiB/s
00:41:10.427 7950.33 IOPS, 62.11 MiB/s
00:41:10.943 [2024-09-29 16:49:11.260738] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.943 
[2024-09-29 16:49:11.260771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.943 [2024-09-29 16:49:11.277475] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.943 [2024-09-29 16:49:11.277514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.943 [2024-09-29 16:49:11.293686] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.943 [2024-09-29 16:49:11.293738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.943 [2024-09-29 16:49:11.309937] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.943 [2024-09-29 16:49:11.309992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.943 [2024-09-29 16:49:11.326110] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.943 [2024-09-29 16:49:11.326150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.944 [2024-09-29 16:49:11.342405] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.944 [2024-09-29 16:49:11.342444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.944 [2024-09-29 16:49:11.357655] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.944 [2024-09-29 16:49:11.357702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.944 [2024-09-29 16:49:11.374079] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.944 [2024-09-29 16:49:11.374118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.944 [2024-09-29 16:49:11.390004] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.944 [2024-09-29 16:49:11.390059] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.944 [2024-09-29 16:49:11.406244] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.944 [2024-09-29 16:49:11.406285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.944 [2024-09-29 16:49:11.421783] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.944 [2024-09-29 16:49:11.421818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.944 [2024-09-29 16:49:11.437410] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.944 [2024-09-29 16:49:11.437450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.944 [2024-09-29 16:49:11.452964] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.944 [2024-09-29 16:49:11.453004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.944 [2024-09-29 16:49:11.468192] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.944 [2024-09-29 16:49:11.468232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.944 [2024-09-29 16:49:11.484158] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.944 [2024-09-29 16:49:11.484197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.944 [2024-09-29 16:49:11.499692] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.944 [2024-09-29 16:49:11.499744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.202 [2024-09-29 16:49:11.514168] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.202 [2024-09-29 16:49:11.514207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:41:11.202 [2024-09-29 16:49:11.529877] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.202 [2024-09-29 16:49:11.529912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.202 [2024-09-29 16:49:11.545875] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.202 [2024-09-29 16:49:11.545910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.202 [2024-09-29 16:49:11.562437] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.202 [2024-09-29 16:49:11.562477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.202 [2024-09-29 16:49:11.578397] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.202 [2024-09-29 16:49:11.578436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.202 [2024-09-29 16:49:11.594180] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.202 [2024-09-29 16:49:11.594219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.202 [2024-09-29 16:49:11.609979] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.202 [2024-09-29 16:49:11.610032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.202 [2024-09-29 16:49:11.625806] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.202 [2024-09-29 16:49:11.625841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.202 [2024-09-29 16:49:11.641765] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.202 [2024-09-29 16:49:11.641800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.202 [2024-09-29 16:49:11.657843] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.202 [2024-09-29 16:49:11.657877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.202 [2024-09-29 16:49:11.673640] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.202 [2024-09-29 16:49:11.673687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.202 [2024-09-29 16:49:11.689836] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.202 [2024-09-29 16:49:11.689870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.202 [2024-09-29 16:49:11.704565] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.202 [2024-09-29 16:49:11.704604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.202 [2024-09-29 16:49:11.719783] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.202 [2024-09-29 16:49:11.719816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.202 [2024-09-29 16:49:11.733838] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.202 [2024-09-29 16:49:11.733874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.202 [2024-09-29 16:49:11.751145] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.202 [2024-09-29 16:49:11.751183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.460 [2024-09-29 16:49:11.766961] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.460 [2024-09-29 16:49:11.767012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.460 [2024-09-29 16:49:11.782921] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:11.460 [2024-09-29 16:49:11.782971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.460 [2024-09-29 16:49:11.797959] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.460 [2024-09-29 16:49:11.798011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.460 [2024-09-29 16:49:11.814666] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.460 [2024-09-29 16:49:11.814728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.460 [2024-09-29 16:49:11.830868] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.460 [2024-09-29 16:49:11.830902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.460 [2024-09-29 16:49:11.847193] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.460 [2024-09-29 16:49:11.847231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.460 [2024-09-29 16:49:11.863468] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.460 [2024-09-29 16:49:11.863507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.460 [2024-09-29 16:49:11.882306] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.460 [2024-09-29 16:49:11.882344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.460 [2024-09-29 16:49:11.896093] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.460 [2024-09-29 16:49:11.896132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.460 7969.75 IOPS, 62.26 MiB/s [2024-09-29 16:49:11.916024] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:41:11.460 [2024-09-29 16:49:11.916064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.460 [2024-09-29 16:49:11.929798] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.460 [2024-09-29 16:49:11.929832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.460 [2024-09-29 16:49:11.946592] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.460 [2024-09-29 16:49:11.946631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.460 [2024-09-29 16:49:11.962570] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.460 [2024-09-29 16:49:11.962610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.460 [2024-09-29 16:49:11.978901] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.460 [2024-09-29 16:49:11.978936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.460 [2024-09-29 16:49:11.994368] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.460 [2024-09-29 16:49:11.994406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.460 [2024-09-29 16:49:12.010486] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.460 [2024-09-29 16:49:12.010527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.719 [2024-09-29 16:49:12.027080] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.719 [2024-09-29 16:49:12.027121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.719 [2024-09-29 16:49:12.043477] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.719 [2024-09-29 
16:49:12.043518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.719 [2024-09-29 16:49:12.060371] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.719 [2024-09-29 16:49:12.060410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.719 [2024-09-29 16:49:12.075580] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.719 [2024-09-29 16:49:12.075616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.719 [2024-09-29 16:49:12.093379] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.719 [2024-09-29 16:49:12.093415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.719 [2024-09-29 16:49:12.106444] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.719 [2024-09-29 16:49:12.106478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.719 [2024-09-29 16:49:12.122582] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.719 [2024-09-29 16:49:12.122616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.719 [2024-09-29 16:49:12.137404] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.719 [2024-09-29 16:49:12.137445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.719 [2024-09-29 16:49:12.151474] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.719 [2024-09-29 16:49:12.151524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.719 [2024-09-29 16:49:12.166592] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.719 [2024-09-29 16:49:12.166627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:41:11.719 [2024-09-29 16:49:12.181080] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.719 [2024-09-29 16:49:12.181114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.719 [2024-09-29 16:49:12.195373] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.719 [2024-09-29 16:49:12.195407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.719 [2024-09-29 16:49:12.207876] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.719 [2024-09-29 16:49:12.207912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.719 [2024-09-29 16:49:12.222348] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.719 [2024-09-29 16:49:12.222381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.719 [2024-09-29 16:49:12.236729] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.719 [2024-09-29 16:49:12.236779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.719 [2024-09-29 16:49:12.250780] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.719 [2024-09-29 16:49:12.250816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.719 [2024-09-29 16:49:12.265911] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.719 [2024-09-29 16:49:12.265961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.719 [2024-09-29 16:49:12.280668] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.719 [2024-09-29 16:49:12.280718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.977 
[2024-09-29 16:49:12.295761] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.977 [2024-09-29 16:49:12.295798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.977 [2024-09-29 16:49:12.309475] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.977 [2024-09-29 16:49:12.309525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.977 [2024-09-29 16:49:12.326352] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.977 [2024-09-29 16:49:12.326386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.977 [2024-09-29 16:49:12.340978] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.977 [2024-09-29 16:49:12.341013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.977 [2024-09-29 16:49:12.355459] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.977 [2024-09-29 16:49:12.355493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.977 [2024-09-29 16:49:12.370492] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.977 [2024-09-29 16:49:12.370524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.977 [2024-09-29 16:49:12.385254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.977 [2024-09-29 16:49:12.385287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.977 [2024-09-29 16:49:12.399595] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.977 [2024-09-29 16:49:12.399631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.977 [2024-09-29 16:49:12.416354] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.977 [2024-09-29 16:49:12.416463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.977 [2024-09-29 16:49:12.429425] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.977 [2024-09-29 16:49:12.429460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.977 [2024-09-29 16:49:12.445809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.977 [2024-09-29 16:49:12.445845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.977 [2024-09-29 16:49:12.459879] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.977 [2024-09-29 16:49:12.459916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.977 [2024-09-29 16:49:12.474870] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.977 [2024-09-29 16:49:12.474907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.977 [2024-09-29 16:49:12.489778] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.977 [2024-09-29 16:49:12.489813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.977 [2024-09-29 16:49:12.504403] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.977 [2024-09-29 16:49:12.504436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.977 [2024-09-29 16:49:12.521614] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:11.977 [2024-09-29 16:49:12.521665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:11.977 [2024-09-29 16:49:12.534044] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:11.977 [2024-09-29 16:49:12.534082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.236 [2024-09-29 16:49:12.548349] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.236 [2024-09-29 16:49:12.548401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.236 [2024-09-29 16:49:12.563110] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.236 [2024-09-29 16:49:12.563145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.236 [2024-09-29 16:49:12.577914] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.236 [2024-09-29 16:49:12.577950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.236 [2024-09-29 16:49:12.593003] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.236 [2024-09-29 16:49:12.593051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.236 [2024-09-29 16:49:12.607791] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.236 [2024-09-29 16:49:12.607827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.236 [2024-09-29 16:49:12.621443] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.236 [2024-09-29 16:49:12.621476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.236 [2024-09-29 16:49:12.637473] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.236 [2024-09-29 16:49:12.637508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.236 [2024-09-29 16:49:12.651556] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.236 
[2024-09-29 16:49:12.651607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.236 [2024-09-29 16:49:12.668994] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.236 [2024-09-29 16:49:12.669030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.236 [2024-09-29 16:49:12.681816] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.236 [2024-09-29 16:49:12.681851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.236 [2024-09-29 16:49:12.698137] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.236 [2024-09-29 16:49:12.698180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.236 [2024-09-29 16:49:12.713110] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.236 [2024-09-29 16:49:12.713147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.236 [2024-09-29 16:49:12.727480] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.236 [2024-09-29 16:49:12.727516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.236 [2024-09-29 16:49:12.742652] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.236 [2024-09-29 16:49:12.742699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.236 [2024-09-29 16:49:12.757427] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.236 [2024-09-29 16:49:12.757461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.236 [2024-09-29 16:49:12.771799] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.236 [2024-09-29 16:49:12.771835] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.236 [2024-09-29 16:49:12.784907] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.236 [2024-09-29 16:49:12.784943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.494 [2024-09-29 16:49:12.801379] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.494 [2024-09-29 16:49:12.801416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.494 [2024-09-29 16:49:12.815968] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.495 [2024-09-29 16:49:12.816005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.495 [2024-09-29 16:49:12.829941] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.495 [2024-09-29 16:49:12.829977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.495 [2024-09-29 16:49:12.844142] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.495 [2024-09-29 16:49:12.844174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.495 [2024-09-29 16:49:12.857835] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.495 [2024-09-29 16:49:12.857871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.495 [2024-09-29 16:49:12.873788] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.495 [2024-09-29 16:49:12.873824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.495 [2024-09-29 16:49:12.888853] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.495 [2024-09-29 16:49:12.888889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace
00:41:12.495 [2024-09-29 16:49:12.904025] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:12.495 [2024-09-29 16:49:12.904061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:12.495 8078.60 IOPS, 63.11 MiB/s
00:41:12.495 [2024-09-29 16:49:12.917193] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:12.495 [2024-09-29 16:49:12.917229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:12.495
00:41:12.495 Latency(us)
00:41:12.495 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:41:12.495 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:41:12.495 Nvme1n1                     :       5.01    8083.24      63.15       0.00     0.00   15808.41    3835.07   25631.86
00:41:12.495 ===================================================================================================================
00:41:12.495 Total                       :               8083.24      63.15       0.00     0.00   15808.41    3835.07   25631.86
00:41:12.495 [2024-09-29 16:49:12.923029] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:12.495 [2024-09-29 16:49:12.923061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:12.495 [2024-09-29 16:49:12.931062] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:12.495 [2024-09-29 16:49:12.931108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:12.495 [2024-09-29 16:49:12.939015] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:12.495 [2024-09-29 16:49:12.939060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:12.495 [2024-09-29 16:49:12.947042] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:12.495 [2024-09-29 16:49:12.947071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace
00:41:12.495 [2024-09-29 16:49:12.955069] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:12.495 [2024-09-29 16:49:12.955097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:12.753 [2024-09-29 16:49:13.139045] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:12.753 [2024-09-29 16:49:13.139073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:12.753 [2024-09-29 16:49:13.147048] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.753 
[2024-09-29 16:49:13.147076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.753 [2024-09-29 16:49:13.155005] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.753 [2024-09-29 16:49:13.155032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.753 [2024-09-29 16:49:13.163045] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.753 [2024-09-29 16:49:13.163073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.753 [2024-09-29 16:49:13.171008] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.753 [2024-09-29 16:49:13.171034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.753 [2024-09-29 16:49:13.179027] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.753 [2024-09-29 16:49:13.179053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.753 [2024-09-29 16:49:13.187062] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.753 [2024-09-29 16:49:13.187089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.753 [2024-09-29 16:49:13.195017] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.753 [2024-09-29 16:49:13.195059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.753 [2024-09-29 16:49:13.203087] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.753 [2024-09-29 16:49:13.203116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.753 [2024-09-29 16:49:13.211180] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.753 [2024-09-29 16:49:13.211241] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.753 [2024-09-29 16:49:13.219093] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.753 [2024-09-29 16:49:13.219140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.753 [2024-09-29 16:49:13.227069] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.753 [2024-09-29 16:49:13.227097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.753 [2024-09-29 16:49:13.235028] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.753 [2024-09-29 16:49:13.235054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.753 [2024-09-29 16:49:13.243063] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.753 [2024-09-29 16:49:13.243091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.753 [2024-09-29 16:49:13.251030] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.753 [2024-09-29 16:49:13.251066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.753 [2024-09-29 16:49:13.259009] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.753 [2024-09-29 16:49:13.259051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.753 [2024-09-29 16:49:13.267184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.753 [2024-09-29 16:49:13.267253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.753 [2024-09-29 16:49:13.275192] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.753 [2024-09-29 16:49:13.275259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:41:12.753 [2024-09-29 16:49:13.283148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.753 [2024-09-29 16:49:13.283219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.753 [2024-09-29 16:49:13.291169] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.753 [2024-09-29 16:49:13.291223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.753 [2024-09-29 16:49:13.299038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.753 [2024-09-29 16:49:13.299065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.753 [2024-09-29 16:49:13.311048] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.753 [2024-09-29 16:49:13.311076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.012 [2024-09-29 16:49:13.319035] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.012 [2024-09-29 16:49:13.319080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.012 [2024-09-29 16:49:13.327014] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.012 [2024-09-29 16:49:13.327056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.012 [2024-09-29 16:49:13.335033] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.012 [2024-09-29 16:49:13.335061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.012 [2024-09-29 16:49:13.343041] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.012 [2024-09-29 16:49:13.343068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.012 [2024-09-29 16:49:13.351023] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.012 [2024-09-29 16:49:13.351050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.012 [2024-09-29 16:49:13.359031] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.012 [2024-09-29 16:49:13.359058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.012 [2024-09-29 16:49:13.367007] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.012 [2024-09-29 16:49:13.367048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.012 [2024-09-29 16:49:13.375048] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.012 [2024-09-29 16:49:13.375083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.012 [2024-09-29 16:49:13.383045] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.012 [2024-09-29 16:49:13.383072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.012 [2024-09-29 16:49:13.391006] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.012 [2024-09-29 16:49:13.391046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.012 [2024-09-29 16:49:13.399026] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.012 [2024-09-29 16:49:13.399052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.012 [2024-09-29 16:49:13.407041] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.012 [2024-09-29 16:49:13.407067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.012 [2024-09-29 16:49:13.415010] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:13.012 [2024-09-29 16:49:13.415036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.012 [2024-09-29 16:49:13.423039] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.012 [2024-09-29 16:49:13.423065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.012 [2024-09-29 16:49:13.431058] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.012 [2024-09-29 16:49:13.431108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.012 [2024-09-29 16:49:13.439158] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.012 [2024-09-29 16:49:13.439223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.012 [2024-09-29 16:49:13.447048] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.012 [2024-09-29 16:49:13.447076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.012 [2024-09-29 16:49:13.455011] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.012 [2024-09-29 16:49:13.455052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.012 [2024-09-29 16:49:13.463058] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.012 [2024-09-29 16:49:13.463085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.012 [2024-09-29 16:49:13.471047] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.012 [2024-09-29 16:49:13.471073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.013 [2024-09-29 16:49:13.479010] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.013 
[2024-09-29 16:49:13.479037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.013 [2024-09-29 16:49:13.487046] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.013 [2024-09-29 16:49:13.487072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.013 [2024-09-29 16:49:13.495036] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.013 [2024-09-29 16:49:13.495063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.013 [2024-09-29 16:49:13.503056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.013 [2024-09-29 16:49:13.503083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.013 [2024-09-29 16:49:13.511048] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.013 [2024-09-29 16:49:13.511075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.013 [2024-09-29 16:49:13.519007] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.013 [2024-09-29 16:49:13.519048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.013 [2024-09-29 16:49:13.527049] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.013 [2024-09-29 16:49:13.527083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.013 [2024-09-29 16:49:13.535070] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.013 [2024-09-29 16:49:13.535098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.013 [2024-09-29 16:49:13.543012] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.013 [2024-09-29 16:49:13.543055] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.013 [2024-09-29 16:49:13.551192] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.013 [2024-09-29 16:49:13.551255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.013 [2024-09-29 16:49:13.559036] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.013 [2024-09-29 16:49:13.559063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.013 [2024-09-29 16:49:13.567058] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.013 [2024-09-29 16:49:13.567091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.272 [2024-09-29 16:49:13.575050] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.272 [2024-09-29 16:49:13.575083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.272 [2024-09-29 16:49:13.583018] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.272 [2024-09-29 16:49:13.583050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.272 [2024-09-29 16:49:13.591040] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.272 [2024-09-29 16:49:13.591073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.272 [2024-09-29 16:49:13.599041] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.272 [2024-09-29 16:49:13.599073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.272 [2024-09-29 16:49:13.607019] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.272 [2024-09-29 16:49:13.607051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:41:13.272 [2024-09-29 16:49:13.615057] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.272 [2024-09-29 16:49:13.615089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.272 [2024-09-29 16:49:13.623001] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.272 [2024-09-29 16:49:13.623045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.272 [2024-09-29 16:49:13.631064] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.272 [2024-09-29 16:49:13.631096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.272 [2024-09-29 16:49:13.639109] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.272 [2024-09-29 16:49:13.639157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.272 [2024-09-29 16:49:13.647102] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.272 [2024-09-29 16:49:13.647152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.272 [2024-09-29 16:49:13.655047] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.272 [2024-09-29 16:49:13.655081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.272 [2024-09-29 16:49:13.663041] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.272 [2024-09-29 16:49:13.663074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.272 [2024-09-29 16:49:13.671021] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.272 [2024-09-29 16:49:13.671053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.272 [2024-09-29 16:49:13.679037] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.272 [2024-09-29 16:49:13.679077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.272 [2024-09-29 16:49:13.687012] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.272 [2024-09-29 16:49:13.687044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.272 [2024-09-29 16:49:13.695042] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.272 [2024-09-29 16:49:13.695074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.272 [2024-09-29 16:49:13.703037] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.272 [2024-09-29 16:49:13.703071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.272 [2024-09-29 16:49:13.711012] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.272 [2024-09-29 16:49:13.711044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.272 [2024-09-29 16:49:13.719039] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.272 [2024-09-29 16:49:13.719073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.272 [2024-09-29 16:49:13.727055] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.272 [2024-09-29 16:49:13.727088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.272 [2024-09-29 16:49:13.735018] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.272 [2024-09-29 16:49:13.735053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.272 [2024-09-29 16:49:13.743034] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:13.272 [2024-09-29 16:49:13.743067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.272 [2024-09-29 16:49:13.751113] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.272 [2024-09-29 16:49:13.751171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.272 [2024-09-29 16:49:13.759063] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.272 [2024-09-29 16:49:13.759097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.272 [2024-09-29 16:49:13.767062] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.272 [2024-09-29 16:49:13.767095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.272 [2024-09-29 16:49:13.775019] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.272 [2024-09-29 16:49:13.775060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.272 [2024-09-29 16:49:13.783046] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.272 [2024-09-29 16:49:13.783079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.272 [2024-09-29 16:49:13.791026] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.272 [2024-09-29 16:49:13.791055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.272 [2024-09-29 16:49:13.799133] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.272 [2024-09-29 16:49:13.799197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.272 [2024-09-29 16:49:13.807052] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.273 
[2024-09-29 16:49:13.807085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.273 [2024-09-29 16:49:13.815028] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.273 [2024-09-29 16:49:13.815060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.273 [2024-09-29 16:49:13.823055] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.273 [2024-09-29 16:49:13.823089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.273 [2024-09-29 16:49:13.831041] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.273 [2024-09-29 16:49:13.831082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.530 [2024-09-29 16:49:13.839018] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.530 [2024-09-29 16:49:13.839051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.530 [2024-09-29 16:49:13.847042] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.530 [2024-09-29 16:49:13.847074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.530 [2024-09-29 16:49:13.855034] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.530 [2024-09-29 16:49:13.855067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.530 [2024-09-29 16:49:13.863026] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.530 [2024-09-29 16:49:13.863059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.530 [2024-09-29 16:49:13.871071] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.530 [2024-09-29 16:49:13.871103] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.530 [2024-09-29 16:49:13.879012] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.530 [2024-09-29 16:49:13.879045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.530 [2024-09-29 16:49:13.887046] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.530 [2024-09-29 16:49:13.887079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.530 [2024-09-29 16:49:13.895035] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.530 [2024-09-29 16:49:13.895068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.530 [2024-09-29 16:49:13.903016] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.530 [2024-09-29 16:49:13.903048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.530 [2024-09-29 16:49:13.911041] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.530 [2024-09-29 16:49:13.911073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.530 [2024-09-29 16:49:13.919064] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.530 [2024-09-29 16:49:13.919098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.530 [2024-09-29 16:49:13.927028] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.530 [2024-09-29 16:49:13.927065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.530 [2024-09-29 16:49:13.935072] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.530 [2024-09-29 16:49:13.935105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:41:13.530 [2024-09-29 16:49:13.943020] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.530 [2024-09-29 16:49:13.943053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.530 [2024-09-29 16:49:13.951041] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.530 [2024-09-29 16:49:13.951074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3370886) - No such process 00:41:13.530 16:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3370886 00:41:13.530 16:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:13.530 16:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:13.530 16:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:13.530 16:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:13.530 16:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:41:13.530 16:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:13.530 16:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:13.530 delay0 00:41:13.530 16:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:13.530 16:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:41:13.530 16:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:13.530 16:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:13.530 16:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:13.530 16:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:41:13.788 [2024-09-29 16:49:14.119266] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:41:21.896 Initializing NVMe Controllers 00:41:21.896 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:21.896 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:41:21.896 Initialization complete. Launching workers. 
00:41:21.896 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 218, failed: 17867 00:41:21.896 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 17927, failed to submit 158 00:41:21.896 success 17870, unsuccessful 57, failed 0 00:41:21.896 16:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:41:21.896 16:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:41:21.896 16:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup 00:41:21.896 16:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:41:21.896 16:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:21.896 16:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:41:21.896 16:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:21.896 16:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:21.896 rmmod nvme_tcp 00:41:21.896 rmmod nvme_fabrics 00:41:21.896 rmmod nvme_keyring 00:41:21.896 16:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:21.896 16:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:41:21.896 16:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:41:21.896 16:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 3369355 ']' 00:41:21.896 16:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 3369355 00:41:21.896 16:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@950 -- # '[' -z 3369355 ']' 00:41:21.896 16:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 3369355 00:41:21.896 16:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:41:21.896 16:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:21.896 16:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3369355 00:41:21.896 16:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:41:21.896 16:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:41:21.896 16:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3369355' 00:41:21.896 killing process with pid 3369355 00:41:21.896 16:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 3369355 00:41:21.896 16:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 3369355 00:41:22.461 16:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:41:22.461 16:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:41:22.461 16:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:41:22.461 16:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:41:22.461 16:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-save 00:41:22.461 16:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 
00:41:22.461 16:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-restore 00:41:22.461 16:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:22.461 16:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:22.461 16:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:22.461 16:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:22.461 16:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:24.365 16:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:24.365 00:41:24.365 real 0m33.600s 00:41:24.365 user 0m48.434s 00:41:24.365 sys 0m10.517s 00:41:24.365 16:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:24.365 16:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:24.365 ************************************ 00:41:24.365 END TEST nvmf_zcopy 00:41:24.365 ************************************ 00:41:24.365 16:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:41:24.365 16:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:41:24.365 16:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:24.365 16:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:24.623 
************************************ 00:41:24.623 START TEST nvmf_nmic 00:41:24.623 ************************************ 00:41:24.623 16:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:41:24.623 * Looking for test storage... 00:41:24.623 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:24.623 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:41:24.623 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:41:24.623 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:41:24.623 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:41:24.623 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:24.623 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:24.623 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:24.623 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:41:24.623 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:41:24.623 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:41:24.623 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:41:24.623 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:41:24.623 16:49:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:41:24.623 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:41:24.623 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:24.623 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:41:24.623 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:41:24.623 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:24.623 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:24.623 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:41:24.623 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:41:24.623 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:24.623 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:41:24.623 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:41:24.623 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:41:24.623 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:41:24.623 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:24.623 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:41:24.623 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:41:24.623 16:49:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:24.623 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:24.623 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:41:24.623 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:24.623 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:41:24.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:24.623 --rc genhtml_branch_coverage=1 00:41:24.623 --rc genhtml_function_coverage=1 00:41:24.623 --rc genhtml_legend=1 00:41:24.623 --rc geninfo_all_blocks=1 00:41:24.623 --rc geninfo_unexecuted_blocks=1 00:41:24.623 00:41:24.623 ' 00:41:24.623 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:41:24.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:24.623 --rc genhtml_branch_coverage=1 00:41:24.623 --rc genhtml_function_coverage=1 00:41:24.623 --rc genhtml_legend=1 00:41:24.623 --rc geninfo_all_blocks=1 00:41:24.623 --rc geninfo_unexecuted_blocks=1 00:41:24.623 00:41:24.623 ' 00:41:24.623 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:41:24.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:24.623 --rc genhtml_branch_coverage=1 00:41:24.623 --rc genhtml_function_coverage=1 00:41:24.623 --rc genhtml_legend=1 00:41:24.623 --rc geninfo_all_blocks=1 00:41:24.623 --rc geninfo_unexecuted_blocks=1 00:41:24.623 00:41:24.623 ' 00:41:24.623 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:41:24.623 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:24.623 --rc genhtml_branch_coverage=1 00:41:24.623 --rc genhtml_function_coverage=1 00:41:24.623 --rc genhtml_legend=1 00:41:24.623 --rc geninfo_all_blocks=1 00:41:24.623 --rc geninfo_unexecuted_blocks=1 00:41:24.624 00:41:24.624 ' 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:24.624 16:49:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:24.624 16:49:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:41:24.624 16:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:27.150 16:49:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:41:27.150 16:49:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:27.150 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:27.150 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:41:27.150 16:49:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:27.150 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:27.151 Found net devices under 0000:0a:00.0: cvl_0_0 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:41:27.151 16:49:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:27.151 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # is_hw=yes 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:27.151 16:49:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:27.151 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:27.151 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.366 ms 00:41:27.151 00:41:27.151 --- 10.0.0.2 ping statistics --- 00:41:27.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:27.151 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:27.151 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:27.151 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:41:27.151 00:41:27.151 --- 10.0.0.1 ping statistics --- 00:41:27.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:27.151 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # return 0 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=3374610 
00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@506 -- # waitforlisten 3374610 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 3374610 ']' 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:27.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:27.151 16:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:27.151 [2024-09-29 16:49:27.535468] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:27.151 [2024-09-29 16:49:27.538322] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:41:27.151 [2024-09-29 16:49:27.538427] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:27.151 [2024-09-29 16:49:27.679644] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:27.409 [2024-09-29 16:49:27.942797] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:27.409 [2024-09-29 16:49:27.942871] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:27.409 [2024-09-29 16:49:27.942899] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:27.409 [2024-09-29 16:49:27.942921] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:27.409 [2024-09-29 16:49:27.942943] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:27.409 [2024-09-29 16:49:27.943075] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:41:27.409 [2024-09-29 16:49:27.943132] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:41:27.409 [2024-09-29 16:49:27.943174] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:41:27.409 [2024-09-29 16:49:27.943185] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:41:27.974 [2024-09-29 16:49:28.319987] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:27.974 [2024-09-29 16:49:28.321144] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:27.974 [2024-09-29 16:49:28.322456] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:41:27.974 [2024-09-29 16:49:28.323185] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:27.974 [2024-09-29 16:49:28.323491] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:41:27.974 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:27.974 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:41:27.974 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:41:27.974 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:27.974 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:27.974 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:27.974 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:27.974 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:27.975 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:27.975 [2024-09-29 16:49:28.520187] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:27.975 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:27.975 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:27.975 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:41:27.975 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:28.233 Malloc0 00:41:28.233 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.233 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:41:28.233 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.233 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:28.233 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.233 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:28.233 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.233 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:28.233 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.233 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:28.233 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.233 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:28.233 [2024-09-29 16:49:28.636453] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:28.233 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.233 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:41:28.233 test case1: single bdev can't be used in multiple subsystems 00:41:28.233 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:41:28.233 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.233 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:28.233 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.233 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:41:28.233 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.233 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:28.233 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.233 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:41:28.233 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:41:28.233 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.233 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:28.233 [2024-09-29 16:49:28.660128] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 
already claimed: type exclusive_write by module NVMe-oF Target 00:41:28.234 [2024-09-29 16:49:28.660201] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:41:28.234 [2024-09-29 16:49:28.660237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:28.234 request: 00:41:28.234 { 00:41:28.234 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:41:28.234 "namespace": { 00:41:28.234 "bdev_name": "Malloc0", 00:41:28.234 "no_auto_visible": false 00:41:28.234 }, 00:41:28.234 "method": "nvmf_subsystem_add_ns", 00:41:28.234 "req_id": 1 00:41:28.234 } 00:41:28.234 Got JSON-RPC error response 00:41:28.234 response: 00:41:28.234 { 00:41:28.234 "code": -32602, 00:41:28.234 "message": "Invalid parameters" 00:41:28.234 } 00:41:28.234 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:41:28.234 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:41:28.234 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:41:28.234 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:41:28.234 Adding namespace failed - expected result. 
00:41:28.234 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:41:28.234 test case2: host connect to nvmf target in multiple paths 00:41:28.234 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:41:28.234 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.234 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:28.234 [2024-09-29 16:49:28.668294] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:41:28.234 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.234 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:41:28.492 16:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:41:28.750 16:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:41:28.750 16:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:41:28.750 16:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:41:28.750 16:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:41:28.750 16:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:41:30.712 16:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:41:30.712 16:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:41:30.712 16:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:41:30.712 16:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:41:30.712 16:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:41:30.712 16:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:41:30.712 16:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:41:30.712 [global] 00:41:30.712 thread=1 00:41:30.712 invalidate=1 00:41:30.712 rw=write 00:41:30.712 time_based=1 00:41:30.712 runtime=1 00:41:30.712 ioengine=libaio 00:41:30.712 direct=1 00:41:30.712 bs=4096 00:41:30.712 iodepth=1 00:41:30.712 norandommap=0 00:41:30.712 numjobs=1 00:41:30.712 00:41:30.712 verify_dump=1 00:41:30.712 verify_backlog=512 00:41:30.712 verify_state_save=0 00:41:30.712 do_verify=1 00:41:30.712 verify=crc32c-intel 00:41:30.712 [job0] 00:41:30.712 filename=/dev/nvme0n1 00:41:30.712 Could not set queue depth (nvme0n1) 00:41:30.970 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:30.970 fio-3.35 00:41:30.970 Starting 1 thread 00:41:32.345 00:41:32.345 job0: (groupid=0, jobs=1): err= 0: pid=3375220: Sun Sep 29 
16:49:32 2024 00:41:32.345 read: IOPS=20, BW=81.4KiB/s (83.3kB/s)(84.0KiB/1032msec) 00:41:32.345 slat (nsec): min=6786, max=27602, avg=13940.43, stdev=3759.08 00:41:32.345 clat (usec): min=40654, max=41014, avg=40965.58, stdev=72.93 00:41:32.345 lat (usec): min=40661, max=41029, avg=40979.52, stdev=74.76 00:41:32.345 clat percentiles (usec): 00:41:32.345 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:41:32.345 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:32.345 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:32.345 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:32.345 | 99.99th=[41157] 00:41:32.345 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:41:32.345 slat (usec): min=7, max=32192, avg=72.02, stdev=1422.33 00:41:32.345 clat (usec): min=214, max=785, avg=259.92, stdev=31.87 00:41:32.345 lat (usec): min=222, max=32653, avg=331.93, stdev=1431.56 00:41:32.345 clat percentiles (usec): 00:41:32.345 | 1.00th=[ 221], 5.00th=[ 235], 10.00th=[ 243], 20.00th=[ 245], 00:41:32.345 | 30.00th=[ 249], 40.00th=[ 251], 50.00th=[ 258], 60.00th=[ 260], 00:41:32.345 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 285], 00:41:32.345 | 99.00th=[ 388], 99.50th=[ 396], 99.90th=[ 783], 99.95th=[ 783], 00:41:32.345 | 99.99th=[ 783] 00:41:32.345 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:41:32.345 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:32.345 lat (usec) : 250=33.02%, 500=62.85%, 1000=0.19% 00:41:32.345 lat (msec) : 50=3.94% 00:41:32.345 cpu : usr=0.48%, sys=0.48%, ctx=536, majf=0, minf=1 00:41:32.345 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:32.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:32.345 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:32.345 issued 
rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:32.345 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:32.345 00:41:32.345 Run status group 0 (all jobs): 00:41:32.345 READ: bw=81.4KiB/s (83.3kB/s), 81.4KiB/s-81.4KiB/s (83.3kB/s-83.3kB/s), io=84.0KiB (86.0kB), run=1032-1032msec 00:41:32.345 WRITE: bw=1984KiB/s (2032kB/s), 1984KiB/s-1984KiB/s (2032kB/s-2032kB/s), io=2048KiB (2097kB), run=1032-1032msec 00:41:32.345 00:41:32.345 Disk stats (read/write): 00:41:32.345 nvme0n1: ios=43/512, merge=0/0, ticks=1662/126, in_queue=1788, util=98.90% 00:41:32.345 16:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:41:32.345 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:41:32.345 16:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:41:32.345 16:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:41:32.345 16:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:41:32.346 16:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:32.346 16:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:41:32.346 16:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:32.346 16:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:41:32.346 16:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:41:32.346 16:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:41:32.346 16:49:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # nvmfcleanup 00:41:32.346 16:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:41:32.346 16:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:32.346 16:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:41:32.346 16:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:32.346 16:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:32.346 rmmod nvme_tcp 00:41:32.346 rmmod nvme_fabrics 00:41:32.346 rmmod nvme_keyring 00:41:32.604 16:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:32.604 16:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:41:32.604 16:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:41:32.604 16:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 3374610 ']' 00:41:32.604 16:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 3374610 00:41:32.604 16:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 3374610 ']' 00:41:32.604 16:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 3374610 00:41:32.604 16:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:41:32.604 16:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:32.604 16:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3374610 
00:41:32.604 16:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:32.604 16:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:32.604 16:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3374610' 00:41:32.604 killing process with pid 3374610 00:41:32.604 16:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 3374610 00:41:32.604 16:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 3374610 00:41:33.981 16:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:41:33.981 16:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:41:33.981 16:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:41:33.981 16:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:41:33.981 16:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-save 00:41:33.981 16:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:41:33.981 16:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-restore 00:41:34.238 16:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:34.238 16:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:34.238 16:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:34.238 16:49:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:34.238 16:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:36.142 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:36.142 00:41:36.142 real 0m11.638s 00:41:36.142 user 0m19.951s 00:41:36.142 sys 0m3.753s 00:41:36.142 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:36.142 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:36.142 ************************************ 00:41:36.142 END TEST nvmf_nmic 00:41:36.142 ************************************ 00:41:36.142 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:41:36.142 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:41:36.142 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:36.142 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:36.142 ************************************ 00:41:36.142 START TEST nvmf_fio_target 00:41:36.142 ************************************ 00:41:36.142 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:41:36.142 * Looking for test storage... 
00:41:36.142 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:36.142 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:41:36.142 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:41:36.142 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:36.402 
16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:41:36.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:36.402 --rc genhtml_branch_coverage=1 00:41:36.402 --rc genhtml_function_coverage=1 00:41:36.402 --rc genhtml_legend=1 00:41:36.402 --rc geninfo_all_blocks=1 00:41:36.402 --rc geninfo_unexecuted_blocks=1 00:41:36.402 00:41:36.402 ' 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:41:36.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:36.402 --rc genhtml_branch_coverage=1 00:41:36.402 --rc genhtml_function_coverage=1 00:41:36.402 --rc genhtml_legend=1 00:41:36.402 --rc geninfo_all_blocks=1 00:41:36.402 --rc geninfo_unexecuted_blocks=1 00:41:36.402 00:41:36.402 ' 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:41:36.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:36.402 --rc genhtml_branch_coverage=1 00:41:36.402 --rc genhtml_function_coverage=1 00:41:36.402 --rc genhtml_legend=1 00:41:36.402 --rc geninfo_all_blocks=1 00:41:36.402 --rc geninfo_unexecuted_blocks=1 00:41:36.402 00:41:36.402 ' 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:41:36.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:36.402 --rc genhtml_branch_coverage=1 00:41:36.402 --rc genhtml_function_coverage=1 00:41:36.402 --rc genhtml_legend=1 00:41:36.402 --rc geninfo_all_blocks=1 
00:41:36.402 --rc geninfo_unexecuted_blocks=1 00:41:36.402 00:41:36.402 ' 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:36.402 
16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:36.402 16:49:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:36.402 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:36.403 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:36.403 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:36.403 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:36.403 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:36.403 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:36.403 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:36.403 
16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:36.403 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:36.403 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:36.403 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:41:36.403 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:41:36.403 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:36.403 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:41:36.403 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:41:36.403 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:41:36.403 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:36.403 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:36.403 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:36.403 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:41:36.403 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:41:36.403 16:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:41:36.403 16:49:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:38.305 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:38.305 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:41:38.306 16:49:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 
]] 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:38.306 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:38.306 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:41:38.306 16:49:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:38.306 Found net devices under 0000:0a:00.0: cvl_0_0 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 
-- # net_devs+=("${pci_net_devs[@]}") 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:38.306 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # is_hw=yes 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:41:38.306 16:49:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip 
link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:38.306 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:38.306 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:38.306 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:41:38.306 00:41:38.306 --- 10.0.0.2 ping statistics --- 00:41:38.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:38.307 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:41:38.307 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:38.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:38.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:41:38.307 00:41:38.307 --- 10.0.0.1 ping statistics --- 00:41:38.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:38.307 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:41:38.307 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:38.307 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # return 0 00:41:38.307 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:41:38.307 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:38.307 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:41:38.307 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:41:38.307 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:38.307 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:41:38.307 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:41:38.307 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:41:38.307 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:41:38.307 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:38.307 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:38.307 16:49:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=3377438 00:41:38.307 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:41:38.307 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 3377438 00:41:38.307 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 3377438 ']' 00:41:38.307 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:38.307 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:38.307 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:38.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:38.307 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:38.307 16:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:38.566 [2024-09-29 16:49:38.897531] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:38.566 [2024-09-29 16:49:38.900059] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:41:38.566 [2024-09-29 16:49:38.900155] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:38.566 [2024-09-29 16:49:39.049114] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:38.824 [2024-09-29 16:49:39.315378] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:38.824 [2024-09-29 16:49:39.315455] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:38.824 [2024-09-29 16:49:39.315483] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:38.824 [2024-09-29 16:49:39.315505] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:38.824 [2024-09-29 16:49:39.315527] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:38.824 [2024-09-29 16:49:39.315694] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:41:38.824 [2024-09-29 16:49:39.315742] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:41:38.824 [2024-09-29 16:49:39.315769] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:41:38.824 [2024-09-29 16:49:39.315781] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:41:39.390 [2024-09-29 16:49:39.700169] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:39.390 [2024-09-29 16:49:39.701351] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:39.390 [2024-09-29 16:49:39.702564] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:41:39.390 [2024-09-29 16:49:39.703364] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:39.390 [2024-09-29 16:49:39.703716] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:41:39.390 16:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:39.390 16:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:41:39.390 16:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:41:39.390 16:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:39.390 16:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:39.390 16:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:39.390 16:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:41:39.648 [2024-09-29 16:49:40.188908] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:39.906 16:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:40.164 16:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:41:40.164 16:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 
00:41:40.422 16:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:41:40.422 16:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:40.988 16:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:41:40.988 16:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:41.246 16:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:41:41.246 16:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:41:41.504 16:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:41.762 16:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:41:41.762 16:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:42.328 16:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:41:42.328 16:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:42.586 16:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:41:42.586 16:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:41:42.843 16:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:41:43.409 16:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:41:43.409 16:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:43.409 16:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:41:43.409 16:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:41:43.667 16:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:43.925 [2024-09-29 16:49:44.457124] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:43.925 16:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:41:44.490 16:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:41:44.491 16:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:41:44.748 16:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:41:44.748 16:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:41:44.748 16:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:41:44.748 16:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:41:44.748 16:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:41:44.748 16:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:41:47.275 16:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:41:47.275 16:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:41:47.275 16:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:41:47.275 16:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:41:47.275 16:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:41:47.275 16:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1208 -- # return 0 00:41:47.275 16:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:41:47.275 [global] 00:41:47.275 thread=1 00:41:47.275 invalidate=1 00:41:47.275 rw=write 00:41:47.275 time_based=1 00:41:47.275 runtime=1 00:41:47.275 ioengine=libaio 00:41:47.275 direct=1 00:41:47.275 bs=4096 00:41:47.275 iodepth=1 00:41:47.275 norandommap=0 00:41:47.275 numjobs=1 00:41:47.275 00:41:47.275 verify_dump=1 00:41:47.275 verify_backlog=512 00:41:47.275 verify_state_save=0 00:41:47.275 do_verify=1 00:41:47.275 verify=crc32c-intel 00:41:47.275 [job0] 00:41:47.275 filename=/dev/nvme0n1 00:41:47.275 [job1] 00:41:47.275 filename=/dev/nvme0n2 00:41:47.275 [job2] 00:41:47.275 filename=/dev/nvme0n3 00:41:47.275 [job3] 00:41:47.275 filename=/dev/nvme0n4 00:41:47.275 Could not set queue depth (nvme0n1) 00:41:47.275 Could not set queue depth (nvme0n2) 00:41:47.275 Could not set queue depth (nvme0n3) 00:41:47.275 Could not set queue depth (nvme0n4) 00:41:47.275 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:47.275 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:47.275 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:47.275 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:47.275 fio-3.35 00:41:47.275 Starting 4 threads 00:41:48.209 00:41:48.209 job0: (groupid=0, jobs=1): err= 0: pid=3378627: Sun Sep 29 16:49:48 2024 00:41:48.209 read: IOPS=360, BW=1443KiB/s (1477kB/s)(1444KiB/1001msec) 00:41:48.209 slat (nsec): min=5788, max=35427, avg=8110.47, stdev=5187.35 00:41:48.209 clat (usec): min=298, max=41042, avg=2245.18, stdev=8466.11 00:41:48.209 lat (usec): min=305, 
max=41060, avg=2253.29, stdev=8470.01 00:41:48.209 clat percentiles (usec): 00:41:48.209 | 1.00th=[ 306], 5.00th=[ 318], 10.00th=[ 326], 20.00th=[ 330], 00:41:48.209 | 30.00th=[ 334], 40.00th=[ 338], 50.00th=[ 343], 60.00th=[ 351], 00:41:48.209 | 70.00th=[ 359], 80.00th=[ 375], 90.00th=[ 461], 95.00th=[ 570], 00:41:48.209 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:48.209 | 99.99th=[41157] 00:41:48.209 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:41:48.209 slat (nsec): min=9071, max=72299, avg=23978.98, stdev=9521.51 00:41:48.209 clat (usec): min=221, max=590, avg=332.65, stdev=59.50 00:41:48.209 lat (usec): min=238, max=632, avg=356.63, stdev=58.57 00:41:48.209 clat percentiles (usec): 00:41:48.209 | 1.00th=[ 237], 5.00th=[ 253], 10.00th=[ 269], 20.00th=[ 289], 00:41:48.209 | 30.00th=[ 297], 40.00th=[ 306], 50.00th=[ 318], 60.00th=[ 326], 00:41:48.209 | 70.00th=[ 355], 80.00th=[ 396], 90.00th=[ 424], 95.00th=[ 441], 00:41:48.209 | 99.00th=[ 478], 99.50th=[ 498], 99.90th=[ 594], 99.95th=[ 594], 00:41:48.209 | 99.99th=[ 594] 00:41:48.209 bw ( KiB/s): min= 4087, max= 4087, per=33.39%, avg=4087.00, stdev= 0.00, samples=1 00:41:48.209 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:41:48.209 lat (usec) : 250=2.18%, 500=95.07%, 750=0.69% 00:41:48.209 lat (msec) : 10=0.11%, 50=1.95% 00:41:48.209 cpu : usr=1.60%, sys=1.40%, ctx=875, majf=0, minf=1 00:41:48.209 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:48.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.209 issued rwts: total=361,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.209 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:48.209 job1: (groupid=0, jobs=1): err= 0: pid=3378628: Sun Sep 29 16:49:48 2024 00:41:48.209 read: IOPS=1226, BW=4907KiB/s 
(5025kB/s)(4912KiB/1001msec) 00:41:48.209 slat (nsec): min=5069, max=71560, avg=19864.46, stdev=11550.04 00:41:48.209 clat (usec): min=289, max=41026, avg=412.04, stdev=1161.41 00:41:48.209 lat (usec): min=301, max=41034, avg=431.91, stdev=1161.45 00:41:48.209 clat percentiles (usec): 00:41:48.209 | 1.00th=[ 306], 5.00th=[ 314], 10.00th=[ 322], 20.00th=[ 330], 00:41:48.209 | 30.00th=[ 338], 40.00th=[ 351], 50.00th=[ 367], 60.00th=[ 379], 00:41:48.209 | 70.00th=[ 400], 80.00th=[ 424], 90.00th=[ 465], 95.00th=[ 482], 00:41:48.209 | 99.00th=[ 578], 99.50th=[ 586], 99.90th=[ 848], 99.95th=[41157], 00:41:48.209 | 99.99th=[41157] 00:41:48.209 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:41:48.209 slat (nsec): min=6106, max=72110, avg=16523.93, stdev=9869.40 00:41:48.209 clat (usec): min=213, max=958, avg=280.43, stdev=62.15 00:41:48.209 lat (usec): min=221, max=973, avg=296.95, stdev=66.68 00:41:48.209 clat percentiles (usec): 00:41:48.209 | 1.00th=[ 221], 5.00th=[ 227], 10.00th=[ 233], 20.00th=[ 239], 00:41:48.209 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 258], 60.00th=[ 269], 00:41:48.209 | 70.00th=[ 285], 80.00th=[ 322], 90.00th=[ 379], 95.00th=[ 396], 00:41:48.209 | 99.00th=[ 445], 99.50th=[ 461], 99.90th=[ 848], 99.95th=[ 955], 00:41:48.209 | 99.99th=[ 955] 00:41:48.209 bw ( KiB/s): min= 7089, max= 7089, per=57.92%, avg=7089.00, stdev= 0.00, samples=1 00:41:48.209 iops : min= 1772, max= 1772, avg=1772.00, stdev= 0.00, samples=1 00:41:48.209 lat (usec) : 250=23.66%, 500=75.04%, 750=1.12%, 1000=0.14% 00:41:48.209 lat (msec) : 50=0.04% 00:41:48.209 cpu : usr=2.20%, sys=5.60%, ctx=2765, majf=0, minf=1 00:41:48.209 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:48.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.209 issued rwts: total=1228,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:41:48.209 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:48.209 job2: (groupid=0, jobs=1): err= 0: pid=3378629: Sun Sep 29 16:49:48 2024 00:41:48.209 read: IOPS=40, BW=164KiB/s (168kB/s)(164KiB/1001msec) 00:41:48.209 slat (nsec): min=6339, max=35954, avg=16379.56, stdev=10733.31 00:41:48.209 clat (usec): min=356, max=41023, avg=20213.02, stdev=20488.59 00:41:48.209 lat (usec): min=363, max=41041, avg=20229.40, stdev=20496.53 00:41:48.209 clat percentiles (usec): 00:41:48.209 | 1.00th=[ 359], 5.00th=[ 437], 10.00th=[ 445], 20.00th=[ 461], 00:41:48.209 | 30.00th=[ 474], 40.00th=[ 478], 50.00th=[ 570], 60.00th=[41157], 00:41:48.209 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:48.209 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:48.209 | 99.99th=[41157] 00:41:48.210 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:41:48.210 slat (nsec): min=8356, max=55394, avg=22883.57, stdev=8952.18 00:41:48.210 clat (usec): min=230, max=491, avg=305.26, stdev=55.17 00:41:48.210 lat (usec): min=247, max=524, avg=328.14, stdev=54.40 00:41:48.210 clat percentiles (usec): 00:41:48.210 | 1.00th=[ 249], 5.00th=[ 258], 10.00th=[ 265], 20.00th=[ 269], 00:41:48.210 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 289], 00:41:48.210 | 70.00th=[ 302], 80.00th=[ 338], 90.00th=[ 404], 95.00th=[ 429], 00:41:48.210 | 99.00th=[ 474], 99.50th=[ 482], 99.90th=[ 490], 99.95th=[ 490], 00:41:48.210 | 99.99th=[ 490] 00:41:48.210 bw ( KiB/s): min= 4087, max= 4087, per=33.39%, avg=4087.00, stdev= 0.00, samples=1 00:41:48.210 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:41:48.210 lat (usec) : 250=1.45%, 500=94.58%, 750=0.36% 00:41:48.210 lat (msec) : 50=3.62% 00:41:48.210 cpu : usr=1.40%, sys=0.90%, ctx=555, majf=0, minf=1 00:41:48.210 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:48.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.210 issued rwts: total=41,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.210 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:48.210 job3: (groupid=0, jobs=1): err= 0: pid=3378631: Sun Sep 29 16:49:48 2024 00:41:48.210 read: IOPS=21, BW=87.6KiB/s (89.8kB/s)(88.0KiB/1004msec) 00:41:48.210 slat (nsec): min=14137, max=37592, avg=26592.41, stdev=9822.62 00:41:48.210 clat (usec): min=424, max=42011, avg=37385.77, stdev=11964.50 00:41:48.210 lat (usec): min=461, max=42034, avg=37412.37, stdev=11961.10 00:41:48.210 clat percentiles (usec): 00:41:48.210 | 1.00th=[ 424], 5.00th=[ 437], 10.00th=[40633], 20.00th=[41157], 00:41:48.210 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:48.210 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:41:48.210 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:41:48.210 | 99.99th=[42206] 00:41:48.210 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:41:48.210 slat (nsec): min=9357, max=47476, avg=22558.78, stdev=8924.48 00:41:48.210 clat (usec): min=246, max=537, avg=323.55, stdev=54.23 00:41:48.210 lat (usec): min=257, max=581, avg=346.11, stdev=55.92 00:41:48.210 clat percentiles (usec): 00:41:48.210 | 1.00th=[ 255], 5.00th=[ 269], 10.00th=[ 273], 20.00th=[ 285], 00:41:48.210 | 30.00th=[ 293], 40.00th=[ 302], 50.00th=[ 310], 60.00th=[ 318], 00:41:48.210 | 70.00th=[ 326], 80.00th=[ 347], 90.00th=[ 412], 95.00th=[ 449], 00:41:48.210 | 99.00th=[ 506], 99.50th=[ 523], 99.90th=[ 537], 99.95th=[ 537], 00:41:48.210 | 99.99th=[ 537] 00:41:48.210 bw ( KiB/s): min= 4096, max= 4096, per=33.47%, avg=4096.00, stdev= 0.00, samples=1 00:41:48.210 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:48.210 lat (usec) : 250=0.75%, 500=94.38%, 750=1.12% 00:41:48.210 lat (msec) : 50=3.75% 00:41:48.210 
cpu : usr=1.00%, sys=1.20%, ctx=535, majf=0, minf=1 00:41:48.210 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:48.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.210 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.210 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:48.210 00:41:48.210 Run status group 0 (all jobs): 00:41:48.210 READ: bw=6582KiB/s (6740kB/s), 87.6KiB/s-4907KiB/s (89.8kB/s-5025kB/s), io=6608KiB (6767kB), run=1001-1004msec 00:41:48.210 WRITE: bw=12.0MiB/s (12.5MB/s), 2040KiB/s-6138KiB/s (2089kB/s-6285kB/s), io=12.0MiB (12.6MB), run=1001-1004msec 00:41:48.210 00:41:48.210 Disk stats (read/write): 00:41:48.210 nvme0n1: ios=95/512, merge=0/0, ticks=1527/166, in_queue=1693, util=85.77% 00:41:48.210 nvme0n2: ios=1073/1418, merge=0/0, ticks=538/392, in_queue=930, util=89.84% 00:41:48.210 nvme0n3: ios=94/512, merge=0/0, ticks=1568/143, in_queue=1711, util=93.64% 00:41:48.210 nvme0n4: ios=77/512, merge=0/0, ticks=873/168, in_queue=1041, util=96.01% 00:41:48.210 16:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:41:48.210 [global] 00:41:48.210 thread=1 00:41:48.210 invalidate=1 00:41:48.210 rw=randwrite 00:41:48.210 time_based=1 00:41:48.210 runtime=1 00:41:48.210 ioengine=libaio 00:41:48.210 direct=1 00:41:48.210 bs=4096 00:41:48.210 iodepth=1 00:41:48.210 norandommap=0 00:41:48.210 numjobs=1 00:41:48.210 00:41:48.210 verify_dump=1 00:41:48.210 verify_backlog=512 00:41:48.210 verify_state_save=0 00:41:48.210 do_verify=1 00:41:48.210 verify=crc32c-intel 00:41:48.210 [job0] 00:41:48.210 filename=/dev/nvme0n1 00:41:48.210 [job1] 00:41:48.210 filename=/dev/nvme0n2 00:41:48.210 [job2] 00:41:48.210 filename=/dev/nvme0n3 
00:41:48.210 [job3] 00:41:48.210 filename=/dev/nvme0n4 00:41:48.210 Could not set queue depth (nvme0n1) 00:41:48.210 Could not set queue depth (nvme0n2) 00:41:48.210 Could not set queue depth (nvme0n3) 00:41:48.210 Could not set queue depth (nvme0n4) 00:41:48.468 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:48.468 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:48.468 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:48.468 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:48.468 fio-3.35 00:41:48.468 Starting 4 threads 00:41:49.840 00:41:49.840 job0: (groupid=0, jobs=1): err= 0: pid=3378862: Sun Sep 29 16:49:50 2024 00:41:49.840 read: IOPS=19, BW=79.1KiB/s (80.9kB/s)(80.0KiB/1012msec) 00:41:49.840 slat (nsec): min=12620, max=34227, avg=18424.10, stdev=8668.62 00:41:49.840 clat (usec): min=40676, max=42003, avg=41007.46, stdev=244.95 00:41:49.840 lat (usec): min=40689, max=42018, avg=41025.89, stdev=244.23 00:41:49.840 clat percentiles (usec): 00:41:49.840 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:41:49.840 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:49.840 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:49.840 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:41:49.840 | 99.99th=[42206] 00:41:49.840 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:41:49.840 slat (nsec): min=5849, max=31208, avg=12614.38, stdev=4250.36 00:41:49.840 clat (usec): min=224, max=1048, avg=356.81, stdev=71.98 00:41:49.840 lat (usec): min=234, max=1056, avg=369.43, stdev=70.98 00:41:49.840 clat percentiles (usec): 00:41:49.840 | 1.00th=[ 235], 5.00th=[ 251], 10.00th=[ 260], 20.00th=[ 297], 
00:41:49.840 | 30.00th=[ 322], 40.00th=[ 343], 50.00th=[ 371], 60.00th=[ 383], 00:41:49.840 | 70.00th=[ 388], 80.00th=[ 396], 90.00th=[ 433], 95.00th=[ 465], 00:41:49.840 | 99.00th=[ 529], 99.50th=[ 545], 99.90th=[ 1045], 99.95th=[ 1045], 00:41:49.840 | 99.99th=[ 1045] 00:41:49.840 bw ( KiB/s): min= 4096, max= 4096, per=46.68%, avg=4096.00, stdev= 0.00, samples=1 00:41:49.840 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:49.840 lat (usec) : 250=4.51%, 500=89.47%, 750=2.07% 00:41:49.840 lat (msec) : 2=0.19%, 50=3.76% 00:41:49.840 cpu : usr=0.10%, sys=0.89%, ctx=533, majf=0, minf=1 00:41:49.840 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:49.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.840 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.840 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:49.840 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:49.840 job1: (groupid=0, jobs=1): err= 0: pid=3378863: Sun Sep 29 16:49:50 2024 00:41:49.840 read: IOPS=120, BW=480KiB/s (492kB/s)(496KiB/1033msec) 00:41:49.840 slat (nsec): min=7229, max=58148, avg=24128.60, stdev=9497.83 00:41:49.840 clat (usec): min=298, max=41164, avg=7244.25, stdev=15301.92 00:41:49.840 lat (usec): min=311, max=41180, avg=7268.37, stdev=15299.02 00:41:49.840 clat percentiles (usec): 00:41:49.840 | 1.00th=[ 302], 5.00th=[ 306], 10.00th=[ 314], 20.00th=[ 326], 00:41:49.840 | 30.00th=[ 330], 40.00th=[ 343], 50.00th=[ 379], 60.00th=[ 404], 00:41:49.840 | 70.00th=[ 416], 80.00th=[ 457], 90.00th=[41157], 95.00th=[41157], 00:41:49.840 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:49.840 | 99.99th=[41157] 00:41:49.840 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:41:49.840 slat (nsec): min=6328, max=31714, avg=8717.97, stdev=4034.15 00:41:49.840 clat (usec): min=206, max=459, avg=244.01, 
stdev=26.00 00:41:49.840 lat (usec): min=213, max=468, avg=252.73, stdev=26.81 00:41:49.840 clat percentiles (usec): 00:41:49.840 | 1.00th=[ 208], 5.00th=[ 217], 10.00th=[ 223], 20.00th=[ 227], 00:41:49.840 | 30.00th=[ 231], 40.00th=[ 237], 50.00th=[ 239], 60.00th=[ 245], 00:41:49.840 | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 269], 95.00th=[ 277], 00:41:49.841 | 99.00th=[ 334], 99.50th=[ 424], 99.90th=[ 461], 99.95th=[ 461], 00:41:49.841 | 99.99th=[ 461] 00:41:49.841 bw ( KiB/s): min= 4096, max= 4096, per=46.68%, avg=4096.00, stdev= 0.00, samples=1 00:41:49.841 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:49.841 lat (usec) : 250=55.97%, 500=40.72% 00:41:49.841 lat (msec) : 50=3.30% 00:41:49.841 cpu : usr=0.19%, sys=0.78%, ctx=637, majf=0, minf=2 00:41:49.841 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:49.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.841 issued rwts: total=124,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:49.841 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:49.841 job2: (groupid=0, jobs=1): err= 0: pid=3378864: Sun Sep 29 16:49:50 2024 00:41:49.841 read: IOPS=368, BW=1475KiB/s (1510kB/s)(1476KiB/1001msec) 00:41:49.841 slat (nsec): min=5406, max=36362, avg=16399.23, stdev=6447.99 00:41:49.841 clat (usec): min=326, max=41194, avg=2167.01, stdev=8278.29 00:41:49.841 lat (usec): min=343, max=41229, avg=2183.41, stdev=8278.42 00:41:49.841 clat percentiles (usec): 00:41:49.841 | 1.00th=[ 330], 5.00th=[ 338], 10.00th=[ 338], 20.00th=[ 347], 00:41:49.841 | 30.00th=[ 359], 40.00th=[ 392], 50.00th=[ 408], 60.00th=[ 429], 00:41:49.841 | 70.00th=[ 449], 80.00th=[ 461], 90.00th=[ 494], 95.00th=[ 553], 00:41:49.841 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:49.841 | 99.99th=[41157] 00:41:49.841 write: IOPS=511, BW=2046KiB/s 
(2095kB/s)(2048KiB/1001msec); 0 zone resets 00:41:49.841 slat (nsec): min=6965, max=37656, avg=14335.67, stdev=4614.76 00:41:49.841 clat (usec): min=230, max=1646, avg=358.61, stdev=91.72 00:41:49.841 lat (usec): min=246, max=1668, avg=372.95, stdev=91.32 00:41:49.841 clat percentiles (usec): 00:41:49.841 | 1.00th=[ 235], 5.00th=[ 245], 10.00th=[ 255], 20.00th=[ 281], 00:41:49.841 | 30.00th=[ 314], 40.00th=[ 351], 50.00th=[ 383], 60.00th=[ 388], 00:41:49.841 | 70.00th=[ 392], 80.00th=[ 396], 90.00th=[ 412], 95.00th=[ 469], 00:41:49.841 | 99.00th=[ 570], 99.50th=[ 603], 99.90th=[ 1647], 99.95th=[ 1647], 00:41:49.841 | 99.99th=[ 1647] 00:41:49.841 bw ( KiB/s): min= 4096, max= 4096, per=46.68%, avg=4096.00, stdev= 0.00, samples=1 00:41:49.841 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:49.841 lat (usec) : 250=4.20%, 500=90.24%, 750=3.52%, 1000=0.11% 00:41:49.841 lat (msec) : 2=0.11%, 50=1.82% 00:41:49.841 cpu : usr=0.60%, sys=1.70%, ctx=882, majf=0, minf=1 00:41:49.841 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:49.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.841 issued rwts: total=369,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:49.841 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:49.841 job3: (groupid=0, jobs=1): err= 0: pid=3378865: Sun Sep 29 16:49:50 2024 00:41:49.841 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:41:49.841 slat (nsec): min=4728, max=57464, avg=14228.73, stdev=6137.31 00:41:49.841 clat (usec): min=272, max=41232, avg=1492.76, stdev=6633.66 00:41:49.841 lat (usec): min=277, max=41265, avg=1506.99, stdev=6634.00 00:41:49.841 clat percentiles (usec): 00:41:49.841 | 1.00th=[ 285], 5.00th=[ 297], 10.00th=[ 310], 20.00th=[ 326], 00:41:49.841 | 30.00th=[ 347], 40.00th=[ 363], 50.00th=[ 383], 60.00th=[ 388], 00:41:49.841 | 70.00th=[ 
392], 80.00th=[ 408], 90.00th=[ 486], 95.00th=[ 562], 00:41:49.841 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:49.841 | 99.99th=[41157] 00:41:49.841 write: IOPS=729, BW=2917KiB/s (2987kB/s)(2920KiB/1001msec); 0 zone resets 00:41:49.841 slat (nsec): min=6355, max=51028, avg=15285.61, stdev=7522.06 00:41:49.841 clat (usec): min=198, max=1447, avg=290.86, stdev=77.71 00:41:49.841 lat (usec): min=205, max=1465, avg=306.15, stdev=78.77 00:41:49.841 clat percentiles (usec): 00:41:49.841 | 1.00th=[ 208], 5.00th=[ 219], 10.00th=[ 227], 20.00th=[ 243], 00:41:49.841 | 30.00th=[ 251], 40.00th=[ 262], 50.00th=[ 273], 60.00th=[ 281], 00:41:49.841 | 70.00th=[ 293], 80.00th=[ 326], 90.00th=[ 388], 95.00th=[ 420], 00:41:49.841 | 99.00th=[ 519], 99.50th=[ 529], 99.90th=[ 1450], 99.95th=[ 1450], 00:41:49.841 | 99.99th=[ 1450] 00:41:49.841 bw ( KiB/s): min= 4096, max= 4096, per=46.68%, avg=4096.00, stdev= 0.00, samples=1 00:41:49.841 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:49.841 lat (usec) : 250=16.51%, 500=78.50%, 750=3.38%, 1000=0.32% 00:41:49.841 lat (msec) : 2=0.16%, 50=1.13% 00:41:49.841 cpu : usr=0.90%, sys=2.30%, ctx=1244, majf=0, minf=1 00:41:49.841 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:49.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.841 issued rwts: total=512,730,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:49.841 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:49.841 00:41:49.841 Run status group 0 (all jobs): 00:41:49.841 READ: bw=3969KiB/s (4064kB/s), 79.1KiB/s-2046KiB/s (80.9kB/s-2095kB/s), io=4100KiB (4198kB), run=1001-1033msec 00:41:49.841 WRITE: bw=8774KiB/s (8985kB/s), 1983KiB/s-2917KiB/s (2030kB/s-2987kB/s), io=9064KiB (9282kB), run=1001-1033msec 00:41:49.841 00:41:49.841 Disk stats (read/write): 00:41:49.841 nvme0n1: 
ios=66/512, merge=0/0, ticks=691/182, in_queue=873, util=87.37% 00:41:49.841 nvme0n2: ios=146/512, merge=0/0, ticks=964/126, in_queue=1090, util=90.25% 00:41:49.841 nvme0n3: ios=291/512, merge=0/0, ticks=725/177, in_queue=902, util=95.21% 00:41:49.841 nvme0n4: ios=473/512, merge=0/0, ticks=827/152, in_queue=979, util=95.81% 00:41:49.841 16:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:41:49.841 [global] 00:41:49.841 thread=1 00:41:49.841 invalidate=1 00:41:49.841 rw=write 00:41:49.841 time_based=1 00:41:49.841 runtime=1 00:41:49.841 ioengine=libaio 00:41:49.841 direct=1 00:41:49.841 bs=4096 00:41:49.841 iodepth=128 00:41:49.841 norandommap=0 00:41:49.841 numjobs=1 00:41:49.841 00:41:49.841 verify_dump=1 00:41:49.841 verify_backlog=512 00:41:49.841 verify_state_save=0 00:41:49.841 do_verify=1 00:41:49.841 verify=crc32c-intel 00:41:49.841 [job0] 00:41:49.841 filename=/dev/nvme0n1 00:41:49.841 [job1] 00:41:49.841 filename=/dev/nvme0n2 00:41:49.841 [job2] 00:41:49.841 filename=/dev/nvme0n3 00:41:49.841 [job3] 00:41:49.841 filename=/dev/nvme0n4 00:41:49.841 Could not set queue depth (nvme0n1) 00:41:49.841 Could not set queue depth (nvme0n2) 00:41:49.841 Could not set queue depth (nvme0n3) 00:41:49.841 Could not set queue depth (nvme0n4) 00:41:49.841 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:49.841 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:49.841 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:49.841 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:49.841 fio-3.35 00:41:49.841 Starting 4 threads 00:41:51.215 00:41:51.215 job0: (groupid=0, jobs=1): err= 0: 
pid=3379085: Sun Sep 29 16:49:51 2024 00:41:51.215 read: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec) 00:41:51.215 slat (usec): min=2, max=18540, avg=119.39, stdev=868.72 00:41:51.215 clat (usec): min=1582, max=40971, avg=15185.49, stdev=5820.70 00:41:51.215 lat (usec): min=1703, max=40986, avg=15304.88, stdev=5884.23 00:41:51.215 clat percentiles (usec): 00:41:51.215 | 1.00th=[ 2147], 5.00th=[ 6783], 10.00th=[10028], 20.00th=[12387], 00:41:51.215 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13173], 60.00th=[14222], 00:41:51.215 | 70.00th=[16450], 80.00th=[19268], 90.00th=[22938], 95.00th=[26346], 00:41:51.215 | 99.00th=[33424], 99.50th=[34866], 99.90th=[40109], 99.95th=[40109], 00:41:51.215 | 99.99th=[41157] 00:41:51.215 write: IOPS=4570, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec); 0 zone resets 00:41:51.215 slat (usec): min=3, max=14236, avg=83.04, stdev=642.81 00:41:51.215 clat (usec): min=407, max=43498, avg=14268.03, stdev=7380.81 00:41:51.215 lat (usec): min=451, max=43514, avg=14351.07, stdev=7433.78 00:41:51.215 clat percentiles (usec): 00:41:51.215 | 1.00th=[ 1729], 5.00th=[ 3326], 10.00th=[ 4293], 20.00th=[ 9110], 00:41:51.215 | 30.00th=[11600], 40.00th=[12649], 50.00th=[13435], 60.00th=[13960], 00:41:51.215 | 70.00th=[14091], 80.00th=[20841], 90.00th=[26608], 95.00th=[28181], 00:41:51.215 | 99.00th=[34341], 99.50th=[34341], 99.90th=[36963], 99.95th=[40633], 00:41:51.215 | 99.99th=[43254] 00:41:51.215 bw ( KiB/s): min=15352, max=20480, per=31.97%, avg=17916.00, stdev=3626.04, samples=2 00:41:51.215 iops : min= 3838, max= 5120, avg=4479.00, stdev=906.51, samples=2 00:41:51.215 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.15% 00:41:51.215 lat (msec) : 2=0.70%, 4=5.16%, 10=10.28%, 20=63.89%, 50=19.75% 00:41:51.215 cpu : usr=3.47%, sys=5.36%, ctx=392, majf=0, minf=1 00:41:51.215 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:41:51.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:51.215 complete 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:51.215 issued rwts: total=4096,4607,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:51.215 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:51.215 job1: (groupid=0, jobs=1): err= 0: pid=3379086: Sun Sep 29 16:49:51 2024 00:41:51.215 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:41:51.215 slat (usec): min=2, max=8553, avg=116.34, stdev=541.80 00:41:51.215 clat (usec): min=6883, max=29317, avg=14769.67, stdev=3110.08 00:41:51.215 lat (usec): min=6894, max=29320, avg=14886.01, stdev=3112.58 00:41:51.215 clat percentiles (usec): 00:41:51.215 | 1.00th=[10945], 5.00th=[11994], 10.00th=[12518], 20.00th=[13042], 00:41:51.215 | 30.00th=[13435], 40.00th=[13698], 50.00th=[13960], 60.00th=[14091], 00:41:51.215 | 70.00th=[14746], 80.00th=[15795], 90.00th=[17433], 95.00th=[21103], 00:41:51.215 | 99.00th=[28967], 99.50th=[29230], 99.90th=[29230], 99.95th=[29230], 00:41:51.215 | 99.99th=[29230] 00:41:51.215 write: IOPS=4135, BW=16.2MiB/s (16.9MB/s)(16.2MiB/1004msec); 0 zone resets 00:41:51.215 slat (usec): min=3, max=19910, avg=120.47, stdev=700.78 00:41:51.215 clat (usec): min=2785, max=44003, avg=15919.55, stdev=5813.88 00:41:51.215 lat (usec): min=3604, max=49801, avg=16040.03, stdev=5851.50 00:41:51.215 clat percentiles (usec): 00:41:51.215 | 1.00th=[ 6783], 5.00th=[11600], 10.00th=[13042], 20.00th=[13435], 00:41:51.215 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14091], 60.00th=[14353], 00:41:51.215 | 70.00th=[14615], 80.00th=[15926], 90.00th=[22152], 95.00th=[31589], 00:41:51.215 | 99.00th=[36963], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:41:51.215 | 99.99th=[43779] 00:41:51.216 bw ( KiB/s): min=16384, max=16416, per=29.27%, avg=16400.00, stdev=22.63, samples=2 00:41:51.216 iops : min= 4096, max= 4104, avg=4100.00, stdev= 5.66, samples=2 00:41:51.216 lat (msec) : 4=0.13%, 10=1.25%, 20=89.71%, 50=8.91% 00:41:51.216 cpu : usr=2.99%, sys=4.39%, ctx=539, majf=0, minf=1 
00:41:51.216 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:41:51.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:51.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:51.216 issued rwts: total=4096,4152,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:51.216 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:51.216 job2: (groupid=0, jobs=1): err= 0: pid=3379090: Sun Sep 29 16:49:51 2024 00:41:51.216 read: IOPS=2096, BW=8386KiB/s (8588kB/s)(8420KiB/1004msec) 00:41:51.216 slat (usec): min=2, max=15653, avg=197.24, stdev=1232.14 00:41:51.216 clat (usec): min=2966, max=55815, avg=24138.16, stdev=8913.24 00:41:51.216 lat (usec): min=8277, max=60531, avg=24335.40, stdev=9004.15 00:41:51.216 clat percentiles (usec): 00:41:51.216 | 1.00th=[ 9765], 5.00th=[11863], 10.00th=[13829], 20.00th=[15008], 00:41:51.216 | 30.00th=[17433], 40.00th=[20055], 50.00th=[25560], 60.00th=[27919], 00:41:51.216 | 70.00th=[29230], 80.00th=[30016], 90.00th=[35914], 95.00th=[38011], 00:41:51.216 | 99.00th=[49021], 99.50th=[52167], 99.90th=[55837], 99.95th=[55837], 00:41:51.216 | 99.99th=[55837] 00:41:51.216 write: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec); 0 zone resets 00:41:51.216 slat (usec): min=3, max=20015, avg=222.92, stdev=1177.00 00:41:51.216 clat (usec): min=1314, max=120595, avg=29636.24, stdev=24569.93 00:41:51.216 lat (usec): min=1333, max=120632, avg=29859.15, stdev=24742.15 00:41:51.216 clat percentiles (msec): 00:41:51.216 | 1.00th=[ 6], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 15], 00:41:51.216 | 30.00th=[ 16], 40.00th=[ 17], 50.00th=[ 18], 60.00th=[ 29], 00:41:51.216 | 70.00th=[ 30], 80.00th=[ 41], 90.00th=[ 61], 95.00th=[ 90], 00:41:51.216 | 99.00th=[ 116], 99.50th=[ 121], 99.90th=[ 122], 99.95th=[ 122], 00:41:51.216 | 99.99th=[ 122] 00:41:51.216 bw ( KiB/s): min= 8192, max=11720, per=17.77%, avg=9956.00, stdev=2494.67, samples=2 00:41:51.216 iops : min= 2048, max= 
2930, avg=2489.00, stdev=623.67, samples=2 00:41:51.216 lat (msec) : 2=0.26%, 4=0.02%, 10=2.81%, 20=44.78%, 50=42.23% 00:41:51.216 lat (msec) : 100=7.76%, 250=2.14% 00:41:51.216 cpu : usr=1.99%, sys=2.69%, ctx=304, majf=0, minf=1 00:41:51.216 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:41:51.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:51.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:51.216 issued rwts: total=2105,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:51.216 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:51.216 job3: (groupid=0, jobs=1): err= 0: pid=3379094: Sun Sep 29 16:49:51 2024 00:41:51.216 read: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec) 00:41:51.216 slat (usec): min=3, max=19780, avg=188.71, stdev=1147.88 00:41:51.216 clat (usec): min=12105, max=60511, avg=24420.63, stdev=8059.44 00:41:51.216 lat (usec): min=12123, max=60529, avg=24609.34, stdev=8157.95 00:41:51.216 clat percentiles (usec): 00:41:51.216 | 1.00th=[13304], 5.00th=[14091], 10.00th=[15008], 20.00th=[17433], 00:41:51.216 | 30.00th=[18482], 40.00th=[20579], 50.00th=[22152], 60.00th=[26084], 00:41:51.216 | 70.00th=[29230], 80.00th=[30802], 90.00th=[35390], 95.00th=[38011], 00:41:51.216 | 99.00th=[52167], 99.50th=[52167], 99.90th=[53740], 99.95th=[54789], 00:41:51.216 | 99.99th=[60556] 00:41:51.216 write: IOPS=2778, BW=10.9MiB/s (11.4MB/s)(10.9MiB/1008msec); 0 zone resets 00:41:51.216 slat (usec): min=4, max=10679, avg=177.67, stdev=1068.24 00:41:51.216 clat (usec): min=3609, max=44067, avg=23128.56, stdev=5483.37 00:41:51.216 lat (usec): min=10588, max=44076, avg=23306.24, stdev=5584.29 00:41:51.216 clat percentiles (usec): 00:41:51.216 | 1.00th=[12387], 5.00th=[14746], 10.00th=[15926], 20.00th=[18482], 00:41:51.216 | 30.00th=[20055], 40.00th=[20579], 50.00th=[22676], 60.00th=[25035], 00:41:51.216 | 70.00th=[26870], 80.00th=[28705], 90.00th=[29492], 
95.00th=[30278], 00:41:51.216 | 99.00th=[37487], 99.50th=[38011], 99.90th=[44303], 99.95th=[44303], 00:41:51.216 | 99.99th=[44303] 00:41:51.216 bw ( KiB/s): min= 9096, max=12288, per=19.08%, avg=10692.00, stdev=2257.08, samples=2 00:41:51.216 iops : min= 2274, max= 3072, avg=2673.00, stdev=564.27, samples=2 00:41:51.216 lat (msec) : 4=0.02%, 20=32.53%, 50=66.61%, 100=0.84% 00:41:51.216 cpu : usr=3.08%, sys=4.57%, ctx=211, majf=0, minf=1 00:41:51.216 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:41:51.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:51.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:51.216 issued rwts: total=2560,2801,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:51.216 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:51.216 00:41:51.216 Run status group 0 (all jobs): 00:41:51.216 READ: bw=49.8MiB/s (52.2MB/s), 8386KiB/s-15.9MiB/s (8588kB/s-16.7MB/s), io=50.2MiB (52.7MB), run=1004-1008msec 00:41:51.216 WRITE: bw=54.7MiB/s (57.4MB/s), 9.96MiB/s-17.9MiB/s (10.4MB/s-18.7MB/s), io=55.2MiB (57.8MB), run=1004-1008msec 00:41:51.216 00:41:51.216 Disk stats (read/write): 00:41:51.216 nvme0n1: ios=3793/4096, merge=0/0, ticks=47636/44841, in_queue=92477, util=97.80% 00:41:51.216 nvme0n2: ios=3339/3584, merge=0/0, ticks=14645/18019, in_queue=32664, util=97.66% 00:41:51.216 nvme0n3: ios=1642/2048, merge=0/0, ticks=14990/26834, in_queue=41824, util=97.50% 00:41:51.216 nvme0n4: ios=2069/2453, merge=0/0, ticks=19971/19955, in_queue=39926, util=97.69% 00:41:51.216 16:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:41:51.216 [global] 00:41:51.216 thread=1 00:41:51.216 invalidate=1 00:41:51.216 rw=randwrite 00:41:51.216 time_based=1 00:41:51.216 runtime=1 00:41:51.216 ioengine=libaio 00:41:51.216 direct=1 
00:41:51.216 bs=4096 00:41:51.216 iodepth=128 00:41:51.216 norandommap=0 00:41:51.216 numjobs=1 00:41:51.216 00:41:51.216 verify_dump=1 00:41:51.216 verify_backlog=512 00:41:51.216 verify_state_save=0 00:41:51.216 do_verify=1 00:41:51.216 verify=crc32c-intel 00:41:51.216 [job0] 00:41:51.216 filename=/dev/nvme0n1 00:41:51.216 [job1] 00:41:51.216 filename=/dev/nvme0n2 00:41:51.216 [job2] 00:41:51.216 filename=/dev/nvme0n3 00:41:51.216 [job3] 00:41:51.216 filename=/dev/nvme0n4 00:41:51.216 Could not set queue depth (nvme0n1) 00:41:51.216 Could not set queue depth (nvme0n2) 00:41:51.216 Could not set queue depth (nvme0n3) 00:41:51.216 Could not set queue depth (nvme0n4) 00:41:51.474 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:51.474 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:51.474 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:51.474 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:51.474 fio-3.35 00:41:51.474 Starting 4 threads 00:41:52.851 00:41:52.851 job0: (groupid=0, jobs=1): err= 0: pid=3379437: Sun Sep 29 16:49:52 2024 00:41:52.851 read: IOPS=4000, BW=15.6MiB/s (16.4MB/s)(16.0MiB/1024msec) 00:41:52.851 slat (usec): min=2, max=11946, avg=94.25, stdev=717.55 00:41:52.851 clat (usec): min=3483, max=43694, avg=13591.20, stdev=5015.67 00:41:52.851 lat (usec): min=3495, max=43707, avg=13685.46, stdev=5049.94 00:41:52.851 clat percentiles (usec): 00:41:52.851 | 1.00th=[ 4948], 5.00th=[ 7439], 10.00th=[ 8717], 20.00th=[10683], 00:41:52.851 | 30.00th=[11731], 40.00th=[12256], 50.00th=[12518], 60.00th=[12780], 00:41:52.851 | 70.00th=[14353], 80.00th=[15795], 90.00th=[19268], 95.00th=[22676], 00:41:52.851 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34341], 99.95th=[41157], 00:41:52.851 | 
99.99th=[43779] 00:41:52.851 write: IOPS=4497, BW=17.6MiB/s (18.4MB/s)(18.0MiB/1024msec); 0 zone resets 00:41:52.851 slat (usec): min=3, max=29400, avg=108.02, stdev=729.17 00:41:52.851 clat (usec): min=816, max=83275, avg=16038.53, stdev=11700.87 00:41:52.851 lat (usec): min=825, max=83285, avg=16146.55, stdev=11775.96 00:41:52.851 clat percentiles (usec): 00:41:52.851 | 1.00th=[ 4359], 5.00th=[ 6456], 10.00th=[ 8291], 20.00th=[10814], 00:41:52.851 | 30.00th=[12256], 40.00th=[12911], 50.00th=[13698], 60.00th=[13829], 00:41:52.851 | 70.00th=[14091], 80.00th=[15664], 90.00th=[24773], 95.00th=[39584], 00:41:52.851 | 99.00th=[74974], 99.50th=[79168], 99.90th=[83362], 99.95th=[83362], 00:41:52.851 | 99.99th=[83362] 00:41:52.851 bw ( KiB/s): min=15344, max=20480, per=32.64%, avg=17912.00, stdev=3631.70, samples=2 00:41:52.851 iops : min= 3836, max= 5120, avg=4478.00, stdev=907.93, samples=2 00:41:52.851 lat (usec) : 1000=0.06% 00:41:52.851 lat (msec) : 4=0.54%, 10=14.65%, 20=74.15%, 50=8.90%, 100=1.70% 00:41:52.851 cpu : usr=5.08%, sys=8.60%, ctx=457, majf=0, minf=2 00:41:52.851 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:41:52.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:52.851 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:52.851 issued rwts: total=4096,4605,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:52.851 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:52.851 job1: (groupid=0, jobs=1): err= 0: pid=3379438: Sun Sep 29 16:49:52 2024 00:41:52.851 read: IOPS=4000, BW=15.6MiB/s (16.4MB/s)(16.0MiB/1024msec) 00:41:52.851 slat (usec): min=2, max=13441, avg=103.21, stdev=785.85 00:41:52.851 clat (usec): min=6517, max=28221, avg=13609.36, stdev=3499.63 00:41:52.851 lat (usec): min=6529, max=28238, avg=13712.57, stdev=3543.58 00:41:52.851 clat percentiles (usec): 00:41:52.851 | 1.00th=[ 7504], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[11076], 00:41:52.851 | 
30.00th=[11863], 40.00th=[12256], 50.00th=[12649], 60.00th=[13304], 00:41:52.851 | 70.00th=[14222], 80.00th=[16909], 90.00th=[19268], 95.00th=[20317], 00:41:52.851 | 99.00th=[23200], 99.50th=[23987], 99.90th=[25035], 99.95th=[25035], 00:41:52.851 | 99.99th=[28181] 00:41:52.851 write: IOPS=4412, BW=17.2MiB/s (18.1MB/s)(17.6MiB/1024msec); 0 zone resets 00:41:52.851 slat (usec): min=3, max=25333, avg=117.87, stdev=940.88 00:41:52.851 clat (usec): min=4686, max=48376, avg=16408.71, stdev=7350.33 00:41:52.851 lat (usec): min=4692, max=48393, avg=16526.58, stdev=7397.70 00:41:52.851 clat percentiles (usec): 00:41:52.851 | 1.00th=[ 6587], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10945], 00:41:52.851 | 30.00th=[12256], 40.00th=[13566], 50.00th=[14484], 60.00th=[14877], 00:41:52.851 | 70.00th=[17433], 80.00th=[19006], 90.00th=[26346], 95.00th=[28705], 00:41:52.851 | 99.00th=[42206], 99.50th=[44303], 99.90th=[47449], 99.95th=[47449], 00:41:52.851 | 99.99th=[48497] 00:41:52.852 bw ( KiB/s): min=16416, max=18736, per=32.03%, avg=17576.00, stdev=1640.49, samples=2 00:41:52.852 iops : min= 4104, max= 4684, avg=4394.00, stdev=410.12, samples=2 00:41:52.852 lat (msec) : 10=11.12%, 20=75.67%, 50=13.21% 00:41:52.852 cpu : usr=5.57%, sys=8.21%, ctx=295, majf=0, minf=1 00:41:52.852 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:41:52.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:52.852 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:52.852 issued rwts: total=4096,4518,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:52.852 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:52.852 job2: (groupid=0, jobs=1): err= 0: pid=3379439: Sun Sep 29 16:49:52 2024 00:41:52.852 read: IOPS=1497, BW=5988KiB/s (6132kB/s)(6144KiB/1026msec) 00:41:52.852 slat (usec): min=2, max=25624, avg=278.74, stdev=2082.64 00:41:52.852 clat (msec): min=10, max=102, avg=35.25, stdev=19.36 00:41:52.852 lat (msec): 
min=10, max=102, avg=35.53, stdev=19.51 00:41:52.852 clat percentiles (msec): 00:41:52.852 | 1.00th=[ 13], 5.00th=[ 14], 10.00th=[ 16], 20.00th=[ 18], 00:41:52.852 | 30.00th=[ 26], 40.00th=[ 29], 50.00th=[ 30], 60.00th=[ 33], 00:41:52.852 | 70.00th=[ 43], 80.00th=[ 47], 90.00th=[ 55], 95.00th=[ 86], 00:41:52.852 | 99.00th=[ 100], 99.50th=[ 102], 99.90th=[ 103], 99.95th=[ 103], 00:41:52.852 | 99.99th=[ 103] 00:41:52.852 write: IOPS=1969, BW=7879KiB/s (8068kB/s)(8084KiB/1026msec); 0 zone resets 00:41:52.852 slat (usec): min=3, max=23851, avg=267.39, stdev=1667.87 00:41:52.852 clat (msec): min=6, max=159, avg=37.65, stdev=26.42 00:41:52.852 lat (msec): min=6, max=159, avg=37.92, stdev=26.57 00:41:52.852 clat percentiles (msec): 00:41:52.852 | 1.00th=[ 11], 5.00th=[ 18], 10.00th=[ 21], 20.00th=[ 23], 00:41:52.852 | 30.00th=[ 28], 40.00th=[ 29], 50.00th=[ 30], 60.00th=[ 34], 00:41:52.852 | 70.00th=[ 34], 80.00th=[ 39], 90.00th=[ 65], 95.00th=[ 106], 00:41:52.852 | 99.00th=[ 148], 99.50th=[ 159], 99.90th=[ 161], 99.95th=[ 161], 00:41:52.852 | 99.99th=[ 161] 00:41:52.852 bw ( KiB/s): min= 6960, max= 8192, per=13.80%, avg=7576.00, stdev=871.16, samples=2 00:41:52.852 iops : min= 1740, max= 2048, avg=1894.00, stdev=217.79, samples=2 00:41:52.852 lat (msec) : 10=0.62%, 20=13.83%, 50=71.30%, 100=10.94%, 250=3.32% 00:41:52.852 cpu : usr=2.73%, sys=2.63%, ctx=163, majf=0, minf=1 00:41:52.852 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:41:52.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:52.852 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:52.852 issued rwts: total=1536,2021,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:52.852 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:52.852 job3: (groupid=0, jobs=1): err= 0: pid=3379440: Sun Sep 29 16:49:52 2024 00:41:52.852 read: IOPS=2495, BW=9981KiB/s (10.2MB/s)(10.0MiB/1026msec) 00:41:52.852 slat (usec): min=3, max=22808, 
avg=161.97, stdev=1257.53 00:41:52.852 clat (usec): min=8518, max=52206, avg=21339.89, stdev=9036.25 00:41:52.852 lat (usec): min=8524, max=52269, avg=21501.87, stdev=9115.60 00:41:52.852 clat percentiles (usec): 00:41:52.852 | 1.00th=[ 8586], 5.00th=[10028], 10.00th=[11207], 20.00th=[13566], 00:41:52.852 | 30.00th=[14877], 40.00th=[16909], 50.00th=[19006], 60.00th=[21890], 00:41:52.852 | 70.00th=[25560], 80.00th=[28705], 90.00th=[34866], 95.00th=[40633], 00:41:52.852 | 99.00th=[45876], 99.50th=[45876], 99.90th=[49021], 99.95th=[51119], 00:41:52.852 | 99.99th=[52167] 00:41:52.852 write: IOPS=2858, BW=11.2MiB/s (11.7MB/s)(11.5MiB/1026msec); 0 zone resets 00:41:52.852 slat (usec): min=4, max=25331, avg=190.15, stdev=1388.34 00:41:52.852 clat (usec): min=7863, max=72704, avg=25765.87, stdev=12500.33 00:41:52.852 lat (usec): min=7872, max=72723, avg=25956.02, stdev=12588.51 00:41:52.852 clat percentiles (usec): 00:41:52.852 | 1.00th=[ 9110], 5.00th=[10028], 10.00th=[10814], 20.00th=[14877], 00:41:52.852 | 30.00th=[16712], 40.00th=[21627], 50.00th=[23987], 60.00th=[27395], 00:41:52.852 | 70.00th=[30540], 80.00th=[33424], 90.00th=[40633], 95.00th=[48497], 00:41:52.852 | 99.00th=[68682], 99.50th=[70779], 99.90th=[72877], 99.95th=[72877], 00:41:52.852 | 99.99th=[72877] 00:41:52.852 bw ( KiB/s): min=10704, max=11744, per=20.45%, avg=11224.00, stdev=735.39, samples=2 00:41:52.852 iops : min= 2676, max= 2936, avg=2806.00, stdev=183.85, samples=2 00:41:52.852 lat (msec) : 10=5.21%, 20=38.07%, 50=54.11%, 100=2.62% 00:41:52.852 cpu : usr=3.90%, sys=5.07%, ctx=185, majf=0, minf=1 00:41:52.852 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:41:52.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:52.852 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:52.852 issued rwts: total=2560,2933,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:52.852 latency : target=0, window=0, percentile=100.00%, depth=128 
00:41:52.852 00:41:52.852 Run status group 0 (all jobs): 00:41:52.852 READ: bw=46.8MiB/s (49.1MB/s), 5988KiB/s-15.6MiB/s (6132kB/s-16.4MB/s), io=48.0MiB (50.3MB), run=1024-1026msec 00:41:52.852 WRITE: bw=53.6MiB/s (56.2MB/s), 7879KiB/s-17.6MiB/s (8068kB/s-18.4MB/s), io=55.0MiB (57.7MB), run=1024-1026msec 00:41:52.852 00:41:52.852 Disk stats (read/write): 00:41:52.852 nvme0n1: ios=4063/4096, merge=0/0, ticks=51396/51277, in_queue=102673, util=96.69% 00:41:52.852 nvme0n2: ios=3633/3783, merge=0/0, ticks=46749/56860, in_queue=103609, util=88.43% 00:41:52.852 nvme0n3: ios=1593/1582, merge=0/0, ticks=53209/46546, in_queue=99755, util=91.78% 00:41:52.852 nvme0n4: ios=2102/2231, merge=0/0, ticks=46563/58525, in_queue=105088, util=96.86% 00:41:52.852 16:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:41:52.852 16:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3379572 00:41:52.852 16:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:41:52.852 16:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:41:52.852 [global] 00:41:52.852 thread=1 00:41:52.852 invalidate=1 00:41:52.852 rw=read 00:41:52.852 time_based=1 00:41:52.852 runtime=10 00:41:52.852 ioengine=libaio 00:41:52.852 direct=1 00:41:52.852 bs=4096 00:41:52.852 iodepth=1 00:41:52.852 norandommap=1 00:41:52.852 numjobs=1 00:41:52.852 00:41:52.852 [job0] 00:41:52.852 filename=/dev/nvme0n1 00:41:52.852 [job1] 00:41:52.852 filename=/dev/nvme0n2 00:41:52.852 [job2] 00:41:52.852 filename=/dev/nvme0n3 00:41:52.852 [job3] 00:41:52.852 filename=/dev/nvme0n4 00:41:52.852 Could not set queue depth (nvme0n1) 00:41:52.852 Could not set queue depth (nvme0n2) 00:41:52.852 Could not set queue depth (nvme0n3) 00:41:52.852 Could not set queue depth 
(nvme0n4) 00:41:52.852 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:52.852 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:52.852 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:52.852 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:52.852 fio-3.35 00:41:52.852 Starting 4 threads 00:41:56.134 16:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:41:56.134 16:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:41:56.134 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=16429056, buflen=4096 00:41:56.134 fio: pid=3379672, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:41:56.134 16:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:56.134 16:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:41:56.134 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=839680, buflen=4096 00:41:56.134 fio: pid=3379671, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:41:56.394 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=26173440, buflen=4096 00:41:56.394 fio: pid=3379669, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:41:56.655 16:49:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:56.655 16:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:41:56.966 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=38977536, buflen=4096 00:41:56.966 fio: pid=3379670, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:41:56.966 00:41:56.966 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3379669: Sun Sep 29 16:49:57 2024 00:41:56.966 read: IOPS=1814, BW=7255KiB/s (7429kB/s)(25.0MiB/3523msec) 00:41:56.966 slat (usec): min=4, max=14409, avg=17.24, stdev=247.12 00:41:56.966 clat (usec): min=286, max=41286, avg=527.51, stdev=2434.19 00:41:56.966 lat (usec): min=293, max=41294, avg=544.75, stdev=2446.66 00:41:56.966 clat percentiles (usec): 00:41:56.966 | 1.00th=[ 293], 5.00th=[ 302], 10.00th=[ 310], 20.00th=[ 322], 00:41:56.966 | 30.00th=[ 326], 40.00th=[ 330], 50.00th=[ 334], 60.00th=[ 351], 00:41:56.966 | 70.00th=[ 383], 80.00th=[ 474], 90.00th=[ 523], 95.00th=[ 578], 00:41:56.966 | 99.00th=[ 693], 99.50th=[ 824], 99.90th=[41157], 99.95th=[41157], 00:41:56.966 | 99.99th=[41157] 00:41:56.966 bw ( KiB/s): min= 104, max=11344, per=31.57%, avg=6592.00, stdev=4720.49, samples=6 00:41:56.966 iops : min= 26, max= 2836, avg=1648.00, stdev=1180.12, samples=6 00:41:56.966 lat (usec) : 500=85.56%, 750=13.68%, 1000=0.34% 00:41:56.966 lat (msec) : 2=0.05%, 50=0.36% 00:41:56.966 cpu : usr=1.50%, sys=2.95%, ctx=6397, majf=0, minf=2 00:41:56.966 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:56.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:56.966 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:41:56.966 issued rwts: total=6391,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:56.966 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:56.966 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3379670: Sun Sep 29 16:49:57 2024 00:41:56.966 read: IOPS=2468, BW=9874KiB/s (10.1MB/s)(37.2MiB/3855msec) 00:41:56.966 slat (usec): min=4, max=29520, avg=16.01, stdev=367.16 00:41:56.966 clat (usec): min=263, max=41814, avg=384.11, stdev=1321.73 00:41:56.966 lat (usec): min=268, max=41819, avg=400.12, stdev=1372.13 00:41:56.966 clat percentiles (usec): 00:41:56.966 | 1.00th=[ 273], 5.00th=[ 281], 10.00th=[ 289], 20.00th=[ 297], 00:41:56.966 | 30.00th=[ 310], 40.00th=[ 318], 50.00th=[ 326], 60.00th=[ 334], 00:41:56.966 | 70.00th=[ 355], 80.00th=[ 375], 90.00th=[ 416], 95.00th=[ 445], 00:41:56.966 | 99.00th=[ 529], 99.50th=[ 603], 99.90th=[40633], 99.95th=[41157], 00:41:56.966 | 99.99th=[41681] 00:41:56.966 bw ( KiB/s): min= 7312, max=12498, per=46.47%, avg=9702.00, stdev=1794.67, samples=7 00:41:56.966 iops : min= 1828, max= 3124, avg=2425.43, stdev=448.54, samples=7 00:41:56.966 lat (usec) : 500=98.10%, 750=1.49%, 1000=0.28% 00:41:56.966 lat (msec) : 2=0.01%, 50=0.11% 00:41:56.966 cpu : usr=1.38%, sys=3.48%, ctx=9521, majf=0, minf=1 00:41:56.966 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:56.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:56.966 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:56.966 issued rwts: total=9517,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:56.966 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:56.966 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3379671: Sun Sep 29 16:49:57 2024 00:41:56.966 read: IOPS=63, BW=253KiB/s (259kB/s)(820KiB/3241msec) 00:41:56.966 slat (usec): min=5, max=9891, avg=66.06, 
stdev=687.99 00:41:56.966 clat (usec): min=351, max=42143, avg=15626.27, stdev=19624.45 00:41:56.966 lat (usec): min=358, max=52035, avg=15692.58, stdev=19702.16 00:41:56.966 clat percentiles (usec): 00:41:56.966 | 1.00th=[ 359], 5.00th=[ 412], 10.00th=[ 429], 20.00th=[ 453], 00:41:56.966 | 30.00th=[ 490], 40.00th=[ 523], 50.00th=[ 562], 60.00th=[ 603], 00:41:56.966 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:56.966 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:41:56.966 | 99.99th=[42206] 00:41:56.966 bw ( KiB/s): min= 96, max= 808, per=1.21%, avg=252.00, stdev=278.59, samples=6 00:41:56.966 iops : min= 24, max= 202, avg=63.00, stdev=69.65, samples=6 00:41:56.966 lat (usec) : 500=33.98%, 750=28.16% 00:41:56.966 lat (msec) : 20=0.49%, 50=36.89% 00:41:56.966 cpu : usr=0.06%, sys=0.09%, ctx=210, majf=0, minf=1 00:41:56.966 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:56.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:56.966 complete : 0=0.5%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:56.966 issued rwts: total=206,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:56.966 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:56.966 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3379672: Sun Sep 29 16:49:57 2024 00:41:56.966 read: IOPS=1360, BW=5439KiB/s (5569kB/s)(15.7MiB/2950msec) 00:41:56.966 slat (nsec): min=4414, max=74562, avg=17301.79, stdev=10428.62 00:41:56.966 clat (usec): min=264, max=41354, avg=708.61, stdev=3379.51 00:41:56.966 lat (usec): min=278, max=41367, avg=725.91, stdev=3379.68 00:41:56.966 clat percentiles (usec): 00:41:56.966 | 1.00th=[ 306], 5.00th=[ 322], 10.00th=[ 334], 20.00th=[ 367], 00:41:56.966 | 30.00th=[ 383], 40.00th=[ 396], 50.00th=[ 412], 60.00th=[ 441], 00:41:56.967 | 70.00th=[ 461], 80.00th=[ 486], 90.00th=[ 529], 95.00th=[ 570], 
00:41:56.967 | 99.00th=[ 676], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:56.967 | 99.99th=[41157] 00:41:56.967 bw ( KiB/s): min= 96, max= 9016, per=30.65%, avg=6400.00, stdev=3686.72, samples=5 00:41:56.967 iops : min= 24, max= 2254, avg=1600.00, stdev=921.68, samples=5 00:41:56.967 lat (usec) : 500=84.50%, 750=14.73%, 1000=0.05% 00:41:56.967 lat (msec) : 50=0.70% 00:41:56.967 cpu : usr=0.88%, sys=2.75%, ctx=4013, majf=0, minf=2 00:41:56.967 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:56.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:56.967 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:56.967 issued rwts: total=4012,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:56.967 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:56.967 00:41:56.967 Run status group 0 (all jobs): 00:41:56.967 READ: bw=20.4MiB/s (21.4MB/s), 253KiB/s-9874KiB/s (259kB/s-10.1MB/s), io=78.6MiB (82.4MB), run=2950-3855msec 00:41:56.967 00:41:56.967 Disk stats (read/write): 00:41:56.967 nvme0n1: ios=6024/0, merge=0/0, ticks=3918/0, in_queue=3918, util=99.23% 00:41:56.967 nvme0n2: ios=9517/0, merge=0/0, ticks=3522/0, in_queue=3522, util=94.98% 00:41:56.967 nvme0n3: ios=252/0, merge=0/0, ticks=4173/0, in_queue=4173, util=99.59% 00:41:56.967 nvme0n4: ios=4061/0, merge=0/0, ticks=3854/0, in_queue=3854, util=99.83% 00:41:56.967 16:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:56.967 16:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:41:57.249 16:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:57.249 16:49:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:41:57.508 16:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:57.508 16:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:41:57.766 16:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:57.766 16:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:41:58.332 16:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:58.332 16:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:41:58.590 16:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:41:58.590 16:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3379572 00:41:58.590 16:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:41:58.590 16:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:41:59.525 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:41:59.525 16:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect 
SPDKISFASTANDAWESOME 00:41:59.525 16:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:41:59.525 16:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:41:59.525 16:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:59.525 16:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:41:59.525 16:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:59.525 16:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:41:59.525 16:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:41:59.525 16:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:41:59.525 nvmf hotplug test: fio failed as expected 00:41:59.525 16:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:59.783 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:41:59.783 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:41:59.783 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:41:59.783 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:41:59.784 16:50:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:41:59.784 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:41:59.784 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:41:59.784 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:59.784 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:41:59.784 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:59.784 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:59.784 rmmod nvme_tcp 00:41:59.784 rmmod nvme_fabrics 00:41:59.784 rmmod nvme_keyring 00:41:59.784 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:59.784 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:41:59.784 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:41:59.784 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 3377438 ']' 00:41:59.784 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 3377438 00:41:59.784 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 3377438 ']' 00:41:59.784 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 3377438 00:41:59.784 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:41:59.784 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:59.784 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3377438 00:41:59.784 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:59.784 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:59.784 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3377438' 00:41:59.784 killing process with pid 3377438 00:41:59.784 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 3377438 00:41:59.784 16:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 3377438 00:42:01.159 16:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:42:01.159 16:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:42:01.159 16:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:42:01.159 16:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:42:01.159 16:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-save 00:42:01.159 16:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:42:01.159 16:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-restore 00:42:01.159 16:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:01.159 16:50:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:01.159 16:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:01.159 16:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:01.159 16:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:03.066 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:03.066 00:42:03.066 real 0m26.985s 00:42:03.066 user 1m13.263s 00:42:03.066 sys 0m10.411s 00:42:03.066 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:03.066 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:42:03.066 ************************************ 00:42:03.066 END TEST nvmf_fio_target 00:42:03.066 ************************************ 00:42:03.325 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:42:03.325 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:42:03.325 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:03.325 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:03.325 ************************************ 00:42:03.325 START TEST nvmf_bdevio 00:42:03.325 ************************************ 00:42:03.325 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:42:03.325 * Looking for test storage... 00:42:03.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:03.325 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:42:03.325 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:42:03.325 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:42:03.325 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:42:03.325 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:03.325 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:03.325 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:03.325 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:42:03.325 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:42:03.325 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:42:03.325 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:42:03.326 16:50:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # 
(( ver1[v] < ver2[v] )) 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:42:03.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:03.326 --rc genhtml_branch_coverage=1 00:42:03.326 --rc genhtml_function_coverage=1 00:42:03.326 --rc genhtml_legend=1 00:42:03.326 --rc geninfo_all_blocks=1 00:42:03.326 --rc geninfo_unexecuted_blocks=1 00:42:03.326 00:42:03.326 ' 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:42:03.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:03.326 --rc genhtml_branch_coverage=1 00:42:03.326 --rc genhtml_function_coverage=1 00:42:03.326 --rc genhtml_legend=1 00:42:03.326 --rc geninfo_all_blocks=1 00:42:03.326 --rc geninfo_unexecuted_blocks=1 00:42:03.326 00:42:03.326 ' 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:42:03.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:03.326 --rc genhtml_branch_coverage=1 00:42:03.326 --rc genhtml_function_coverage=1 00:42:03.326 --rc genhtml_legend=1 00:42:03.326 --rc geninfo_all_blocks=1 00:42:03.326 --rc geninfo_unexecuted_blocks=1 00:42:03.326 00:42:03.326 ' 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:42:03.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:03.326 --rc genhtml_branch_coverage=1 00:42:03.326 --rc genhtml_function_coverage=1 00:42:03.326 --rc genhtml_legend=1 00:42:03.326 --rc 
geninfo_all_blocks=1 00:42:03.326 --rc geninfo_unexecuted_blocks=1 00:42:03.326 00:42:03.326 ' 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:03.326 16:50:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:03.326 16:50:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:03.326 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:03.327 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:03.327 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:42:03.327 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:42:03.327 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:42:03.327 16:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:05.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:42:05.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:42:05.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:05.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:05.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:05.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:05.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:05.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:42:05.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:05.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:42:05.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:42:05.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:42:05.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:42:05.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:42:05.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:42:05.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:05.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:05.859 16:50:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:05.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:05.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:05.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:05.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:05.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:05.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:05.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:05.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:05.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:42:05.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:42:05.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:42:05.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:42:05.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:42:05.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@359 -- # (( 2 == 0 )) 00:42:05.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:42:05.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:42:05.859 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:42:05.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:42:05.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:42:05.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:05.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:05.859 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:42:05.860 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:42:05.860 16:50:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:42:05.860 Found net devices under 0000:0a:00.0: cvl_0_0 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 
00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:42:05.860 Found net devices under 0000:0a:00.1: cvl_0_1 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # is_hw=yes 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:05.860 
16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:05.860 16:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:05.860 16:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:05.860 16:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:05.860 16:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:05.860 16:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:05.860 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:05.860 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:42:05.860 00:42:05.860 --- 10.0.0.2 ping statistics --- 00:42:05.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:05.860 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:42:05.860 16:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:05.860 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:05.860 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:42:05.860 00:42:05.860 --- 10.0.0.1 ping statistics --- 00:42:05.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:05.860 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:42:05.860 16:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:05.860 16:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # return 0 00:42:05.860 16:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:42:05.860 16:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:05.860 16:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:42:05.860 16:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:42:05.860 16:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:05.860 16:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:42:05.860 16:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:42:05.860 16:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:42:05.860 16:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:42:05.860 16:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:05.860 16:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:05.860 16:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@505 -- # nvmfpid=3382558 00:42:05.860 16:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:42:05.860 16:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 3382558 00:42:05.860 16:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 3382558 ']' 00:42:05.860 16:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:05.860 16:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:05.860 16:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:05.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:05.860 16:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:05.860 16:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:05.860 [2024-09-29 16:50:06.134889] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:05.860 [2024-09-29 16:50:06.137446] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:42:05.860 [2024-09-29 16:50:06.137551] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:05.860 [2024-09-29 16:50:06.273824] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:06.118 [2024-09-29 16:50:06.501860] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:06.118 [2024-09-29 16:50:06.501932] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:06.118 [2024-09-29 16:50:06.501972] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:06.118 [2024-09-29 16:50:06.501992] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:06.118 [2024-09-29 16:50:06.502026] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:06.118 [2024-09-29 16:50:06.502171] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:42:06.118 [2024-09-29 16:50:06.502239] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:42:06.118 [2024-09-29 16:50:06.502273] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:42:06.118 [2024-09-29 16:50:06.502284] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:42:06.377 [2024-09-29 16:50:06.860638] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:06.377 [2024-09-29 16:50:06.861795] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:42:06.377 [2024-09-29 16:50:06.862804] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:42:06.377 [2024-09-29 16:50:06.863533] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:06.377 [2024-09-29 16:50:06.863861] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:42:06.636 16:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:06.636 16:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:42:06.636 16:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:42:06.636 16:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:06.636 16:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:06.636 16:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:06.636 16:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:06.636 16:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:06.636 16:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:06.636 [2024-09-29 16:50:07.167335] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:06.636 16:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:06.636 16:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:42:06.636 16:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:42:06.636 16:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:06.895 Malloc0 00:42:06.895 16:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:06.895 16:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:42:06.895 16:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:06.895 16:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:06.895 16:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:06.895 16:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:06.895 16:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:06.895 16:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:06.895 16:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:06.895 16:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:06.895 16:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:06.895 16:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:06.895 [2024-09-29 16:50:07.291592] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:42:06.895 16:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:06.895 16:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:42:06.895 16:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:42:06.895 16:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@556 -- # config=() 00:42:06.895 16:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config 00:42:06.895 16:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:42:06.895 16:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:42:06.895 { 00:42:06.895 "params": { 00:42:06.895 "name": "Nvme$subsystem", 00:42:06.895 "trtype": "$TEST_TRANSPORT", 00:42:06.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:06.895 "adrfam": "ipv4", 00:42:06.895 "trsvcid": "$NVMF_PORT", 00:42:06.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:06.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:06.895 "hdgst": ${hdgst:-false}, 00:42:06.895 "ddgst": ${ddgst:-false} 00:42:06.895 }, 00:42:06.895 "method": "bdev_nvme_attach_controller" 00:42:06.895 } 00:42:06.895 EOF 00:42:06.895 )") 00:42:06.895 16:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@578 -- # cat 00:42:06.895 16:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # jq . 
00:42:06.895 16:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=, 00:42:06.895 16:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:42:06.895 "params": { 00:42:06.895 "name": "Nvme1", 00:42:06.895 "trtype": "tcp", 00:42:06.895 "traddr": "10.0.0.2", 00:42:06.895 "adrfam": "ipv4", 00:42:06.895 "trsvcid": "4420", 00:42:06.895 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:06.895 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:06.895 "hdgst": false, 00:42:06.895 "ddgst": false 00:42:06.895 }, 00:42:06.895 "method": "bdev_nvme_attach_controller" 00:42:06.895 }' 00:42:06.895 [2024-09-29 16:50:07.376969] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:42:06.895 [2024-09-29 16:50:07.377111] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3382712 ] 00:42:07.153 [2024-09-29 16:50:07.505469] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:42:07.411 [2024-09-29 16:50:07.751520] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:42:07.411 [2024-09-29 16:50:07.751566] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:42:07.411 [2024-09-29 16:50:07.751572] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:42:07.977 I/O targets: 00:42:07.977 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:42:07.977 00:42:07.977 00:42:07.977 CUnit - A unit testing framework for C - Version 2.1-3 00:42:07.977 http://cunit.sourceforge.net/ 00:42:07.977 00:42:07.977 00:42:07.977 Suite: bdevio tests on: Nvme1n1 00:42:07.977 Test: blockdev write read block ...passed 00:42:07.977 Test: blockdev write zeroes read block ...passed 00:42:07.977 Test: blockdev write zeroes read no split ...passed 00:42:07.977 Test: blockdev 
write zeroes read split ...passed 00:42:07.977 Test: blockdev write zeroes read split partial ...passed 00:42:07.977 Test: blockdev reset ...[2024-09-29 16:50:08.446234] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:07.977 [2024-09-29 16:50:08.446400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:42:07.977 [2024-09-29 16:50:08.454398] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:42:07.977 passed 00:42:07.977 Test: blockdev write read 8 blocks ...passed 00:42:07.977 Test: blockdev write read size > 128k ...passed 00:42:07.977 Test: blockdev write read invalid size ...passed 00:42:08.235 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:42:08.235 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:42:08.235 Test: blockdev write read max offset ...passed 00:42:08.235 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:42:08.235 Test: blockdev writev readv 8 blocks ...passed 00:42:08.235 Test: blockdev writev readv 30 x 1block ...passed 00:42:08.235 Test: blockdev writev readv block ...passed 00:42:08.235 Test: blockdev writev readv size > 128k ...passed 00:42:08.235 Test: blockdev writev readv size > 128k in two iovs ...passed 00:42:08.235 Test: blockdev comparev and writev ...[2024-09-29 16:50:08.711355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:08.235 [2024-09-29 16:50:08.711416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:08.235 [2024-09-29 16:50:08.711465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:08.235 [2024-09-29 16:50:08.711493] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:08.235 [2024-09-29 16:50:08.712112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:08.235 [2024-09-29 16:50:08.712154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:42:08.235 [2024-09-29 16:50:08.712202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:08.235 [2024-09-29 16:50:08.712229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:42:08.235 [2024-09-29 16:50:08.712804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:08.235 [2024-09-29 16:50:08.712837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:42:08.235 [2024-09-29 16:50:08.712870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:08.235 [2024-09-29 16:50:08.712900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:42:08.235 [2024-09-29 16:50:08.713489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:08.235 [2024-09-29 16:50:08.713521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:42:08.235 [2024-09-29 16:50:08.713554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x200 00:42:08.235 [2024-09-29 16:50:08.713579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:42:08.235 passed 00:42:08.235 Test: blockdev nvme passthru rw ...passed 00:42:08.235 Test: blockdev nvme passthru vendor specific ...[2024-09-29 16:50:08.796163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:08.235 [2024-09-29 16:50:08.796233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:42:08.235 [2024-09-29 16:50:08.796529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:08.235 [2024-09-29 16:50:08.796565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:42:08.235 [2024-09-29 16:50:08.796832] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:08.235 [2024-09-29 16:50:08.796865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:42:08.235 [2024-09-29 16:50:08.797101] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:08.235 [2024-09-29 16:50:08.797134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:42:08.235 passed 00:42:08.493 Test: blockdev nvme admin passthru ...passed 00:42:08.493 Test: blockdev copy ...passed 00:42:08.493 00:42:08.493 Run Summary: Type Total Ran Passed Failed Inactive 00:42:08.493 suites 1 1 n/a 0 0 00:42:08.493 tests 23 23 23 0 0 00:42:08.493 asserts 152 152 152 0 n/a 00:42:08.493 00:42:08.493 Elapsed time = 1.243 seconds 00:42:09.428 16:50:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:09.428 16:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:09.428 16:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:09.428 16:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:09.428 16:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:42:09.428 16:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:42:09.428 16:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup 00:42:09.428 16:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:42:09.428 16:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:09.428 16:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:42:09.428 16:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:09.428 16:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:09.428 rmmod nvme_tcp 00:42:09.428 rmmod nvme_fabrics 00:42:09.428 rmmod nvme_keyring 00:42:09.428 16:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:09.428 16:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:42:09.428 16:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:42:09.428 16:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@513 -- # 
'[' -n 3382558 ']' 00:42:09.428 16:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 3382558 00:42:09.428 16:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 3382558 ']' 00:42:09.428 16:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 3382558 00:42:09.428 16:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:42:09.428 16:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:09.428 16:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3382558 00:42:09.428 16:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:42:09.428 16:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:42:09.428 16:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3382558' 00:42:09.428 killing process with pid 3382558 00:42:09.428 16:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 3382558 00:42:09.428 16:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 3382558 00:42:10.800 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:42:10.800 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:42:10.800 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:42:10.800 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 
00:42:10.800 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-save 00:42:10.800 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:42:10.800 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-restore 00:42:10.800 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:10.800 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:10.800 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:10.801 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:10.801 16:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:13.329 16:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:13.329 00:42:13.329 real 0m9.728s 00:42:13.329 user 0m18.146s 00:42:13.329 sys 0m3.228s 00:42:13.329 16:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:13.329 16:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:13.329 ************************************ 00:42:13.329 END TEST nvmf_bdevio 00:42:13.329 ************************************ 00:42:13.329 16:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:42:13.329 00:42:13.330 real 4m33.229s 00:42:13.330 user 9m59.960s 00:42:13.330 sys 1m30.634s 00:42:13.330 16:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:42:13.330 16:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:13.330 ************************************ 00:42:13.330 END TEST nvmf_target_core_interrupt_mode 00:42:13.330 ************************************ 00:42:13.330 16:50:13 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:42:13.330 16:50:13 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:42:13.330 16:50:13 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:13.330 16:50:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:13.330 ************************************ 00:42:13.330 START TEST nvmf_interrupt 00:42:13.330 ************************************ 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:42:13.330 * Looking for test storage... 
00:42:13.330 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lcov --version 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:42:13.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:13.330 --rc genhtml_branch_coverage=1 00:42:13.330 --rc genhtml_function_coverage=1 00:42:13.330 --rc genhtml_legend=1 00:42:13.330 --rc geninfo_all_blocks=1 00:42:13.330 --rc geninfo_unexecuted_blocks=1 00:42:13.330 00:42:13.330 ' 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:42:13.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:13.330 --rc genhtml_branch_coverage=1 00:42:13.330 --rc 
genhtml_function_coverage=1 00:42:13.330 --rc genhtml_legend=1 00:42:13.330 --rc geninfo_all_blocks=1 00:42:13.330 --rc geninfo_unexecuted_blocks=1 00:42:13.330 00:42:13.330 ' 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:42:13.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:13.330 --rc genhtml_branch_coverage=1 00:42:13.330 --rc genhtml_function_coverage=1 00:42:13.330 --rc genhtml_legend=1 00:42:13.330 --rc geninfo_all_blocks=1 00:42:13.330 --rc geninfo_unexecuted_blocks=1 00:42:13.330 00:42:13.330 ' 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:42:13.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:13.330 --rc genhtml_branch_coverage=1 00:42:13.330 --rc genhtml_function_coverage=1 00:42:13.330 --rc genhtml_legend=1 00:42:13.330 --rc geninfo_all_blocks=1 00:42:13.330 --rc geninfo_unexecuted_blocks=1 00:42:13.330 00:42:13.330 ' 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:13.330 
16:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:13.330 
16:50:13 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:13.330 16:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:13.331 16:50:13 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:13.331 16:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:13.331 16:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:13.331 16:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:13.331 16:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:13.331 16:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:13.331 16:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:13.331 16:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:13.331 16:50:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:42:13.331 16:50:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:42:13.331 16:50:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:42:13.331 16:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:42:13.331 16:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:13.331 16:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # prepare_net_devs 00:42:13.331 16:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@434 -- # local -g is_hw=no 00:42:13.331 16:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # remove_spdk_ns 00:42:13.331 16:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:13.331 16:50:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:13.331 16:50:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:13.331 16:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:42:13.331 
16:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:42:13.331 16:50:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:42:13.331 16:50:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:15.230 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:15.230 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:42:15.230 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:15.230 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:15.230 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:15.230 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:15.230 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:15.230 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:42:15.230 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:15.230 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:42:15.230 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:42:15.230 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:42:15.230 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:42:15.230 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:42:15.230 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:42:15.230 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:15.230 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:15.230 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:15.230 16:50:15 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:15.230 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:15.230 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:15.230 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:15.230 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:15.230 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:15.230 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:15.230 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:15.230 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:42:15.230 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:42:15.230 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:42:15.230 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:42:15.230 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:42:15.230 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:42:15.230 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:42:15.230 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:42:15.230 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:42:15.230 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:42:15.230 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:42:15.230 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@374 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:42:15.230 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:15.230 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:42:15.231 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ up == up ]] 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:42:15.231 Found net devices under 0000:0a:00.0: cvl_0_0 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ up == up ]] 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:42:15.231 Found net devices under 0000:0a:00.1: cvl_0_1 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # is_hw=yes 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:15.231 16:50:15 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:15.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:15.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:42:15.231 00:42:15.231 --- 10.0.0.2 ping statistics --- 00:42:15.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:15.231 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:15.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:15.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:42:15.231 00:42:15.231 --- 10.0.0.1 ping statistics --- 00:42:15.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:15.231 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # return 0 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:42:15.231 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:42:15.489 16:50:15 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:42:15.489 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:42:15.489 16:50:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:15.489 16:50:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:15.489 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # nvmfpid=3385077 00:42:15.489 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:42:15.489 16:50:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # waitforlisten 3385077 00:42:15.489 16:50:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 3385077 ']' 00:42:15.489 16:50:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:15.489 16:50:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:15.489 16:50:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:15.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:15.489 16:50:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:15.489 16:50:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:15.489 [2024-09-29 16:50:15.894534] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:15.489 [2024-09-29 16:50:15.897122] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:42:15.489 [2024-09-29 16:50:15.897229] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:15.489 [2024-09-29 16:50:16.030971] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:15.747 [2024-09-29 16:50:16.280420] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:15.747 [2024-09-29 16:50:16.280496] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:15.747 [2024-09-29 16:50:16.280526] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:15.747 [2024-09-29 16:50:16.280552] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:15.747 [2024-09-29 16:50:16.280575] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:15.747 [2024-09-29 16:50:16.280709] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:42:15.747 [2024-09-29 16:50:16.280715] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:42:16.313 [2024-09-29 16:50:16.659351] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:16.313 [2024-09-29 16:50:16.660046] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:16.313 [2024-09-29 16:50:16.660353] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:42:16.313 16:50:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:16.313 16:50:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:42:16.313 16:50:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:42:16.313 16:50:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:16.313 16:50:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:16.572 16:50:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:16.572 16:50:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:42:16.572 16:50:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:42:16.572 16:50:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:42:16.572 16:50:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:42:16.572 5000+0 records in 00:42:16.572 5000+0 records out 00:42:16.572 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0142454 s, 719 MB/s 00:42:16.572 16:50:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:42:16.572 16:50:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:16.572 16:50:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:16.572 AIO0 00:42:16.572 16:50:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:16.572 16:50:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:42:16.572 16:50:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:16.572 16:50:16 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:16.572 [2024-09-29 16:50:16.961776] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:16.572 16:50:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:16.572 16:50:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:42:16.572 16:50:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:16.572 16:50:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:16.572 16:50:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:16.572 16:50:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:42:16.572 16:50:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:16.572 16:50:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:16.572 16:50:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:16.572 16:50:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:16.572 16:50:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:16.572 16:50:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:16.572 [2024-09-29 16:50:16.990048] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:16.572 16:50:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:16.572 16:50:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:42:16.573 16:50:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3385077 0 00:42:16.573 16:50:16 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3385077 0 idle 00:42:16.573 16:50:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3385077 00:42:16.573 16:50:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:16.573 16:50:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:16.573 16:50:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:16.573 16:50:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:16.573 16:50:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:16.573 16:50:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:16.573 16:50:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:16.573 16:50:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:16.573 16:50:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:16.573 16:50:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3385077 -w 256 00:42:16.573 16:50:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3385077 root 20 0 20.1t 202752 107136 S 0.0 0.3 0:00.84 reactor_0' 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3385077 root 20 0 20.1t 202752 107136 S 0.0 0.3 0:00.84 reactor_0 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:16.831 
16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3385077 1 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3385077 1 idle 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3385077 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3385077 -w 256 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3385181 root 20 0 20.1t 202752 107136 S 0.0 0.3 0:00.00 reactor_1' 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3385181 root 20 0 20.1t 
202752 107136 S 0.0 0.3 0:00.00 reactor_1 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3385352 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3385077 0 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3385077 0 busy 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3385077 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3385077 -w 256 00:42:16.831 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:17.089 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3385077 root 20 0 20.1t 205440 107904 R 13.3 0.3 0:00.86 reactor_0' 00:42:17.089 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3385077 root 20 0 20.1t 205440 107904 R 13.3 0.3 0:00.86 reactor_0 00:42:17.090 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:17.090 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:17.090 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=13.3 00:42:17.090 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=13 00:42:17.090 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:42:17.090 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:42:17.090 16:50:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:42:18.025 16:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:42:18.025 16:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:18.025 16:50:18 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3385077 -w 256 00:42:18.025 16:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:18.283 16:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3385077 root 20 0 20.1t 216192 107904 R 99.9 0.4 0:03.06 reactor_0' 00:42:18.283 16:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3385077 root 20 0 20.1t 216192 107904 R 99.9 0.4 0:03.06 reactor_0 00:42:18.283 16:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:18.283 16:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:18.283 16:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:42:18.283 16:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:42:18.283 16:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:42:18.283 16:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:42:18.283 16:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:42:18.283 16:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:18.283 16:50:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:42:18.283 16:50:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:42:18.283 16:50:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3385077 1 00:42:18.283 16:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3385077 1 busy 00:42:18.283 16:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3385077 00:42:18.283 16:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:42:18.283 16:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:42:18.283 16:50:18 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@13 -- # local busy_threshold=30 00:42:18.283 16:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:18.283 16:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:42:18.283 16:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:18.283 16:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:18.283 16:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:18.283 16:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3385077 -w 256 00:42:18.283 16:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:18.283 16:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3385181 root 20 0 20.1t 216192 107904 R 93.3 0.4 0:01.20 reactor_1' 00:42:18.283 16:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3385181 root 20 0 20.1t 216192 107904 R 93.3 0.4 0:01.20 reactor_1 00:42:18.283 16:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:18.283 16:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:18.283 16:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:42:18.283 16:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:42:18.283 16:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:42:18.283 16:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:42:18.283 16:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:42:18.283 16:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:18.283 16:50:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3385352 00:42:28.261 Initializing NVMe Controllers 00:42:28.261 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode1 00:42:28.261 Controller IO queue size 256, less than required. 00:42:28.261 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:42:28.261 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:42:28.261 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:42:28.261 Initialization complete. Launching workers. 00:42:28.261 ======================================================== 00:42:28.261 Latency(us) 00:42:28.261 Device Information : IOPS MiB/s Average min max 00:42:28.261 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 9855.39 38.50 26003.68 6724.36 67274.56 00:42:28.261 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 10843.89 42.36 23626.13 6718.56 27135.14 00:42:28.261 ======================================================== 00:42:28.261 Total : 20699.28 80.86 24758.14 6718.56 67274.56 00:42:28.261 00:42:28.261 16:50:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:42:28.261 16:50:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3385077 0 00:42:28.261 16:50:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3385077 0 idle 00:42:28.261 16:50:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3385077 00:42:28.261 16:50:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:28.261 16:50:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:28.261 16:50:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:28.261 16:50:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:28.261 16:50:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:28.261 16:50:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle 
!= \i\d\l\e ]] 00:42:28.261 16:50:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:28.261 16:50:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:28.261 16:50:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:28.261 16:50:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3385077 -w 256 00:42:28.261 16:50:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:28.261 16:50:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3385077 root 20 0 20.1t 216192 107904 S 0.0 0.4 0:19.80 reactor_0' 00:42:28.261 16:50:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3385077 root 20 0 20.1t 216192 107904 S 0.0 0.4 0:19.80 reactor_0 00:42:28.262 16:50:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:28.262 16:50:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:28.262 16:50:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:28.262 16:50:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:28.262 16:50:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:28.262 16:50:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:28.262 16:50:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:28.262 16:50:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:28.262 16:50:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:42:28.262 16:50:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3385077 1 00:42:28.262 16:50:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3385077 1 idle 00:42:28.262 16:50:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3385077 00:42:28.262 16:50:27 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@11 -- # local idx=1 00:42:28.262 16:50:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:28.262 16:50:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:28.262 16:50:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:28.262 16:50:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:28.262 16:50:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:28.262 16:50:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:28.262 16:50:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:28.262 16:50:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:28.262 16:50:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3385077 -w 256 00:42:28.262 16:50:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:28.262 16:50:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3385181 root 20 0 20.1t 216192 107904 S 0.0 0.4 0:08.98 reactor_1' 00:42:28.262 16:50:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3385181 root 20 0 20.1t 216192 107904 S 0.0 0.4 0:08.98 reactor_1 00:42:28.262 16:50:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:28.262 16:50:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:28.262 16:50:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:28.262 16:50:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:28.262 16:50:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:28.262 16:50:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:28.262 16:50:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:28.262 
16:50:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:28.262 16:50:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:42:28.262 16:50:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:42:28.262 16:50:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:42:28.262 16:50:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:42:28.262 16:50:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:42:28.262 16:50:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3385077 0 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3385077 0 idle 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3385077 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # 
local idx=0 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3385077 -w 256 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3385077 root 20 0 20.1t 243840 117504 S 0.0 0.4 0:19.99 reactor_0' 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3385077 root 20 0 20.1t 243840 117504 S 0.0 0.4 0:19.99 reactor_0 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@35 -- # return 0 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3385077 1 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3385077 1 idle 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3385077 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3385077 -w 256 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3385181 root 20 0 20.1t 243840 117504 S 0.0 0.4 0:09.05 reactor_1' 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3385181 root 20 0 20.1t 243840 117504 S 0.0 0.4 0:09.05 reactor_1 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 
00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:30.163 16:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:30.164 16:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:30.164 16:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:30.164 16:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:30.164 16:50:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:30.164 16:50:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:42:30.422 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:42:30.422 16:50:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:42:30.422 16:50:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:42:30.422 16:50:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:42:30.422 16:50:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:30.422 16:50:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:42:30.422 16:50:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:30.422 16:50:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:42:30.422 16:50:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:42:30.422 16:50:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:42:30.422 16:50:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # nvmfcleanup 00:42:30.422 16:50:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:42:30.422 16:50:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:30.422 16:50:30 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:42:30.422 16:50:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:30.422 16:50:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:30.422 rmmod nvme_tcp 00:42:30.422 rmmod nvme_fabrics 00:42:30.422 rmmod nvme_keyring 00:42:30.680 16:50:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:30.681 16:50:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:42:30.681 16:50:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:42:30.681 16:50:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@513 -- # '[' -n 3385077 ']' 00:42:30.681 16:50:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # killprocess 3385077 00:42:30.681 16:50:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 3385077 ']' 00:42:30.681 16:50:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 3385077 00:42:30.681 16:50:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:42:30.681 16:50:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:30.681 16:50:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3385077 00:42:30.681 16:50:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:30.681 16:50:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:30.681 16:50:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3385077' 00:42:30.681 killing process with pid 3385077 00:42:30.681 16:50:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 3385077 00:42:30.681 16:50:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 3385077 00:42:32.056 16:50:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:42:32.056 16:50:32 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:42:32.056 16:50:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:42:32.056 16:50:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:42:32.056 16:50:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@787 -- # iptables-save 00:42:32.056 16:50:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:42:32.056 16:50:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@787 -- # iptables-restore 00:42:32.056 16:50:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:32.056 16:50:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:32.056 16:50:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:32.056 16:50:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:32.056 16:50:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:34.049 16:50:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:34.049 00:42:34.049 real 0m21.000s 00:42:34.049 user 0m38.228s 00:42:34.049 sys 0m7.193s 00:42:34.049 16:50:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:34.049 16:50:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:34.049 ************************************ 00:42:34.049 END TEST nvmf_interrupt 00:42:34.049 ************************************ 00:42:34.050 00:42:34.050 real 35m47.080s 00:42:34.050 user 93m42.197s 00:42:34.050 sys 7m52.471s 00:42:34.050 16:50:34 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:34.050 16:50:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:34.050 ************************************ 00:42:34.050 END TEST nvmf_tcp 00:42:34.050 ************************************ 00:42:34.050 16:50:34 -- 
spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:42:34.050 16:50:34 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:42:34.050 16:50:34 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:42:34.050 16:50:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:34.050 16:50:34 -- common/autotest_common.sh@10 -- # set +x 00:42:34.050 ************************************ 00:42:34.050 START TEST spdkcli_nvmf_tcp 00:42:34.050 ************************************ 00:42:34.050 16:50:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:42:34.050 * Looking for test storage... 00:42:34.050 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:42:34.050 16:50:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:42:34.050 16:50:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:42:34.050 16:50:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:42:34.307 16:50:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:42:34.307 16:50:34 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:34.307 16:50:34 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:34.307 16:50:34 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:34.307 16:50:34 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:42:34.307 16:50:34 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:42:34.307 16:50:34 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:42:34.307 16:50:34 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:42:34.307 16:50:34 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:42:34.307 16:50:34 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:42:34.307 
16:50:34 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:42:34.307 16:50:34 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:34.307 16:50:34 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:42:34.307 16:50:34 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:42:34.307 16:50:34 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:34.307 16:50:34 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:34.307 16:50:34 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:42:34.307 16:50:34 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:42:34.307 16:50:34 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:34.307 16:50:34 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:42:34.307 16:50:34 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:42:34.307 16:50:34 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:42:34.307 16:50:34 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:42:34.307 16:50:34 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:34.307 16:50:34 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:42:34.307 16:50:34 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:42:34.307 16:50:34 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:34.307 16:50:34 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:34.307 16:50:34 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:42:34.307 16:50:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:34.307 16:50:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:42:34.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:34.307 --rc genhtml_branch_coverage=1 00:42:34.307 --rc genhtml_function_coverage=1 00:42:34.308 
--rc genhtml_legend=1 00:42:34.308 --rc geninfo_all_blocks=1 00:42:34.308 --rc geninfo_unexecuted_blocks=1 00:42:34.308 00:42:34.308 ' 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:42:34.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:34.308 --rc genhtml_branch_coverage=1 00:42:34.308 --rc genhtml_function_coverage=1 00:42:34.308 --rc genhtml_legend=1 00:42:34.308 --rc geninfo_all_blocks=1 00:42:34.308 --rc geninfo_unexecuted_blocks=1 00:42:34.308 00:42:34.308 ' 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:42:34.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:34.308 --rc genhtml_branch_coverage=1 00:42:34.308 --rc genhtml_function_coverage=1 00:42:34.308 --rc genhtml_legend=1 00:42:34.308 --rc geninfo_all_blocks=1 00:42:34.308 --rc geninfo_unexecuted_blocks=1 00:42:34.308 00:42:34.308 ' 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:42:34.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:34.308 --rc genhtml_branch_coverage=1 00:42:34.308 --rc genhtml_function_coverage=1 00:42:34.308 --rc genhtml_legend=1 00:42:34.308 --rc geninfo_all_blocks=1 00:42:34.308 --rc geninfo_unexecuted_blocks=1 00:42:34.308 00:42:34.308 ' 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- 
nvmf/common.sh@7 -- # uname -s 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:34.308 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 
00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3387500 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3387500 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 3387500 ']' 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:34.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:34.308 16:50:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:34.308 [2024-09-29 16:50:34.816218] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:42:34.308 [2024-09-29 16:50:34.816370] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3387500 ] 00:42:34.565 [2024-09-29 16:50:34.944503] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:34.822 [2024-09-29 16:50:35.167648] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:42:34.822 [2024-09-29 16:50:35.167653] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:42:35.389 16:50:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:35.389 16:50:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:42:35.389 16:50:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:42:35.389 16:50:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:35.389 16:50:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:35.389 16:50:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:42:35.389 16:50:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:42:35.389 16:50:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:42:35.389 16:50:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:35.389 16:50:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:35.389 16:50:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:42:35.389 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:42:35.389 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:42:35.389 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:42:35.389 '\''/bdevs/malloc create 32 
512 Malloc5'\'' '\''Malloc5'\'' True 00:42:35.389 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:42:35.389 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:42:35.389 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:42:35.389 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:42:35.389 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:42:35.389 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:42:35.389 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:35.389 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:42:35.389 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:42:35.389 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:35.389 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:42:35.389 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:42:35.389 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:42:35.389 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:42:35.389 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:35.389 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:42:35.389 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:42:35.389 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:42:35.389 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:42:35.389 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:35.389 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:42:35.389 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:42:35.389 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:42:35.389 ' 00:42:38.672 [2024-09-29 16:50:38.596766] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:39.605 [2024-09-29 16:50:39.874670] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:42:42.136 [2024-09-29 16:50:42.258429] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:42:44.035 [2024-09-29 16:50:44.273063] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:42:45.410 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:42:45.410 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:42:45.410 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:42:45.410 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:42:45.410 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:42:45.410 Executing command: 
['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:42:45.410 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:42:45.410 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:42:45.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:42:45.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:42:45.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:42:45.410 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:45.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:42:45.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:42:45.410 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:45.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:42:45.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:42:45.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:42:45.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:42:45.410 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:45.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:42:45.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:42:45.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:42:45.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:42:45.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:45.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:42:45.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:42:45.410 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:42:45.410 16:50:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:42:45.410 16:50:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:45.410 16:50:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:45.410 16:50:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:42:45.410 16:50:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:45.410 16:50:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:45.410 16:50:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:42:45.410 16:50:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:42:45.976 16:50:46 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:42:45.976 16:50:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:42:45.976 16:50:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:42:45.976 16:50:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:45.976 16:50:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:45.976 16:50:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:42:45.976 16:50:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:45.976 16:50:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:45.976 16:50:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:42:45.976 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:42:45.976 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:42:45.976 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:42:45.976 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:42:45.976 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:42:45.976 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:42:45.976 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:42:45.976 '\''/bdevs/malloc delete 
Malloc6'\'' '\''Malloc6'\'' 00:42:45.976 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:42:45.976 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:42:45.976 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:42:45.976 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:42:45.976 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:42:45.976 ' 00:42:52.535 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:42:52.535 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:42:52.535 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:42:52.535 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:42:52.535 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:42:52.535 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:42:52.535 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:42:52.535 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:42:52.535 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:42:52.535 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:42:52.535 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:42:52.535 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:42:52.535 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:42:52.535 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:42:52.535 16:50:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit 
spdkcli_clear_nvmf_config 00:42:52.535 16:50:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:52.535 16:50:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:52.535 16:50:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3387500 00:42:52.535 16:50:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 3387500 ']' 00:42:52.535 16:50:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 3387500 00:42:52.535 16:50:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:42:52.535 16:50:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:52.535 16:50:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3387500 00:42:52.535 16:50:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:52.535 16:50:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:52.535 16:50:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3387500' 00:42:52.535 killing process with pid 3387500 00:42:52.535 16:50:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 3387500 00:42:52.535 16:50:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 3387500 00:42:53.470 16:50:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:42:53.470 16:50:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:42:53.470 16:50:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3387500 ']' 00:42:53.470 16:50:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3387500 00:42:53.470 16:50:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 3387500 ']' 00:42:53.470 16:50:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 3387500 00:42:53.470 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3387500) - No such process 00:42:53.470 16:50:53 
spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 3387500 is not found' 00:42:53.470 Process with pid 3387500 is not found 00:42:53.470 16:50:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:42:53.470 16:50:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:42:53.470 16:50:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:42:53.470 00:42:53.470 real 0m19.225s 00:42:53.470 user 0m39.950s 00:42:53.470 sys 0m1.039s 00:42:53.470 16:50:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:53.470 16:50:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:53.470 ************************************ 00:42:53.470 END TEST spdkcli_nvmf_tcp 00:42:53.470 ************************************ 00:42:53.470 16:50:53 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:42:53.470 16:50:53 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:42:53.470 16:50:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:53.470 16:50:53 -- common/autotest_common.sh@10 -- # set +x 00:42:53.470 ************************************ 00:42:53.470 START TEST nvmf_identify_passthru 00:42:53.470 ************************************ 00:42:53.470 16:50:53 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:42:53.470 * Looking for test storage... 
00:42:53.470 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:53.470 16:50:53 nvmf_identify_passthru -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:42:53.470 16:50:53 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lcov --version 00:42:53.470 16:50:53 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:42:53.470 16:50:53 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:42:53.470 16:50:53 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:53.470 16:50:53 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:53.470 16:50:53 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:53.470 16:50:53 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:42:53.470 16:50:53 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:42:53.470 16:50:53 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:42:53.470 16:50:53 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:42:53.470 16:50:53 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:42:53.470 16:50:53 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:42:53.470 16:50:53 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:42:53.470 16:50:53 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:53.470 16:50:53 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:42:53.470 16:50:53 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:42:53.470 16:50:53 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:53.470 16:50:53 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:53.470 16:50:53 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:42:53.470 16:50:53 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:42:53.470 16:50:53 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:53.470 16:50:53 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:42:53.470 16:50:53 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:42:53.470 16:50:53 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:42:53.470 16:50:53 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:42:53.470 16:50:53 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:53.470 16:50:53 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:42:53.470 16:50:53 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:42:53.470 16:50:53 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:53.471 16:50:53 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:53.471 16:50:53 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:42:53.471 16:50:53 nvmf_identify_passthru -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:53.471 16:50:53 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:42:53.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:53.471 --rc genhtml_branch_coverage=1 00:42:53.471 --rc genhtml_function_coverage=1 00:42:53.471 --rc genhtml_legend=1 00:42:53.471 --rc geninfo_all_blocks=1 00:42:53.471 --rc geninfo_unexecuted_blocks=1 00:42:53.471 00:42:53.471 ' 00:42:53.471 16:50:53 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:42:53.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:53.471 --rc genhtml_branch_coverage=1 00:42:53.471 --rc genhtml_function_coverage=1 
00:42:53.471 --rc genhtml_legend=1 00:42:53.471 --rc geninfo_all_blocks=1 00:42:53.471 --rc geninfo_unexecuted_blocks=1 00:42:53.471 00:42:53.471 ' 00:42:53.471 16:50:53 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:42:53.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:53.471 --rc genhtml_branch_coverage=1 00:42:53.471 --rc genhtml_function_coverage=1 00:42:53.471 --rc genhtml_legend=1 00:42:53.471 --rc geninfo_all_blocks=1 00:42:53.471 --rc geninfo_unexecuted_blocks=1 00:42:53.471 00:42:53.471 ' 00:42:53.471 16:50:53 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:42:53.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:53.471 --rc genhtml_branch_coverage=1 00:42:53.471 --rc genhtml_function_coverage=1 00:42:53.471 --rc genhtml_legend=1 00:42:53.471 --rc geninfo_all_blocks=1 00:42:53.471 --rc geninfo_unexecuted_blocks=1 00:42:53.471 00:42:53.471 ' 00:42:53.471 16:50:53 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:53.471 16:50:53 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:42:53.471 16:50:53 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:53.471 16:50:53 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:53.471 16:50:53 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:53.471 16:50:53 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:53.471 16:50:53 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:53.471 16:50:53 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:53.471 16:50:53 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:53.471 16:50:53 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:53.471 16:50:53 nvmf_identify_passthru -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:53.471 16:50:53 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:53.471 16:50:53 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:53.471 16:50:53 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:53.471 16:50:53 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:53.471 16:50:53 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:53.471 16:50:53 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:53.471 16:50:53 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:53.471 16:50:53 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:53.471 16:50:53 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:42:53.471 16:50:53 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:53.471 16:50:53 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:53.471 16:50:53 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:53.471 16:50:53 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:53.471 16:50:53 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:53.471 16:50:53 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:53.471 16:50:53 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:42:53.471 16:50:53 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:53.471 16:50:53 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:42:53.471 16:50:53 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:53.471 16:50:53 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:53.471 16:50:53 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:53.471 16:50:53 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:53.471 16:50:53 nvmf_identify_passthru -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:42:53.471 16:50:53 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:53.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:53.471 16:50:53 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:53.471 16:50:53 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:53.471 16:50:53 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:53.471 16:50:53 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:53.471 16:50:53 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:42:53.471 16:50:53 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:53.471 16:50:53 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:53.471 16:50:53 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:53.471 16:50:53 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:53.471 16:50:53 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:53.471 16:50:53 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:53.471 16:50:53 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:42:53.471 16:50:53 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:53.471 16:50:53 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:42:53.471 16:50:53 nvmf_identify_passthru -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:42:53.471 16:50:53 nvmf_identify_passthru -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:53.471 16:50:53 nvmf_identify_passthru -- nvmf/common.sh@472 -- 
# prepare_net_devs 00:42:53.471 16:50:53 nvmf_identify_passthru -- nvmf/common.sh@434 -- # local -g is_hw=no 00:42:53.471 16:50:53 nvmf_identify_passthru -- nvmf/common.sh@436 -- # remove_spdk_ns 00:42:53.471 16:50:53 nvmf_identify_passthru -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:53.471 16:50:53 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:53.471 16:50:53 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:53.471 16:50:53 nvmf_identify_passthru -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:42:53.471 16:50:53 nvmf_identify_passthru -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:42:53.471 16:50:53 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:42:53.471 16:50:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:42:56.005 16:50:56 
nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:42:56.005 16:50:56 
nvmf_identify_passthru -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:42:56.005 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:42:56.005 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@407 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ up == up ]] 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:42:56.005 Found net devices under 0000:0a:00.0: cvl_0_0 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ up == up ]] 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:42:56.005 Found net devices under 0000:0a:00.1: cvl_0_1 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@438 -- # is_hw=yes 00:42:56.005 
16:50:56 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:56.005 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:56.005 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:42:56.005 00:42:56.005 --- 10.0.0.2 ping statistics --- 00:42:56.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:56.005 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:56.005 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:56.005 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:42:56.005 00:42:56.005 --- 10.0.0.1 ping statistics --- 00:42:56.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:56.005 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@446 -- # return 0 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:42:56.005 16:50:56 nvmf_identify_passthru -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:42:56.005 16:50:56 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:42:56.005 16:50:56 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:56.005 16:50:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:56.005 16:50:56 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:42:56.005 16:50:56 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:42:56.005 16:50:56 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:42:56.005 16:50:56 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:42:56.005 16:50:56 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:42:56.005 16:50:56 nvmf_identify_passthru -- 
common/autotest_common.sh@1496 -- # bdfs=() 00:42:56.005 16:50:56 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:42:56.006 16:50:56 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:42:56.006 16:50:56 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:42:56.006 16:50:56 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:42:56.006 16:50:56 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:42:56.006 16:50:56 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:88:00.0 00:42:56.006 16:50:56 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:88:00.0 00:42:56.006 16:50:56 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:42:56.006 16:50:56 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:42:56.006 16:50:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:42:56.006 16:50:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:42:56.006 16:50:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:43:00.190 16:51:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:43:00.190 16:51:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:43:00.190 16:51:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:43:00.190 16:51:00 nvmf_identify_passthru -- 
target/identify_passthru.sh@24 -- # awk '{print $3}' 00:43:05.447 16:51:05 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:43:05.447 16:51:05 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:43:05.447 16:51:05 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:05.447 16:51:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:05.447 16:51:05 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:43:05.447 16:51:05 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:05.447 16:51:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:05.447 16:51:05 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3392504 00:43:05.447 16:51:05 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:43:05.447 16:51:05 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:43:05.447 16:51:05 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3392504 00:43:05.447 16:51:05 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 3392504 ']' 00:43:05.447 16:51:05 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:05.447 16:51:05 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:05.447 16:51:05 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:05.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:43:05.447 16:51:05 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:05.447 16:51:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:05.447 [2024-09-29 16:51:05.128088] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:43:05.447 [2024-09-29 16:51:05.128229] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:05.447 [2024-09-29 16:51:05.271414] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:05.447 [2024-09-29 16:51:05.538222] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:05.447 [2024-09-29 16:51:05.538307] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:05.447 [2024-09-29 16:51:05.538337] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:05.447 [2024-09-29 16:51:05.538364] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:05.447 [2024-09-29 16:51:05.538384] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:43:05.447 [2024-09-29 16:51:05.538511] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:43:05.447 [2024-09-29 16:51:05.538587] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:43:05.447 [2024-09-29 16:51:05.538648] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:43:05.447 [2024-09-29 16:51:05.538654] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:43:05.705 16:51:06 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:05.705 16:51:06 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:43:05.705 16:51:06 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:43:05.705 16:51:06 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:05.705 16:51:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:05.705 INFO: Log level set to 20 00:43:05.705 INFO: Requests: 00:43:05.705 { 00:43:05.705 "jsonrpc": "2.0", 00:43:05.705 "method": "nvmf_set_config", 00:43:05.705 "id": 1, 00:43:05.705 "params": { 00:43:05.705 "admin_cmd_passthru": { 00:43:05.705 "identify_ctrlr": true 00:43:05.705 } 00:43:05.705 } 00:43:05.705 } 00:43:05.705 00:43:05.705 INFO: response: 00:43:05.705 { 00:43:05.705 "jsonrpc": "2.0", 00:43:05.705 "id": 1, 00:43:05.705 "result": true 00:43:05.705 } 00:43:05.705 00:43:05.705 16:51:06 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:05.705 16:51:06 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:43:05.705 16:51:06 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:05.705 16:51:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:05.705 INFO: Setting log level to 20 00:43:05.705 INFO: Setting log level to 20 00:43:05.705 INFO: Log level set to 20 00:43:05.705 INFO: Log level set to 20 00:43:05.705 
INFO: Requests: 00:43:05.705 { 00:43:05.705 "jsonrpc": "2.0", 00:43:05.705 "method": "framework_start_init", 00:43:05.705 "id": 1 00:43:05.705 } 00:43:05.705 00:43:05.705 INFO: Requests: 00:43:05.705 { 00:43:05.705 "jsonrpc": "2.0", 00:43:05.705 "method": "framework_start_init", 00:43:05.705 "id": 1 00:43:05.705 } 00:43:05.705 00:43:05.963 [2024-09-29 16:51:06.456760] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:43:05.963 INFO: response: 00:43:05.963 { 00:43:05.963 "jsonrpc": "2.0", 00:43:05.963 "id": 1, 00:43:05.963 "result": true 00:43:05.963 } 00:43:05.963 00:43:05.963 INFO: response: 00:43:05.963 { 00:43:05.963 "jsonrpc": "2.0", 00:43:05.963 "id": 1, 00:43:05.963 "result": true 00:43:05.963 } 00:43:05.963 00:43:05.963 16:51:06 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:05.963 16:51:06 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:05.963 16:51:06 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:05.963 16:51:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:05.963 INFO: Setting log level to 40 00:43:05.963 INFO: Setting log level to 40 00:43:05.963 INFO: Setting log level to 40 00:43:05.963 [2024-09-29 16:51:06.469759] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:05.963 16:51:06 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:05.963 16:51:06 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:43:05.963 16:51:06 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:05.963 16:51:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:05.963 16:51:06 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:43:05.963 16:51:06 
nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:05.963 16:51:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:09.244 Nvme0n1 00:43:09.244 16:51:09 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:09.244 16:51:09 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:43:09.244 16:51:09 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:09.244 16:51:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:09.244 16:51:09 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:09.244 16:51:09 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:43:09.244 16:51:09 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:09.244 16:51:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:09.244 16:51:09 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:09.244 16:51:09 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:09.244 16:51:09 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:09.244 16:51:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:09.244 [2024-09-29 16:51:09.416848] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:09.244 16:51:09 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:09.244 16:51:09 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:43:09.244 16:51:09 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:09.244 16:51:09 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:09.244 [ 00:43:09.244 { 00:43:09.244 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:43:09.244 "subtype": "Discovery", 00:43:09.244 "listen_addresses": [], 00:43:09.244 "allow_any_host": true, 00:43:09.244 "hosts": [] 00:43:09.244 }, 00:43:09.244 { 00:43:09.244 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:43:09.244 "subtype": "NVMe", 00:43:09.244 "listen_addresses": [ 00:43:09.244 { 00:43:09.244 "trtype": "TCP", 00:43:09.244 "adrfam": "IPv4", 00:43:09.244 "traddr": "10.0.0.2", 00:43:09.244 "trsvcid": "4420" 00:43:09.244 } 00:43:09.244 ], 00:43:09.244 "allow_any_host": true, 00:43:09.244 "hosts": [], 00:43:09.244 "serial_number": "SPDK00000000000001", 00:43:09.244 "model_number": "SPDK bdev Controller", 00:43:09.244 "max_namespaces": 1, 00:43:09.244 "min_cntlid": 1, 00:43:09.244 "max_cntlid": 65519, 00:43:09.244 "namespaces": [ 00:43:09.244 { 00:43:09.244 "nsid": 1, 00:43:09.244 "bdev_name": "Nvme0n1", 00:43:09.244 "name": "Nvme0n1", 00:43:09.244 "nguid": "D3E63CC0C2824B40B58C3D08114C8EE1", 00:43:09.244 "uuid": "d3e63cc0-c282-4b40-b58c-3d08114c8ee1" 00:43:09.244 } 00:43:09.244 ] 00:43:09.244 } 00:43:09.244 ] 00:43:09.244 16:51:09 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:09.244 16:51:09 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:43:09.244 16:51:09 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:43:09.244 16:51:09 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:43:09.244 16:51:09 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:43:09.244 16:51:09 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:43:09.244 16:51:09 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:43:09.244 16:51:09 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:43:09.503 16:51:09 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:43:09.503 16:51:09 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:43:09.503 16:51:09 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:43:09.503 16:51:09 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:09.503 16:51:09 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:09.503 16:51:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:09.503 16:51:09 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:09.503 16:51:09 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:43:09.503 16:51:09 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:43:09.503 16:51:09 nvmf_identify_passthru -- nvmf/common.sh@512 -- # nvmfcleanup 00:43:09.503 16:51:09 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:43:09.503 16:51:09 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:09.503 16:51:09 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:43:09.503 16:51:09 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:09.503 16:51:09 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:09.503 rmmod nvme_tcp 00:43:09.503 rmmod nvme_fabrics 00:43:09.503 rmmod nvme_keyring 00:43:09.503 16:51:09 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:09.503 16:51:09 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:43:09.503 16:51:09 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:43:09.503 16:51:09 nvmf_identify_passthru -- nvmf/common.sh@513 -- # '[' -n 3392504 ']' 00:43:09.503 16:51:09 nvmf_identify_passthru -- nvmf/common.sh@514 -- # killprocess 3392504 00:43:09.503 16:51:09 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 3392504 ']' 00:43:09.503 16:51:09 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 3392504 00:43:09.503 16:51:09 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:43:09.503 16:51:09 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:09.503 16:51:09 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3392504 00:43:09.503 16:51:09 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:43:09.503 16:51:09 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:43:09.503 16:51:09 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3392504' 00:43:09.503 killing process with pid 3392504 00:43:09.503 16:51:09 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 3392504 00:43:09.503 16:51:09 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 3392504 00:43:12.034 16:51:12 nvmf_identify_passthru -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:43:12.034 16:51:12 nvmf_identify_passthru -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:43:12.034 16:51:12 nvmf_identify_passthru -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:43:12.034 16:51:12 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:43:12.034 16:51:12 nvmf_identify_passthru -- nvmf/common.sh@787 -- # iptables-save 00:43:12.034 16:51:12 nvmf_identify_passthru -- 
nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:43:12.034 16:51:12 nvmf_identify_passthru -- nvmf/common.sh@787 -- # iptables-restore 00:43:12.034 16:51:12 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:12.034 16:51:12 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:12.034 16:51:12 nvmf_identify_passthru -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:12.034 16:51:12 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:12.034 16:51:12 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:14.635 16:51:14 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:14.635 00:43:14.635 real 0m20.794s 00:43:14.635 user 0m32.836s 00:43:14.635 sys 0m3.603s 00:43:14.635 16:51:14 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:14.635 16:51:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:14.635 ************************************ 00:43:14.635 END TEST nvmf_identify_passthru 00:43:14.635 ************************************ 00:43:14.635 16:51:14 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:43:14.635 16:51:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:43:14.635 16:51:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:14.635 16:51:14 -- common/autotest_common.sh@10 -- # set +x 00:43:14.635 ************************************ 00:43:14.635 START TEST nvmf_dif 00:43:14.635 ************************************ 00:43:14.635 16:51:14 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:43:14.635 * Looking for test storage... 
00:43:14.635 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:14.635 16:51:14 nvmf_dif -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:43:14.635 16:51:14 nvmf_dif -- common/autotest_common.sh@1681 -- # lcov --version 00:43:14.635 16:51:14 nvmf_dif -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:43:14.635 16:51:14 nvmf_dif -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:43:14.635 16:51:14 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:14.635 16:51:14 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:14.635 16:51:14 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:14.635 16:51:14 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:43:14.635 16:51:14 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:43:14.635 16:51:14 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:43:14.635 16:51:14 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:43:14.635 16:51:14 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:43:14.635 16:51:14 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:43:14.636 16:51:14 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:43:14.636 16:51:14 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:14.636 16:51:14 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:43:14.636 16:51:14 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:43:14.636 16:51:14 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:14.636 16:51:14 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:14.636 16:51:14 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:43:14.636 16:51:14 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:43:14.636 16:51:14 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:14.636 16:51:14 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:43:14.636 16:51:14 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:43:14.636 16:51:14 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:43:14.636 16:51:14 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:43:14.636 16:51:14 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:14.636 16:51:14 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:43:14.636 16:51:14 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:43:14.636 16:51:14 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:14.636 16:51:14 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:14.636 16:51:14 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:43:14.636 16:51:14 nvmf_dif -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:14.636 16:51:14 nvmf_dif -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:43:14.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:14.636 --rc genhtml_branch_coverage=1 00:43:14.636 --rc genhtml_function_coverage=1 00:43:14.636 --rc genhtml_legend=1 00:43:14.636 --rc geninfo_all_blocks=1 00:43:14.636 --rc geninfo_unexecuted_blocks=1 00:43:14.636 00:43:14.636 ' 00:43:14.636 16:51:14 nvmf_dif -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:43:14.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:14.636 --rc genhtml_branch_coverage=1 00:43:14.636 --rc genhtml_function_coverage=1 00:43:14.636 --rc genhtml_legend=1 00:43:14.636 --rc geninfo_all_blocks=1 00:43:14.636 --rc geninfo_unexecuted_blocks=1 00:43:14.636 00:43:14.636 ' 00:43:14.636 16:51:14 nvmf_dif -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:43:14.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:14.636 --rc genhtml_branch_coverage=1 00:43:14.636 --rc genhtml_function_coverage=1 00:43:14.636 --rc genhtml_legend=1 00:43:14.636 --rc geninfo_all_blocks=1 00:43:14.636 --rc geninfo_unexecuted_blocks=1 00:43:14.636 00:43:14.636 ' 00:43:14.636 16:51:14 nvmf_dif -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:43:14.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:14.636 --rc genhtml_branch_coverage=1 00:43:14.636 --rc genhtml_function_coverage=1 00:43:14.636 --rc genhtml_legend=1 00:43:14.636 --rc geninfo_all_blocks=1 00:43:14.636 --rc geninfo_unexecuted_blocks=1 00:43:14.636 00:43:14.636 ' 00:43:14.636 16:51:14 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:14.636 16:51:14 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:43:14.636 16:51:14 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:14.636 16:51:14 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:14.636 16:51:14 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:14.636 16:51:14 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:14.636 16:51:14 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:14.636 16:51:14 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:14.636 16:51:14 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:14.636 16:51:14 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:14.636 16:51:14 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:14.636 16:51:14 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:14.636 16:51:14 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:14.636 16:51:14 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:43:14.636 16:51:14 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:14.636 16:51:14 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:14.636 16:51:14 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:14.636 16:51:14 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:14.636 16:51:14 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:14.636 16:51:14 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:43:14.636 16:51:14 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:14.636 16:51:14 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:14.636 16:51:14 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:14.636 16:51:14 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:14.636 16:51:14 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:14.636 16:51:14 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:14.636 16:51:14 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:43:14.636 16:51:14 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:14.636 16:51:14 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:43:14.636 16:51:14 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:14.636 16:51:14 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:14.636 16:51:14 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:14.636 16:51:14 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:14.636 16:51:14 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:14.636 16:51:14 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:14.636 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:14.636 16:51:14 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:14.636 16:51:14 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:14.636 16:51:14 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:14.636 16:51:14 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:43:14.636 16:51:14 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:43:14.636 16:51:14 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:43:14.636 16:51:14 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:43:14.636 16:51:14 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:43:14.636 16:51:14 nvmf_dif -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:43:14.636 16:51:14 nvmf_dif -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:14.636 16:51:14 nvmf_dif -- nvmf/common.sh@472 -- # prepare_net_devs 00:43:14.636 16:51:14 nvmf_dif -- nvmf/common.sh@434 -- # local -g is_hw=no 00:43:14.636 16:51:14 nvmf_dif -- nvmf/common.sh@436 -- # remove_spdk_ns 00:43:14.636 16:51:14 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:14.636 16:51:14 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:14.636 16:51:14 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:14.636 16:51:14 nvmf_dif -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:43:14.636 16:51:14 nvmf_dif -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:43:14.636 16:51:14 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:43:14.636 16:51:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:43:16.542 16:51:16 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:43:16.542 
16:51:16 nvmf_dif -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:43:16.542 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:43:16.542 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@414 -- # [[ up == up ]] 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@423 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:43:16.542 Found net devices under 0000:0a:00.0: cvl_0_0 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@414 -- # [[ up == up ]] 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:43:16.542 Found net devices under 0000:0a:00.1: cvl_0_1 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@438 -- # is_hw=yes 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:16.542 16:51:16 nvmf_dif -- 
nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:16.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:43:16.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:43:16.542 00:43:16.542 --- 10.0.0.2 ping statistics --- 00:43:16.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:16.542 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:16.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:16.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:43:16.542 00:43:16.542 --- 10.0.0.1 ping statistics --- 00:43:16.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:16.542 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@446 -- # return 0 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:43:16.542 16:51:16 nvmf_dif -- nvmf/common.sh@475 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:43:17.480 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:43:17.480 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:43:17.480 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:43:17.480 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:43:17.480 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:43:17.480 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:43:17.480 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:43:17.480 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:43:17.480 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:43:17.480 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:43:17.480 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:43:17.480 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:43:17.480 0000:80:04.4 (8086 0e24): Already 
using the vfio-pci driver 00:43:17.480 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:43:17.480 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:43:17.480 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:43:17.480 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:43:17.739 16:51:18 nvmf_dif -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:17.739 16:51:18 nvmf_dif -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:43:17.739 16:51:18 nvmf_dif -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:43:17.739 16:51:18 nvmf_dif -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:17.739 16:51:18 nvmf_dif -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:43:17.739 16:51:18 nvmf_dif -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:43:17.739 16:51:18 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:43:17.739 16:51:18 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:43:17.739 16:51:18 nvmf_dif -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:43:17.739 16:51:18 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:17.739 16:51:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:17.739 16:51:18 nvmf_dif -- nvmf/common.sh@505 -- # nvmfpid=3396534 00:43:17.739 16:51:18 nvmf_dif -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:43:17.739 16:51:18 nvmf_dif -- nvmf/common.sh@506 -- # waitforlisten 3396534 00:43:17.739 16:51:18 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 3396534 ']' 00:43:17.739 16:51:18 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:17.739 16:51:18 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:17.739 16:51:18 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:43:17.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:17.739 16:51:18 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:17.739 16:51:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:17.739 [2024-09-29 16:51:18.284925] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:43:17.739 [2024-09-29 16:51:18.285056] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:17.997 [2024-09-29 16:51:18.424826] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:18.255 [2024-09-29 16:51:18.688556] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:18.255 [2024-09-29 16:51:18.688649] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:18.255 [2024-09-29 16:51:18.688685] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:18.255 [2024-09-29 16:51:18.688713] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:18.255 [2024-09-29 16:51:18.688738] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:43:18.255 [2024-09-29 16:51:18.688793] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:43:18.820 16:51:19 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:18.820 16:51:19 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:43:18.820 16:51:19 nvmf_dif -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:43:18.820 16:51:19 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:18.820 16:51:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:18.820 16:51:19 nvmf_dif -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:18.820 16:51:19 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:43:18.820 16:51:19 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:43:18.820 16:51:19 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:18.820 16:51:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:18.820 [2024-09-29 16:51:19.381929] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:19.079 16:51:19 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:19.079 16:51:19 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:43:19.079 16:51:19 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:43:19.079 16:51:19 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:19.079 16:51:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:19.079 ************************************ 00:43:19.079 START TEST fio_dif_1_default 00:43:19.079 ************************************ 00:43:19.079 16:51:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:43:19.079 16:51:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:43:19.079 16:51:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:43:19.079 16:51:19 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:43:19.079 16:51:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:43:19.079 16:51:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:43:19.079 16:51:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:43:19.079 16:51:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:19.079 16:51:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:19.079 bdev_null0 00:43:19.079 16:51:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:19.079 16:51:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:19.079 16:51:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:19.079 16:51:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:19.079 16:51:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:19.079 16:51:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:19.079 16:51:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:19.079 16:51:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:19.079 16:51:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:19.079 16:51:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:19.079 16:51:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:19.079 16:51:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:19.079 [2024-09-29 16:51:19.442311] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:19.079 16:51:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:19.079 16:51:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:43:19.079 16:51:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:43:19.079 16:51:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:43:19.079 16:51:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # config=() 00:43:19.079 16:51:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # local subsystem config 00:43:19.079 16:51:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:43:19.079 16:51:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:19.079 16:51:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:43:19.079 { 00:43:19.079 "params": { 00:43:19.079 "name": "Nvme$subsystem", 00:43:19.079 "trtype": "$TEST_TRANSPORT", 00:43:19.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:19.079 "adrfam": "ipv4", 00:43:19.079 "trsvcid": "$NVMF_PORT", 00:43:19.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:19.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:19.079 "hdgst": ${hdgst:-false}, 00:43:19.079 "ddgst": ${ddgst:-false} 00:43:19.079 }, 00:43:19.079 "method": "bdev_nvme_attach_controller" 00:43:19.079 } 00:43:19.079 EOF 00:43:19.079 )") 00:43:19.079 16:51:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:43:19.079 16:51:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:19.079 16:51:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:43:19.079 16:51:19 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:43:19.079 16:51:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:43:19.079 16:51:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:19.079 16:51:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:43:19.079 16:51:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:19.079 16:51:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:43:19.079 16:51:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:43:19.079 16:51:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # cat 00:43:19.079 16:51:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:43:19.079 16:51:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:43:19.079 16:51:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:19.079 16:51:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:43:19.079 16:51:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:43:19.079 16:51:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:43:19.080 16:51:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # jq . 
00:43:19.080 16:51:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@581 -- # IFS=, 00:43:19.080 16:51:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:43:19.080 "params": { 00:43:19.080 "name": "Nvme0", 00:43:19.080 "trtype": "tcp", 00:43:19.080 "traddr": "10.0.0.2", 00:43:19.080 "adrfam": "ipv4", 00:43:19.080 "trsvcid": "4420", 00:43:19.080 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:19.080 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:19.080 "hdgst": false, 00:43:19.080 "ddgst": false 00:43:19.080 }, 00:43:19.080 "method": "bdev_nvme_attach_controller" 00:43:19.080 }' 00:43:19.080 16:51:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:19.080 16:51:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:19.080 16:51:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # break 00:43:19.080 16:51:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:19.080 16:51:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:19.338 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:43:19.338 fio-3.35 00:43:19.338 Starting 1 thread 00:43:31.534 00:43:31.534 filename0: (groupid=0, jobs=1): err= 0: pid=3396887: Sun Sep 29 16:51:30 2024 00:43:31.534 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10008msec) 00:43:31.534 slat (nsec): min=5305, max=72895, avg=13750.51, stdev=4434.21 00:43:31.534 clat (usec): min=747, max=44298, avg=21070.77, stdev=20158.61 00:43:31.534 lat (usec): min=758, max=44322, avg=21084.52, stdev=20158.23 00:43:31.534 clat percentiles (usec): 00:43:31.534 | 1.00th=[ 775], 5.00th=[ 791], 10.00th=[ 799], 20.00th=[ 816], 
00:43:31.534 | 30.00th=[ 832], 40.00th=[ 865], 50.00th=[41157], 60.00th=[41157], 00:43:31.534 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:31.534 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:43:31.534 | 99.99th=[44303] 00:43:31.534 bw ( KiB/s): min= 672, max= 768, per=99.76%, avg=756.80, stdev=28.00, samples=20 00:43:31.534 iops : min= 168, max= 192, avg=189.20, stdev= 7.00, samples=20 00:43:31.534 lat (usec) : 750=0.05%, 1000=49.68% 00:43:31.534 lat (msec) : 2=0.05%, 50=50.21% 00:43:31.534 cpu : usr=92.25%, sys=7.25%, ctx=14, majf=0, minf=1635 00:43:31.534 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:31.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:31.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:31.534 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:31.534 latency : target=0, window=0, percentile=100.00%, depth=4 00:43:31.534 00:43:31.534 Run status group 0 (all jobs): 00:43:31.534 READ: bw=758KiB/s (776kB/s), 758KiB/s-758KiB/s (776kB/s-776kB/s), io=7584KiB (7766kB), run=10008-10008msec 00:43:31.534 ----------------------------------------------------- 00:43:31.534 Suppressions used: 00:43:31.534 count bytes template 00:43:31.534 1 8 /usr/src/fio/parse.c 00:43:31.534 1 8 libtcmalloc_minimal.so 00:43:31.534 1 904 libcrypto.so 00:43:31.534 ----------------------------------------------------- 00:43:31.534 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:43:31.534 16:51:31 
nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:31.534 00:43:31.534 real 0m12.383s 00:43:31.534 user 0m11.458s 00:43:31.534 sys 0m1.194s 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:31.534 ************************************ 00:43:31.534 END TEST fio_dif_1_default 00:43:31.534 ************************************ 00:43:31.534 16:51:31 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:43:31.534 16:51:31 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:43:31.534 16:51:31 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:31.534 16:51:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:31.534 ************************************ 00:43:31.534 START TEST fio_dif_1_multi_subsystems 00:43:31.534 ************************************ 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:43:31.534 16:51:31 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:31.534 bdev_null0 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:31.534 16:51:31 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:31.534 [2024-09-29 16:51:31.881324] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:31.534 bdev_null1 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:31.534 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:31.535 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:31.535 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:43:31.535 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:43:31.535 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:43:31.535 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # config=() 00:43:31.535 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # local subsystem config 00:43:31.535 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:43:31.535 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:43:31.535 { 00:43:31.535 "params": { 00:43:31.535 "name": "Nvme$subsystem", 00:43:31.535 "trtype": "$TEST_TRANSPORT", 00:43:31.535 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:31.535 "adrfam": "ipv4", 00:43:31.535 "trsvcid": "$NVMF_PORT", 00:43:31.535 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:31.535 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:31.535 
"hdgst": ${hdgst:-false}, 00:43:31.535 "ddgst": ${ddgst:-false} 00:43:31.535 }, 00:43:31.535 "method": "bdev_nvme_attach_controller" 00:43:31.535 } 00:43:31.535 EOF 00:43:31.535 )") 00:43:31.535 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:31.535 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:31.535 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:43:31.535 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:43:31.535 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:31.535 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:43:31.535 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:43:31.535 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:43:31.535 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:31.535 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:43:31.535 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:43:31.535 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:43:31.535 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:43:31.535 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:31.535 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:43:31.535 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:43:31.535 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:43:31.535 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:43:31.535 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:43:31.535 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:43:31.535 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:43:31.535 { 00:43:31.535 "params": { 00:43:31.535 "name": "Nvme$subsystem", 00:43:31.535 "trtype": "$TEST_TRANSPORT", 00:43:31.535 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:31.535 "adrfam": "ipv4", 00:43:31.535 "trsvcid": "$NVMF_PORT", 00:43:31.535 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:31.535 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:31.535 "hdgst": ${hdgst:-false}, 00:43:31.535 "ddgst": ${ddgst:-false} 00:43:31.535 }, 00:43:31.535 "method": "bdev_nvme_attach_controller" 00:43:31.535 } 00:43:31.535 EOF 00:43:31.535 )") 00:43:31.535 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:43:31.535 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:43:31.535 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:43:31.535 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # jq . 
00:43:31.535 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@581 -- # IFS=, 00:43:31.535 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:43:31.535 "params": { 00:43:31.535 "name": "Nvme0", 00:43:31.535 "trtype": "tcp", 00:43:31.535 "traddr": "10.0.0.2", 00:43:31.535 "adrfam": "ipv4", 00:43:31.535 "trsvcid": "4420", 00:43:31.535 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:31.535 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:31.535 "hdgst": false, 00:43:31.535 "ddgst": false 00:43:31.535 }, 00:43:31.535 "method": "bdev_nvme_attach_controller" 00:43:31.535 },{ 00:43:31.535 "params": { 00:43:31.535 "name": "Nvme1", 00:43:31.535 "trtype": "tcp", 00:43:31.535 "traddr": "10.0.0.2", 00:43:31.535 "adrfam": "ipv4", 00:43:31.535 "trsvcid": "4420", 00:43:31.535 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:31.535 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:31.535 "hdgst": false, 00:43:31.535 "ddgst": false 00:43:31.535 }, 00:43:31.535 "method": "bdev_nvme_attach_controller" 00:43:31.535 }' 00:43:31.535 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:31.535 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:31.535 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # break 00:43:31.535 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:31.535 16:51:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:31.793 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:43:31.793 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:43:31.793 fio-3.35 00:43:31.793 Starting 2 threads 00:43:43.993 00:43:43.993 filename0: (groupid=0, jobs=1): err= 0: pid=3398409: Sun Sep 29 16:51:43 2024 00:43:43.993 read: IOPS=97, BW=389KiB/s (398kB/s)(3904KiB/10035msec) 00:43:43.993 slat (nsec): min=4974, max=75509, avg=13667.59, stdev=5321.57 00:43:43.993 clat (usec): min=40842, max=45042, avg=41081.50, stdev=392.79 00:43:43.993 lat (usec): min=40856, max=45058, avg=41095.17, stdev=393.59 00:43:43.993 clat percentiles (usec): 00:43:43.993 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:43:43.993 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:43.993 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:43:43.993 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:43:43.993 | 99.99th=[44827] 00:43:43.993 bw ( KiB/s): min= 384, max= 416, per=49.87%, avg=388.80, stdev=11.72, samples=20 00:43:43.993 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:43:43.993 lat (msec) : 50=100.00% 00:43:43.993 cpu : usr=95.34%, sys=4.16%, ctx=14, majf=0, minf=1633 00:43:43.993 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:43.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:43.993 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:43.993 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:43.993 latency : target=0, window=0, percentile=100.00%, depth=4 00:43:43.993 filename1: (groupid=0, jobs=1): err= 0: pid=3398410: Sun Sep 29 16:51:43 2024 00:43:43.993 read: IOPS=97, BW=389KiB/s (398kB/s)(3904KiB/10035msec) 00:43:43.993 slat (nsec): min=5344, max=41808, avg=13774.90, stdev=4990.79 00:43:43.993 clat (usec): min=40847, max=45242, avg=41082.84, stdev=418.25 00:43:43.993 lat (usec): min=40857, max=45284, avg=41096.61, stdev=418.99 00:43:43.993 clat percentiles 
(usec): 00:43:43.993 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:43:43.993 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:43.993 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:43:43.993 | 99.00th=[42730], 99.50th=[42730], 99.90th=[45351], 99.95th=[45351], 00:43:43.993 | 99.99th=[45351] 00:43:43.993 bw ( KiB/s): min= 384, max= 416, per=49.87%, avg=388.80, stdev=11.72, samples=20 00:43:43.993 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:43:43.993 lat (msec) : 50=100.00% 00:43:43.993 cpu : usr=94.89%, sys=4.60%, ctx=16, majf=0, minf=1636 00:43:43.993 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:43.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:43.993 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:43.993 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:43.993 latency : target=0, window=0, percentile=100.00%, depth=4 00:43:43.993 00:43:43.993 Run status group 0 (all jobs): 00:43:43.993 READ: bw=778KiB/s (797kB/s), 389KiB/s-389KiB/s (398kB/s-398kB/s), io=7808KiB (7995kB), run=10035-10035msec 00:43:43.993 ----------------------------------------------------- 00:43:43.993 Suppressions used: 00:43:43.993 count bytes template 00:43:43.993 2 16 /usr/src/fio/parse.c 00:43:43.993 1 8 libtcmalloc_minimal.so 00:43:43.993 1 904 libcrypto.so 00:43:43.993 ----------------------------------------------------- 00:43:43.993 00:43:43.993 16:51:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:43:43.993 16:51:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:43:43.993 16:51:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:43:43.993 16:51:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:43.993 16:51:44 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:43:43.993 16:51:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:43.993 16:51:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:43.993 16:51:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:43.993 16:51:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:43.993 16:51:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:43.993 16:51:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:43.993 16:51:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:43.993 16:51:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:43.993 16:51:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:43:43.993 16:51:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:43:43.993 16:51:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:43:43.993 16:51:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:43.993 16:51:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:43.993 16:51:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:43.993 16:51:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:43.993 16:51:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:43:43.993 16:51:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:43.993 16:51:44 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:43.993 16:51:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:43.993 00:43:43.993 real 0m12.591s 00:43:43.993 user 0m21.379s 00:43:43.993 sys 0m1.357s 00:43:43.993 16:51:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:43.993 16:51:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:43.993 ************************************ 00:43:43.993 END TEST fio_dif_1_multi_subsystems 00:43:43.993 ************************************ 00:43:43.993 16:51:44 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:43:43.993 16:51:44 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:43:43.993 16:51:44 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:43.993 16:51:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:43.993 ************************************ 00:43:43.993 START TEST fio_dif_rand_params 00:43:43.993 ************************************ 00:43:43.993 16:51:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:43:43.993 16:51:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:43:43.993 16:51:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:43:43.993 16:51:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:43:43.993 16:51:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:43:43.993 16:51:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:43:43.993 16:51:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:43:43.993 16:51:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:43:43.993 16:51:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 
00:43:43.993 16:51:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:43:43.993 16:51:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:43.993 16:51:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:43:43.993 16:51:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:43:43.993 16:51:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:43:43.993 16:51:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:43.993 16:51:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:43.993 bdev_null0 00:43:43.993 16:51:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:43.993 16:51:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:43.993 16:51:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:43.993 16:51:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:43.993 16:51:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:43.993 16:51:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:43.993 16:51:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:43.993 16:51:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:43.993 16:51:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:43.993 16:51:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:43.993 16:51:44 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:43:43.993 16:51:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:43.993 [2024-09-29 16:51:44.521763] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:43.993 16:51:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:43.993 16:51:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:43:43.993 16:51:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:43:43.993 16:51:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:43:43.993 16:51:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:43:43.993 16:51:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:43:43.993 16:51:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:43:43.993 16:51:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:43.993 16:51:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:43:43.993 { 00:43:43.993 "params": { 00:43:43.993 "name": "Nvme$subsystem", 00:43:43.993 "trtype": "$TEST_TRANSPORT", 00:43:43.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:43.993 "adrfam": "ipv4", 00:43:43.993 "trsvcid": "$NVMF_PORT", 00:43:43.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:43.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:43.993 "hdgst": ${hdgst:-false}, 00:43:43.993 "ddgst": ${ddgst:-false} 00:43:43.993 }, 00:43:43.993 "method": "bdev_nvme_attach_controller" 00:43:43.993 } 00:43:43.993 EOF 00:43:43.994 )") 00:43:43.994 16:51:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 
/dev/fd/61 00:43:43.994 16:51:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:43:43.994 16:51:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:43.994 16:51:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:43:43.994 16:51:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:43:43.994 16:51:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:43.994 16:51:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:43:43.994 16:51:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:43:43.994 16:51:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:43:43.994 16:51:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:43:43.994 16:51:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:43:43.994 16:51:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:43:43.994 16:51:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:43.994 16:51:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:43:43.994 16:51:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:43:43.994 16:51:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:43:43.994 16:51:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:43.994 16:51:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 
00:43:43.994 16:51:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:43:43.994 16:51:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:43:43.994 "params": { 00:43:43.994 "name": "Nvme0", 00:43:43.994 "trtype": "tcp", 00:43:43.994 "traddr": "10.0.0.2", 00:43:43.994 "adrfam": "ipv4", 00:43:43.994 "trsvcid": "4420", 00:43:43.994 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:43.994 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:43.994 "hdgst": false, 00:43:43.994 "ddgst": false 00:43:43.994 }, 00:43:43.994 "method": "bdev_nvme_attach_controller" 00:43:43.994 }' 00:43:43.994 16:51:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:43.994 16:51:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:43.994 16:51:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:43:43.994 16:51:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:43.994 16:51:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:44.560 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:43:44.560 ... 
00:43:44.560 fio-3.35 00:43:44.560 Starting 3 threads 00:43:51.117 00:43:51.117 filename0: (groupid=0, jobs=1): err= 0: pid=3399928: Sun Sep 29 16:51:50 2024 00:43:51.117 read: IOPS=166, BW=20.9MiB/s (21.9MB/s)(105MiB/5048msec) 00:43:51.117 slat (nsec): min=6196, max=59199, avg=26669.02, stdev=5727.68 00:43:51.117 clat (usec): min=8992, max=64936, avg=17882.62, stdev=4885.55 00:43:51.117 lat (usec): min=9022, max=64959, avg=17909.29, stdev=4884.32 00:43:51.117 clat percentiles (usec): 00:43:51.117 | 1.00th=[10421], 5.00th=[11994], 10.00th=[13829], 20.00th=[15533], 00:43:51.117 | 30.00th=[16319], 40.00th=[16909], 50.00th=[17433], 60.00th=[17957], 00:43:51.117 | 70.00th=[18482], 80.00th=[19268], 90.00th=[21890], 95.00th=[24511], 00:43:51.117 | 99.00th=[26346], 99.50th=[51643], 99.90th=[64750], 99.95th=[64750], 00:43:51.117 | 99.99th=[64750] 00:43:51.117 bw ( KiB/s): min=16640, max=23808, per=31.64%, avg=21504.00, stdev=2577.01, samples=10 00:43:51.117 iops : min= 130, max= 186, avg=168.00, stdev=20.13, samples=10 00:43:51.117 lat (msec) : 10=0.59%, 20=84.58%, 50=14.00%, 100=0.83% 00:43:51.117 cpu : usr=93.40%, sys=5.41%, ctx=185, majf=0, minf=1633 00:43:51.117 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:51.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:51.117 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:51.117 issued rwts: total=843,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:51.117 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:51.117 filename0: (groupid=0, jobs=1): err= 0: pid=3399929: Sun Sep 29 16:51:50 2024 00:43:51.117 read: IOPS=176, BW=22.0MiB/s (23.1MB/s)(111MiB/5046msec) 00:43:51.117 slat (nsec): min=5347, max=57491, avg=21606.44, stdev=5428.53 00:43:51.117 clat (usec): min=6312, max=59672, avg=16932.94, stdev=4719.09 00:43:51.117 lat (usec): min=6330, max=59690, avg=16954.55, stdev=4719.09 00:43:51.117 clat percentiles (usec): 
00:43:51.117 | 1.00th=[ 7308], 5.00th=[10552], 10.00th=[12518], 20.00th=[14484], 00:43:51.117 | 30.00th=[15795], 40.00th=[16581], 50.00th=[17171], 60.00th=[17695], 00:43:51.117 | 70.00th=[18220], 80.00th=[18744], 90.00th=[19530], 95.00th=[20317], 00:43:51.117 | 99.00th=[24249], 99.50th=[56886], 99.90th=[59507], 99.95th=[59507], 00:43:51.117 | 99.99th=[59507] 00:43:51.117 bw ( KiB/s): min=19712, max=25600, per=33.45%, avg=22737.00, stdev=1852.72, samples=10 00:43:51.117 iops : min= 154, max= 200, avg=177.60, stdev=14.51, samples=10 00:43:51.117 lat (msec) : 10=3.60%, 20=89.78%, 50=5.73%, 100=0.90% 00:43:51.117 cpu : usr=93.36%, sys=6.09%, ctx=12, majf=0, minf=1634 00:43:51.117 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:51.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:51.117 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:51.117 issued rwts: total=890,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:51.117 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:51.117 filename0: (groupid=0, jobs=1): err= 0: pid=3399930: Sun Sep 29 16:51:50 2024 00:43:51.117 read: IOPS=187, BW=23.5MiB/s (24.6MB/s)(119MiB/5049msec) 00:43:51.117 slat (nsec): min=5067, max=50814, avg=20768.34, stdev=5440.70 00:43:51.117 clat (usec): min=10303, max=60569, avg=15902.83, stdev=6522.67 00:43:51.117 lat (usec): min=10326, max=60586, avg=15923.59, stdev=6522.28 00:43:51.117 clat percentiles (usec): 00:43:51.117 | 1.00th=[11076], 5.00th=[12256], 10.00th=[12649], 20.00th=[13173], 00:43:51.117 | 30.00th=[13566], 40.00th=[13960], 50.00th=[14353], 60.00th=[15008], 00:43:51.117 | 70.00th=[15664], 80.00th=[17171], 90.00th=[18482], 95.00th=[19792], 00:43:51.117 | 99.00th=[54264], 99.50th=[55837], 99.90th=[60556], 99.95th=[60556], 00:43:51.117 | 99.99th=[60556] 00:43:51.117 bw ( KiB/s): min=19712, max=27136, per=35.63%, avg=24217.60, stdev=2627.93, samples=10 00:43:51.117 iops : min= 154, max= 212, 
avg=189.20, stdev=20.53, samples=10 00:43:51.117 lat (msec) : 20=95.15%, 50=2.43%, 100=2.43% 00:43:51.117 cpu : usr=92.21%, sys=7.23%, ctx=11, majf=0, minf=1634 00:43:51.117 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:51.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:51.117 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:51.117 issued rwts: total=948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:51.117 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:51.117 00:43:51.117 Run status group 0 (all jobs): 00:43:51.117 READ: bw=66.4MiB/s (69.6MB/s), 20.9MiB/s-23.5MiB/s (21.9MB/s-24.6MB/s), io=335MiB (351MB), run=5046-5049msec 00:43:51.685 ----------------------------------------------------- 00:43:51.685 Suppressions used: 00:43:51.685 count bytes template 00:43:51.685 5 44 /usr/src/fio/parse.c 00:43:51.685 1 8 libtcmalloc_minimal.so 00:43:51.685 1 904 libcrypto.so 00:43:51.685 ----------------------------------------------------- 00:43:51.685 00:43:51.685 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:43:51.685 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:43:51.685 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:51.685 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:51.685 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:43:51.685 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:51.685 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:51.685 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:51.685 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:51.685 16:51:52 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:51.685 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:51.685 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:51.685 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:51.685 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:43:51.685 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:43:51.685 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:43:51.685 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:43:51.685 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:51.686 bdev_null0 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 
53313233-0 --allow-any-host 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:51.686 [2024-09-29 16:51:52.072568] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:43:51.686 bdev_null1 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:51.686 bdev_null2 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:43:51.686 16:51:52 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:43:51.686 { 00:43:51.686 "params": { 00:43:51.686 "name": "Nvme$subsystem", 00:43:51.686 "trtype": "$TEST_TRANSPORT", 00:43:51.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:51.686 "adrfam": "ipv4", 00:43:51.686 "trsvcid": "$NVMF_PORT", 00:43:51.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:51.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:51.686 "hdgst": ${hdgst:-false}, 00:43:51.686 "ddgst": ${ddgst:-false} 00:43:51.686 }, 00:43:51.686 "method": "bdev_nvme_attach_controller" 00:43:51.686 } 00:43:51.686 EOF 00:43:51.686 )") 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:51.686 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:43:51.687 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:43:51.687 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:43:51.687 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:51.687 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:43:51.687 16:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:43:51.687 16:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:43:51.687 { 00:43:51.687 "params": { 00:43:51.687 "name": "Nvme$subsystem", 00:43:51.687 "trtype": "$TEST_TRANSPORT", 00:43:51.687 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:51.687 "adrfam": "ipv4", 00:43:51.687 "trsvcid": "$NVMF_PORT", 00:43:51.687 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:51.687 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:51.687 "hdgst": ${hdgst:-false}, 00:43:51.687 "ddgst": ${ddgst:-false} 00:43:51.687 }, 00:43:51.687 "method": "bdev_nvme_attach_controller" 00:43:51.687 } 00:43:51.687 EOF 00:43:51.687 )") 00:43:51.687 16:51:52 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:43:51.687 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:43:51.687 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:51.687 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:43:51.687 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:43:51.687 16:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:43:51.687 16:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:51.687 16:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:43:51.687 { 00:43:51.687 "params": { 00:43:51.687 "name": "Nvme$subsystem", 00:43:51.687 "trtype": "$TEST_TRANSPORT", 00:43:51.687 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:51.687 "adrfam": "ipv4", 00:43:51.687 "trsvcid": "$NVMF_PORT", 00:43:51.687 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:51.687 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:51.687 "hdgst": ${hdgst:-false}, 00:43:51.687 "ddgst": ${ddgst:-false} 00:43:51.687 }, 00:43:51.687 "method": "bdev_nvme_attach_controller" 00:43:51.687 } 00:43:51.687 EOF 00:43:51.687 )") 00:43:51.687 16:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:43:51.687 16:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 
00:43:51.687 16:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:43:51.687 16:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:43:51.687 "params": { 00:43:51.687 "name": "Nvme0", 00:43:51.687 "trtype": "tcp", 00:43:51.687 "traddr": "10.0.0.2", 00:43:51.687 "adrfam": "ipv4", 00:43:51.687 "trsvcid": "4420", 00:43:51.687 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:51.687 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:51.687 "hdgst": false, 00:43:51.687 "ddgst": false 00:43:51.687 }, 00:43:51.687 "method": "bdev_nvme_attach_controller" 00:43:51.687 },{ 00:43:51.687 "params": { 00:43:51.687 "name": "Nvme1", 00:43:51.687 "trtype": "tcp", 00:43:51.687 "traddr": "10.0.0.2", 00:43:51.687 "adrfam": "ipv4", 00:43:51.687 "trsvcid": "4420", 00:43:51.687 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:51.687 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:51.687 "hdgst": false, 00:43:51.687 "ddgst": false 00:43:51.687 }, 00:43:51.687 "method": "bdev_nvme_attach_controller" 00:43:51.687 },{ 00:43:51.687 "params": { 00:43:51.687 "name": "Nvme2", 00:43:51.687 "trtype": "tcp", 00:43:51.687 "traddr": "10.0.0.2", 00:43:51.687 "adrfam": "ipv4", 00:43:51.687 "trsvcid": "4420", 00:43:51.687 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:43:51.687 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:43:51.687 "hdgst": false, 00:43:51.687 "ddgst": false 00:43:51.687 }, 00:43:51.687 "method": "bdev_nvme_attach_controller" 00:43:51.687 }' 00:43:51.687 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:51.687 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:51.687 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:43:51.687 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:51.687 16:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:51.946 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:43:51.946 ... 00:43:51.946 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:43:51.946 ... 00:43:51.946 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:43:51.946 ... 00:43:51.946 fio-3.35 00:43:51.946 Starting 24 threads 00:44:04.149 00:44:04.149 filename0: (groupid=0, jobs=1): err= 0: pid=3400929: Sun Sep 29 16:52:04 2024 00:44:04.149 read: IOPS=275, BW=1103KiB/s (1129kB/s)(10.8MiB/10041msec) 00:44:04.149 slat (usec): min=5, max=100, avg=44.44, stdev=14.82 00:44:04.149 clat (usec): min=31292, max=87734, avg=57628.22, stdev=10301.72 00:44:04.149 lat (usec): min=31317, max=87751, avg=57672.66, stdev=10300.18 00:44:04.149 clat percentiles (usec): 00:44:04.149 | 1.00th=[44303], 5.00th=[44827], 10.00th=[44827], 20.00th=[45351], 00:44:04.150 | 30.00th=[46400], 40.00th=[51119], 50.00th=[64226], 60.00th=[64750], 00:44:04.150 | 70.00th=[64750], 80.00th=[65274], 90.00th=[66847], 95.00th=[67634], 00:44:04.150 | 99.00th=[84411], 99.50th=[85459], 99.90th=[86508], 99.95th=[87557], 00:44:04.150 | 99.99th=[87557] 00:44:04.150 bw ( KiB/s): min= 896, max= 1408, per=4.17%, avg=1100.35, stdev=200.14, samples=20 00:44:04.150 iops : min= 224, max= 352, avg=275.05, stdev=50.05, samples=20 00:44:04.150 lat (msec) : 50=39.45%, 100=60.55% 00:44:04.150 cpu : usr=98.43%, sys=1.04%, ctx=19, majf=0, minf=1632 00:44:04.150 IO depths : 1=5.2%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.3%, 32=0.0%, >=64=0.0% 00:44:04.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.150 
complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.150 issued rwts: total=2768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:04.150 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:04.150 filename0: (groupid=0, jobs=1): err= 0: pid=3400930: Sun Sep 29 16:52:04 2024 00:44:04.150 read: IOPS=289, BW=1157KiB/s (1185kB/s)(11.3MiB/10028msec) 00:44:04.150 slat (nsec): min=4793, max=66324, avg=23511.88, stdev=10887.74 00:44:04.150 clat (msec): min=16, max=147, avg=55.15, stdev=14.57 00:44:04.150 lat (msec): min=16, max=147, avg=55.17, stdev=14.57 00:44:04.150 clat percentiles (msec): 00:44:04.150 | 1.00th=[ 24], 5.00th=[ 35], 10.00th=[ 41], 20.00th=[ 46], 00:44:04.150 | 30.00th=[ 46], 40.00th=[ 47], 50.00th=[ 56], 60.00th=[ 65], 00:44:04.150 | 70.00th=[ 65], 80.00th=[ 66], 90.00th=[ 66], 95.00th=[ 68], 00:44:04.150 | 99.00th=[ 104], 99.50th=[ 148], 99.90th=[ 148], 99.95th=[ 148], 00:44:04.150 | 99.99th=[ 148] 00:44:04.150 bw ( KiB/s): min= 753, max= 1584, per=4.37%, avg=1153.50, stdev=223.66, samples=20 00:44:04.150 iops : min= 188, max= 396, avg=288.35, stdev=55.92, samples=20 00:44:04.150 lat (msec) : 20=0.14%, 50=45.31%, 100=53.52%, 250=1.03% 00:44:04.150 cpu : usr=97.40%, sys=1.70%, ctx=113, majf=0, minf=1635 00:44:04.150 IO depths : 1=2.6%, 2=6.2%, 4=15.9%, 8=64.1%, 16=11.2%, 32=0.0%, >=64=0.0% 00:44:04.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.150 complete : 0=0.0%, 4=92.0%, 8=3.6%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.150 issued rwts: total=2900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:04.150 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:04.150 filename0: (groupid=0, jobs=1): err= 0: pid=3400931: Sun Sep 29 16:52:04 2024 00:44:04.150 read: IOPS=275, BW=1103KiB/s (1129kB/s)(10.8MiB/10042msec) 00:44:04.150 slat (usec): min=4, max=102, avg=39.54, stdev=13.35 00:44:04.150 clat (usec): min=32167, max=89060, avg=57731.53, stdev=10239.81 00:44:04.150 lat 
(usec): min=32198, max=89102, avg=57771.07, stdev=10237.52 00:44:04.150 clat percentiles (usec): 00:44:04.150 | 1.00th=[44303], 5.00th=[44827], 10.00th=[45351], 20.00th=[45351], 00:44:04.150 | 30.00th=[46400], 40.00th=[55837], 50.00th=[64226], 60.00th=[64750], 00:44:04.150 | 70.00th=[65274], 80.00th=[65274], 90.00th=[66847], 95.00th=[67634], 00:44:04.150 | 99.00th=[85459], 99.50th=[86508], 99.90th=[88605], 99.95th=[88605], 00:44:04.150 | 99.99th=[88605] 00:44:04.150 bw ( KiB/s): min= 896, max= 1408, per=4.17%, avg=1100.35, stdev=200.61, samples=20 00:44:04.150 iops : min= 224, max= 352, avg=275.05, stdev=50.17, samples=20 00:44:04.150 lat (msec) : 50=38.73%, 100=61.27% 00:44:04.150 cpu : usr=96.26%, sys=2.12%, ctx=169, majf=0, minf=1635 00:44:04.150 IO depths : 1=5.6%, 2=11.8%, 4=25.0%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:44:04.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.150 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.150 issued rwts: total=2768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:04.150 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:04.150 filename0: (groupid=0, jobs=1): err= 0: pid=3400932: Sun Sep 29 16:52:04 2024 00:44:04.150 read: IOPS=274, BW=1098KiB/s (1124kB/s)(10.8MiB/10026msec) 00:44:04.150 slat (nsec): min=5192, max=86310, avg=40741.82, stdev=11862.86 00:44:04.150 clat (msec): min=31, max=127, avg=57.92, stdev=11.68 00:44:04.150 lat (msec): min=31, max=127, avg=57.96, stdev=11.67 00:44:04.150 clat percentiles (msec): 00:44:04.150 | 1.00th=[ 45], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 46], 00:44:04.150 | 30.00th=[ 47], 40.00th=[ 51], 50.00th=[ 65], 60.00th=[ 65], 00:44:04.150 | 70.00th=[ 65], 80.00th=[ 66], 90.00th=[ 67], 95.00th=[ 68], 00:44:04.150 | 99.00th=[ 86], 99.50th=[ 128], 99.90th=[ 128], 99.95th=[ 128], 00:44:04.150 | 99.99th=[ 128] 00:44:04.150 bw ( KiB/s): min= 769, max= 1408, per=4.15%, avg=1094.10, stdev=200.32, samples=20 00:44:04.150 
iops : min= 192, max= 352, avg=273.50, stdev=50.11, samples=20 00:44:04.150 lat (msec) : 50=39.90%, 100=59.52%, 250=0.58% 00:44:04.150 cpu : usr=98.34%, sys=1.13%, ctx=18, majf=0, minf=1633 00:44:04.150 IO depths : 1=5.0%, 2=11.2%, 4=25.0%, 8=51.3%, 16=7.5%, 32=0.0%, >=64=0.0% 00:44:04.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.150 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.150 issued rwts: total=2752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:04.150 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:04.150 filename0: (groupid=0, jobs=1): err= 0: pid=3400933: Sun Sep 29 16:52:04 2024 00:44:04.150 read: IOPS=275, BW=1102KiB/s (1129kB/s)(10.8MiB/10043msec) 00:44:04.150 slat (nsec): min=4525, max=83556, avg=27887.65, stdev=14023.34 00:44:04.150 clat (usec): min=32030, max=88376, avg=57832.70, stdev=10046.19 00:44:04.150 lat (usec): min=32049, max=88440, avg=57860.58, stdev=10044.01 00:44:04.150 clat percentiles (usec): 00:44:04.150 | 1.00th=[44303], 5.00th=[44827], 10.00th=[45351], 20.00th=[45876], 00:44:04.150 | 30.00th=[46924], 40.00th=[56361], 50.00th=[64226], 60.00th=[64750], 00:44:04.150 | 70.00th=[65274], 80.00th=[65799], 90.00th=[66847], 95.00th=[67634], 00:44:04.150 | 99.00th=[86508], 99.50th=[86508], 99.90th=[87557], 99.95th=[88605], 00:44:04.150 | 99.99th=[88605] 00:44:04.150 bw ( KiB/s): min= 896, max= 1408, per=4.17%, avg=1100.35, stdev=201.08, samples=20 00:44:04.150 iops : min= 224, max= 352, avg=275.05, stdev=50.29, samples=20 00:44:04.150 lat (msec) : 50=38.22%, 100=61.78% 00:44:04.150 cpu : usr=97.29%, sys=1.62%, ctx=45, majf=0, minf=1633 00:44:04.150 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:44:04.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.150 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.150 issued rwts: total=2768,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:44:04.150 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:04.150 filename0: (groupid=0, jobs=1): err= 0: pid=3400934: Sun Sep 29 16:52:04 2024 00:44:04.150 read: IOPS=274, BW=1098KiB/s (1124kB/s)(10.8MiB/10028msec) 00:44:04.150 slat (usec): min=5, max=116, avg=32.06, stdev=13.99 00:44:04.150 clat (msec): min=44, max=114, avg=58.01, stdev=10.35 00:44:04.150 lat (msec): min=44, max=114, avg=58.04, stdev=10.34 00:44:04.150 clat percentiles (msec): 00:44:04.150 | 1.00th=[ 45], 5.00th=[ 45], 10.00th=[ 46], 20.00th=[ 46], 00:44:04.150 | 30.00th=[ 47], 40.00th=[ 58], 50.00th=[ 65], 60.00th=[ 65], 00:44:04.150 | 70.00th=[ 66], 80.00th=[ 66], 90.00th=[ 67], 95.00th=[ 68], 00:44:04.150 | 99.00th=[ 69], 99.50th=[ 115], 99.90th=[ 115], 99.95th=[ 115], 00:44:04.150 | 99.99th=[ 115] 00:44:04.150 bw ( KiB/s): min= 896, max= 1408, per=4.15%, avg=1094.40, stdev=196.88, samples=20 00:44:04.150 iops : min= 224, max= 352, avg=273.60, stdev=49.22, samples=20 00:44:04.150 lat (msec) : 50=37.21%, 100=62.21%, 250=0.58% 00:44:04.150 cpu : usr=96.05%, sys=2.30%, ctx=86, majf=0, minf=1635 00:44:04.150 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:04.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.150 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.150 issued rwts: total=2752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:04.150 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:04.150 filename0: (groupid=0, jobs=1): err= 0: pid=3400935: Sun Sep 29 16:52:04 2024 00:44:04.150 read: IOPS=274, BW=1098KiB/s (1124kB/s)(10.8MiB/10030msec) 00:44:04.150 slat (nsec): min=4795, max=66949, avg=33772.82, stdev=9739.44 00:44:04.150 clat (msec): min=37, max=120, avg=57.97, stdev=10.64 00:44:04.150 lat (msec): min=37, max=120, avg=58.00, stdev=10.64 00:44:04.150 clat percentiles (msec): 00:44:04.150 | 1.00th=[ 45], 5.00th=[ 45], 10.00th=[ 46], 20.00th=[ 46], 
00:44:04.150 | 30.00th=[ 47], 40.00th=[ 57], 50.00th=[ 65], 60.00th=[ 65], 00:44:04.150 | 70.00th=[ 66], 80.00th=[ 66], 90.00th=[ 67], 95.00th=[ 68], 00:44:04.150 | 99.00th=[ 78], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 122], 00:44:04.150 | 99.99th=[ 122] 00:44:04.150 bw ( KiB/s): min= 769, max= 1408, per=4.15%, avg=1094.45, stdev=204.91, samples=20 00:44:04.150 iops : min= 192, max= 352, avg=273.60, stdev=51.25, samples=20 00:44:04.150 lat (msec) : 50=37.65%, 100=61.77%, 250=0.58% 00:44:04.150 cpu : usr=97.08%, sys=1.81%, ctx=79, majf=0, minf=1634 00:44:04.150 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:44:04.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.150 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.150 issued rwts: total=2752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:04.150 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:04.150 filename0: (groupid=0, jobs=1): err= 0: pid=3400936: Sun Sep 29 16:52:04 2024 00:44:04.150 read: IOPS=273, BW=1093KiB/s (1120kB/s)(10.7MiB/10009msec) 00:44:04.150 slat (usec): min=4, max=116, avg=29.25, stdev=11.07 00:44:04.150 clat (msec): min=21, max=156, avg=58.25, stdev=12.94 00:44:04.151 lat (msec): min=21, max=156, avg=58.28, stdev=12.93 00:44:04.151 clat percentiles (msec): 00:44:04.151 | 1.00th=[ 35], 5.00th=[ 45], 10.00th=[ 46], 20.00th=[ 46], 00:44:04.151 | 30.00th=[ 47], 40.00th=[ 57], 50.00th=[ 65], 60.00th=[ 65], 00:44:04.151 | 70.00th=[ 66], 80.00th=[ 66], 90.00th=[ 67], 95.00th=[ 68], 00:44:04.151 | 99.00th=[ 100], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 157], 00:44:04.151 | 99.99th=[ 157] 00:44:04.151 bw ( KiB/s): min= 769, max= 1410, per=4.13%, avg=1089.10, stdev=201.51, samples=20 00:44:04.151 iops : min= 192, max= 352, avg=272.00, stdev=50.35, samples=20 00:44:04.151 lat (msec) : 50=37.50%, 100=61.77%, 250=0.73% 00:44:04.151 cpu : usr=93.63%, sys=3.29%, ctx=374, majf=0, minf=1635 
00:44:04.151 IO depths : 1=5.6%, 2=11.8%, 4=25.0%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:44:04.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.151 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.151 issued rwts: total=2736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:04.151 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:04.151 filename1: (groupid=0, jobs=1): err= 0: pid=3400937: Sun Sep 29 16:52:04 2024 00:44:04.151 read: IOPS=274, BW=1097KiB/s (1124kB/s)(10.8MiB/10031msec) 00:44:04.151 slat (nsec): min=7782, max=78352, avg=34005.40, stdev=8079.05 00:44:04.151 clat (msec): min=44, max=121, avg=57.99, stdev=10.63 00:44:04.151 lat (msec): min=44, max=121, avg=58.02, stdev=10.63 00:44:04.151 clat percentiles (msec): 00:44:04.151 | 1.00th=[ 45], 5.00th=[ 45], 10.00th=[ 46], 20.00th=[ 46], 00:44:04.151 | 30.00th=[ 47], 40.00th=[ 58], 50.00th=[ 65], 60.00th=[ 65], 00:44:04.151 | 70.00th=[ 66], 80.00th=[ 66], 90.00th=[ 67], 95.00th=[ 68], 00:44:04.151 | 99.00th=[ 69], 99.50th=[ 122], 99.90th=[ 122], 99.95th=[ 122], 00:44:04.151 | 99.99th=[ 122] 00:44:04.151 bw ( KiB/s): min= 768, max= 1408, per=4.15%, avg=1094.40, stdev=205.45, samples=20 00:44:04.151 iops : min= 192, max= 352, avg=273.60, stdev=51.36, samples=20 00:44:04.151 lat (msec) : 50=37.72%, 100=61.70%, 250=0.58% 00:44:04.151 cpu : usr=97.72%, sys=1.46%, ctx=90, majf=0, minf=1634 00:44:04.151 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:44:04.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.151 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.151 issued rwts: total=2752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:04.151 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:04.151 filename1: (groupid=0, jobs=1): err= 0: pid=3400938: Sun Sep 29 16:52:04 2024 00:44:04.151 read: IOPS=275, BW=1100KiB/s 
(1127kB/s)(10.8MiB/10061msec) 00:44:04.151 slat (nsec): min=4729, max=84363, avg=36213.19, stdev=13116.39 00:44:04.151 clat (usec): min=35329, max=89111, avg=57783.26, stdev=10304.12 00:44:04.151 lat (usec): min=35355, max=89144, avg=57819.47, stdev=10301.64 00:44:04.151 clat percentiles (usec): 00:44:04.151 | 1.00th=[43779], 5.00th=[44827], 10.00th=[45351], 20.00th=[45351], 00:44:04.151 | 30.00th=[46400], 40.00th=[55837], 50.00th=[64226], 60.00th=[64750], 00:44:04.151 | 70.00th=[65274], 80.00th=[65799], 90.00th=[66847], 95.00th=[67634], 00:44:04.151 | 99.00th=[86508], 99.50th=[88605], 99.90th=[88605], 99.95th=[88605], 00:44:04.151 | 99.99th=[88605] 00:44:04.151 bw ( KiB/s): min= 896, max= 1408, per=4.17%, avg=1100.15, stdev=201.17, samples=20 00:44:04.151 iops : min= 224, max= 352, avg=275.00, stdev=50.31, samples=20 00:44:04.151 lat (msec) : 50=38.80%, 100=61.20% 00:44:04.151 cpu : usr=95.10%, sys=2.78%, ctx=270, majf=0, minf=1633 00:44:04.151 IO depths : 1=5.6%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.9%, 32=0.0%, >=64=0.0% 00:44:04.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.151 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.151 issued rwts: total=2768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:04.151 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:04.151 filename1: (groupid=0, jobs=1): err= 0: pid=3400939: Sun Sep 29 16:52:04 2024 00:44:04.151 read: IOPS=276, BW=1104KiB/s (1131kB/s)(10.8MiB/10026msec) 00:44:04.151 slat (nsec): min=4556, max=87129, avg=42555.78, stdev=11418.32 00:44:04.151 clat (usec): min=31251, max=87856, avg=57579.41, stdev=10243.74 00:44:04.151 lat (usec): min=31275, max=87906, avg=57621.96, stdev=10242.33 00:44:04.151 clat percentiles (usec): 00:44:04.151 | 1.00th=[43779], 5.00th=[44827], 10.00th=[44827], 20.00th=[45351], 00:44:04.151 | 30.00th=[46400], 40.00th=[51643], 50.00th=[64226], 60.00th=[64750], 00:44:04.151 | 70.00th=[64750], 80.00th=[65274], 
90.00th=[66847], 95.00th=[67634], 00:44:04.151 | 99.00th=[84411], 99.50th=[85459], 99.90th=[87557], 99.95th=[87557], 00:44:04.151 | 99.99th=[87557] 00:44:04.151 bw ( KiB/s): min= 896, max= 1408, per=4.18%, avg=1104.84, stdev=195.48, samples=19 00:44:04.151 iops : min= 224, max= 352, avg=276.21, stdev=48.87, samples=19 00:44:04.151 lat (msec) : 50=39.23%, 100=60.77% 00:44:04.151 cpu : usr=96.80%, sys=1.92%, ctx=98, majf=0, minf=1631 00:44:04.151 IO depths : 1=5.3%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.2%, 32=0.0%, >=64=0.0% 00:44:04.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.151 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.151 issued rwts: total=2768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:04.151 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:04.151 filename1: (groupid=0, jobs=1): err= 0: pid=3400940: Sun Sep 29 16:52:04 2024 00:44:04.151 read: IOPS=275, BW=1100KiB/s (1127kB/s)(10.8MiB/10006msec) 00:44:04.151 slat (nsec): min=4736, max=75919, avg=27467.70, stdev=10318.09 00:44:04.151 clat (msec): min=22, max=101, avg=57.90, stdev=10.57 00:44:04.151 lat (msec): min=22, max=101, avg=57.93, stdev=10.57 00:44:04.151 clat percentiles (msec): 00:44:04.151 | 1.00th=[ 45], 5.00th=[ 45], 10.00th=[ 46], 20.00th=[ 46], 00:44:04.151 | 30.00th=[ 47], 40.00th=[ 57], 50.00th=[ 65], 60.00th=[ 65], 00:44:04.151 | 70.00th=[ 66], 80.00th=[ 66], 90.00th=[ 67], 95.00th=[ 68], 00:44:04.151 | 99.00th=[ 93], 99.50th=[ 96], 99.90th=[ 101], 99.95th=[ 102], 00:44:04.151 | 99.99th=[ 102] 00:44:04.151 bw ( KiB/s): min= 896, max= 1408, per=4.16%, avg=1098.11, stdev=182.68, samples=19 00:44:04.151 iops : min= 224, max= 352, avg=274.53, stdev=45.67, samples=19 00:44:04.151 lat (msec) : 50=37.28%, 100=62.57%, 250=0.15% 00:44:04.151 cpu : usr=97.72%, sys=1.45%, ctx=77, majf=0, minf=1631 00:44:04.151 IO depths : 1=5.6%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.9%, 32=0.0%, >=64=0.0% 00:44:04.151 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.151 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.151 issued rwts: total=2752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:04.151 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:04.151 filename1: (groupid=0, jobs=1): err= 0: pid=3400941: Sun Sep 29 16:52:04 2024 00:44:04.151 read: IOPS=274, BW=1098KiB/s (1125kB/s)(10.8MiB/10021msec) 00:44:04.151 slat (nsec): min=15392, max=98487, avg=62791.94, stdev=10038.20 00:44:04.151 clat (msec): min=44, max=116, avg=57.68, stdev=10.50 00:44:04.151 lat (msec): min=44, max=117, avg=57.75, stdev=10.50 00:44:04.151 clat percentiles (msec): 00:44:04.151 | 1.00th=[ 45], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 46], 00:44:04.151 | 30.00th=[ 47], 40.00th=[ 58], 50.00th=[ 65], 60.00th=[ 65], 00:44:04.151 | 70.00th=[ 65], 80.00th=[ 66], 90.00th=[ 67], 95.00th=[ 67], 00:44:04.151 | 99.00th=[ 69], 99.50th=[ 117], 99.90th=[ 117], 99.95th=[ 117], 00:44:04.151 | 99.99th=[ 117] 00:44:04.151 bw ( KiB/s): min= 769, max= 1408, per=4.15%, avg=1094.10, stdev=205.50, samples=20 00:44:04.151 iops : min= 192, max= 352, avg=273.50, stdev=51.40, samples=20 00:44:04.151 lat (msec) : 50=38.05%, 100=61.37%, 250=0.58% 00:44:04.151 cpu : usr=98.02%, sys=1.38%, ctx=16, majf=0, minf=1631 00:44:04.151 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:44:04.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.151 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.151 issued rwts: total=2752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:04.151 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:04.151 filename1: (groupid=0, jobs=1): err= 0: pid=3400942: Sun Sep 29 16:52:04 2024 00:44:04.151 read: IOPS=275, BW=1100KiB/s (1127kB/s)(10.8MiB/10006msec) 00:44:04.151 slat (nsec): min=6585, max=66981, avg=24914.46, stdev=8699.38 00:44:04.151 clat (msec): 
min=23, max=125, avg=57.96, stdev=12.56 00:44:04.151 lat (msec): min=23, max=125, avg=57.99, stdev=12.56 00:44:04.151 clat percentiles (msec): 00:44:04.151 | 1.00th=[ 34], 5.00th=[ 45], 10.00th=[ 46], 20.00th=[ 46], 00:44:04.151 | 30.00th=[ 47], 40.00th=[ 52], 50.00th=[ 65], 60.00th=[ 65], 00:44:04.151 | 70.00th=[ 66], 80.00th=[ 66], 90.00th=[ 67], 95.00th=[ 68], 00:44:04.151 | 99.00th=[ 99], 99.50th=[ 126], 99.90th=[ 126], 99.95th=[ 126], 00:44:04.151 | 99.99th=[ 126] 00:44:04.151 bw ( KiB/s): min= 896, max= 1408, per=4.16%, avg=1098.11, stdev=182.05, samples=19 00:44:04.151 iops : min= 224, max= 352, avg=274.53, stdev=45.51, samples=19 00:44:04.151 lat (msec) : 50=38.95%, 100=60.39%, 250=0.65% 00:44:04.151 cpu : usr=98.53%, sys=0.95%, ctx=20, majf=0, minf=1634 00:44:04.151 IO depths : 1=5.1%, 2=11.3%, 4=25.0%, 8=51.2%, 16=7.4%, 32=0.0%, >=64=0.0% 00:44:04.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.152 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.152 issued rwts: total=2752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:04.152 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:04.152 filename1: (groupid=0, jobs=1): err= 0: pid=3400943: Sun Sep 29 16:52:04 2024 00:44:04.152 read: IOPS=275, BW=1104KiB/s (1130kB/s)(10.8MiB/10033msec) 00:44:04.152 slat (nsec): min=6210, max=89289, avg=43279.24, stdev=13206.19 00:44:04.152 clat (usec): min=30246, max=88883, avg=57633.53, stdev=10372.81 00:44:04.152 lat (usec): min=30272, max=88925, avg=57676.81, stdev=10371.60 00:44:04.152 clat percentiles (usec): 00:44:04.152 | 1.00th=[43779], 5.00th=[44827], 10.00th=[44827], 20.00th=[45351], 00:44:04.152 | 30.00th=[46400], 40.00th=[51119], 50.00th=[64226], 60.00th=[64750], 00:44:04.152 | 70.00th=[65274], 80.00th=[65799], 90.00th=[66847], 95.00th=[67634], 00:44:04.152 | 99.00th=[84411], 99.50th=[85459], 99.90th=[87557], 99.95th=[88605], 00:44:04.152 | 99.99th=[88605] 00:44:04.152 bw ( KiB/s): min= 
896, max= 1408, per=4.17%, avg=1100.80, stdev=199.95, samples=20 00:44:04.152 iops : min= 224, max= 352, avg=275.20, stdev=49.99, samples=20 00:44:04.152 lat (msec) : 50=39.45%, 100=60.55% 00:44:04.152 cpu : usr=98.33%, sys=1.17%, ctx=27, majf=0, minf=1634 00:44:04.152 IO depths : 1=5.2%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.3%, 32=0.0%, >=64=0.0% 00:44:04.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.152 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.152 issued rwts: total=2768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:04.152 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:04.152 filename1: (groupid=0, jobs=1): err= 0: pid=3400944: Sun Sep 29 16:52:04 2024 00:44:04.152 read: IOPS=275, BW=1104KiB/s (1130kB/s)(10.8MiB/10029msec) 00:44:04.152 slat (nsec): min=6366, max=93423, avg=43518.08, stdev=13095.77 00:44:04.152 clat (usec): min=31858, max=87416, avg=57594.41, stdev=9693.87 00:44:04.152 lat (usec): min=31875, max=87447, avg=57637.93, stdev=9692.68 00:44:04.152 clat percentiles (usec): 00:44:04.152 | 1.00th=[43779], 5.00th=[44827], 10.00th=[45351], 20.00th=[45351], 00:44:04.152 | 30.00th=[46400], 40.00th=[58459], 50.00th=[64226], 60.00th=[64750], 00:44:04.152 | 70.00th=[64750], 80.00th=[65274], 90.00th=[66323], 95.00th=[67634], 00:44:04.152 | 99.00th=[68682], 99.50th=[72877], 99.90th=[84411], 99.95th=[87557], 00:44:04.152 | 99.99th=[87557] 00:44:04.152 bw ( KiB/s): min= 896, max= 1408, per=4.17%, avg=1100.80, stdev=189.43, samples=20 00:44:04.152 iops : min= 224, max= 352, avg=275.20, stdev=47.36, samples=20 00:44:04.152 lat (msec) : 50=37.36%, 100=62.64% 00:44:04.152 cpu : usr=98.46%, sys=1.03%, ctx=15, majf=0, minf=1634 00:44:04.152 IO depths : 1=5.6%, 2=11.8%, 4=25.0%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:44:04.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.152 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:44:04.152 issued rwts: total=2768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:04.152 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:04.152 filename2: (groupid=0, jobs=1): err= 0: pid=3400945: Sun Sep 29 16:52:04 2024 00:44:04.152 read: IOPS=274, BW=1098KiB/s (1124kB/s)(10.8MiB/10026msec) 00:44:04.152 slat (nsec): min=5078, max=88152, avg=42165.30, stdev=13521.23 00:44:04.152 clat (msec): min=28, max=149, avg=57.91, stdev=11.63 00:44:04.152 lat (msec): min=28, max=149, avg=57.95, stdev=11.63 00:44:04.152 clat percentiles (msec): 00:44:04.152 | 1.00th=[ 45], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 46], 00:44:04.152 | 30.00th=[ 47], 40.00th=[ 52], 50.00th=[ 65], 60.00th=[ 65], 00:44:04.152 | 70.00th=[ 65], 80.00th=[ 66], 90.00th=[ 67], 95.00th=[ 68], 00:44:04.152 | 99.00th=[ 88], 99.50th=[ 129], 99.90th=[ 129], 99.95th=[ 150], 00:44:04.152 | 99.99th=[ 150] 00:44:04.152 bw ( KiB/s): min= 768, max= 1408, per=4.15%, avg=1094.05, stdev=201.35, samples=20 00:44:04.152 iops : min= 192, max= 352, avg=273.50, stdev=50.34, samples=20 00:44:04.152 lat (msec) : 50=39.24%, 100=60.17%, 250=0.58% 00:44:04.152 cpu : usr=98.31%, sys=1.17%, ctx=22, majf=0, minf=1634 00:44:04.152 IO depths : 1=5.2%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.3%, 32=0.0%, >=64=0.0% 00:44:04.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.152 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.152 issued rwts: total=2752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:04.152 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:04.152 filename2: (groupid=0, jobs=1): err= 0: pid=3400946: Sun Sep 29 16:52:04 2024 00:44:04.152 read: IOPS=275, BW=1103KiB/s (1129kB/s)(10.8MiB/10038msec) 00:44:04.152 slat (nsec): min=7982, max=98344, avg=42192.73, stdev=14407.31 00:44:04.152 clat (usec): min=32408, max=88030, avg=57684.61, stdev=10309.59 00:44:04.152 lat (usec): min=32423, max=88093, avg=57726.80, stdev=10308.11 00:44:04.152 clat 
percentiles (usec): 00:44:04.152 | 1.00th=[44303], 5.00th=[44827], 10.00th=[45351], 20.00th=[45351], 00:44:04.152 | 30.00th=[46400], 40.00th=[51643], 50.00th=[64226], 60.00th=[64750], 00:44:04.152 | 70.00th=[65274], 80.00th=[65274], 90.00th=[66847], 95.00th=[67634], 00:44:04.152 | 99.00th=[84411], 99.50th=[85459], 99.90th=[87557], 99.95th=[87557], 00:44:04.152 | 99.99th=[87557] 00:44:04.152 bw ( KiB/s): min= 896, max= 1408, per=4.17%, avg=1100.80, stdev=200.89, samples=20 00:44:04.152 iops : min= 224, max= 352, avg=275.20, stdev=50.22, samples=20 00:44:04.152 lat (msec) : 50=39.09%, 100=60.91% 00:44:04.152 cpu : usr=98.32%, sys=1.17%, ctx=24, majf=0, minf=1631 00:44:04.152 IO depths : 1=5.4%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.1%, 32=0.0%, >=64=0.0% 00:44:04.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.152 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.152 issued rwts: total=2768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:04.152 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:04.152 filename2: (groupid=0, jobs=1): err= 0: pid=3400947: Sun Sep 29 16:52:04 2024 00:44:04.152 read: IOPS=274, BW=1098KiB/s (1125kB/s)(10.8MiB/10021msec) 00:44:04.152 slat (nsec): min=6548, max=66249, avg=31526.65, stdev=10426.04 00:44:04.152 clat (msec): min=35, max=107, avg=58.00, stdev=10.23 00:44:04.152 lat (msec): min=35, max=107, avg=58.03, stdev=10.23 00:44:04.152 clat percentiles (msec): 00:44:04.152 | 1.00th=[ 45], 5.00th=[ 46], 10.00th=[ 46], 20.00th=[ 46], 00:44:04.152 | 30.00th=[ 47], 40.00th=[ 58], 50.00th=[ 65], 60.00th=[ 65], 00:44:04.152 | 70.00th=[ 66], 80.00th=[ 66], 90.00th=[ 67], 95.00th=[ 68], 00:44:04.152 | 99.00th=[ 78], 99.50th=[ 108], 99.90th=[ 108], 99.95th=[ 108], 00:44:04.152 | 99.99th=[ 108] 00:44:04.152 bw ( KiB/s): min= 896, max= 1424, per=4.15%, avg=1094.40, stdev=198.04, samples=20 00:44:04.152 iops : min= 224, max= 356, avg=273.60, stdev=49.51, samples=20 00:44:04.152 
lat (msec) : 50=36.63%, 100=62.79%, 250=0.58% 00:44:04.152 cpu : usr=98.35%, sys=1.13%, ctx=16, majf=0, minf=1631 00:44:04.152 IO depths : 1=4.6%, 2=10.8%, 4=25.0%, 8=51.7%, 16=7.9%, 32=0.0%, >=64=0.0% 00:44:04.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.152 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.152 issued rwts: total=2752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:04.152 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:04.152 filename2: (groupid=0, jobs=1): err= 0: pid=3400948: Sun Sep 29 16:52:04 2024 00:44:04.152 read: IOPS=274, BW=1098KiB/s (1124kB/s)(10.8MiB/10025msec) 00:44:04.152 slat (nsec): min=5044, max=98429, avg=42449.44, stdev=16060.47 00:44:04.152 clat (msec): min=31, max=126, avg=57.87, stdev=11.55 00:44:04.152 lat (msec): min=31, max=126, avg=57.92, stdev=11.55 00:44:04.152 clat percentiles (msec): 00:44:04.152 | 1.00th=[ 45], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 46], 00:44:04.152 | 30.00th=[ 47], 40.00th=[ 52], 50.00th=[ 65], 60.00th=[ 65], 00:44:04.152 | 70.00th=[ 65], 80.00th=[ 66], 90.00th=[ 67], 95.00th=[ 68], 00:44:04.152 | 99.00th=[ 88], 99.50th=[ 127], 99.90th=[ 127], 99.95th=[ 127], 00:44:04.152 | 99.99th=[ 127] 00:44:04.152 bw ( KiB/s): min= 768, max= 1408, per=4.15%, avg=1094.25, stdev=200.97, samples=20 00:44:04.152 iops : min= 192, max= 352, avg=273.55, stdev=50.22, samples=20 00:44:04.152 lat (msec) : 50=39.39%, 100=59.67%, 250=0.94% 00:44:04.152 cpu : usr=98.26%, sys=1.24%, ctx=15, majf=0, minf=1631 00:44:04.152 IO depths : 1=5.4%, 2=11.6%, 4=24.8%, 8=51.1%, 16=7.1%, 32=0.0%, >=64=0.0% 00:44:04.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.153 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.153 issued rwts: total=2752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:04.153 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:04.153 filename2: (groupid=0, 
jobs=1): err= 0: pid=3400949: Sun Sep 29 16:52:04 2024 00:44:04.153 read: IOPS=274, BW=1098KiB/s (1124kB/s)(10.8MiB/10025msec) 00:44:04.153 slat (nsec): min=5250, max=68498, avg=34198.39, stdev=8962.87 00:44:04.153 clat (msec): min=44, max=121, avg=57.97, stdev=10.60 00:44:04.153 lat (msec): min=44, max=121, avg=58.00, stdev=10.60 00:44:04.153 clat percentiles (msec): 00:44:04.153 | 1.00th=[ 45], 5.00th=[ 45], 10.00th=[ 46], 20.00th=[ 46], 00:44:04.153 | 30.00th=[ 47], 40.00th=[ 58], 50.00th=[ 65], 60.00th=[ 65], 00:44:04.153 | 70.00th=[ 66], 80.00th=[ 66], 90.00th=[ 67], 95.00th=[ 68], 00:44:04.153 | 99.00th=[ 69], 99.50th=[ 122], 99.90th=[ 122], 99.95th=[ 122], 00:44:04.153 | 99.99th=[ 122] 00:44:04.153 bw ( KiB/s): min= 769, max= 1408, per=4.15%, avg=1094.45, stdev=205.37, samples=20 00:44:04.153 iops : min= 192, max= 352, avg=273.60, stdev=51.36, samples=20 00:44:04.153 lat (msec) : 50=37.79%, 100=61.63%, 250=0.58% 00:44:04.153 cpu : usr=98.45%, sys=1.05%, ctx=15, majf=0, minf=1631 00:44:04.153 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:04.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.153 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.153 issued rwts: total=2752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:04.153 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:04.153 filename2: (groupid=0, jobs=1): err= 0: pid=3400950: Sun Sep 29 16:52:04 2024 00:44:04.153 read: IOPS=275, BW=1102KiB/s (1129kB/s)(10.8MiB/10036msec) 00:44:04.153 slat (nsec): min=5048, max=98797, avg=45794.06, stdev=14329.79 00:44:04.153 clat (usec): min=31282, max=87358, avg=57639.78, stdev=10166.20 00:44:04.153 lat (usec): min=31311, max=87406, avg=57685.58, stdev=10164.99 00:44:04.153 clat percentiles (usec): 00:44:04.153 | 1.00th=[44303], 5.00th=[44827], 10.00th=[44827], 20.00th=[45351], 00:44:04.153 | 30.00th=[46400], 40.00th=[55837], 50.00th=[64226], 60.00th=[64750], 
00:44:04.153 | 70.00th=[64750], 80.00th=[65274], 90.00th=[66847], 95.00th=[67634], 00:44:04.153 | 99.00th=[83362], 99.50th=[84411], 99.90th=[86508], 99.95th=[87557], 00:44:04.153 | 99.99th=[87557] 00:44:04.153 bw ( KiB/s): min= 896, max= 1408, per=4.17%, avg=1100.80, stdev=200.49, samples=20 00:44:04.153 iops : min= 224, max= 352, avg=275.20, stdev=50.12, samples=20 00:44:04.153 lat (msec) : 50=38.54%, 100=61.46% 00:44:04.153 cpu : usr=98.36%, sys=1.14%, ctx=29, majf=0, minf=1633 00:44:04.153 IO depths : 1=4.4%, 2=10.7%, 4=25.0%, 8=51.8%, 16=8.0%, 32=0.0%, >=64=0.0% 00:44:04.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.153 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.153 issued rwts: total=2766,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:04.153 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:04.153 filename2: (groupid=0, jobs=1): err= 0: pid=3400951: Sun Sep 29 16:52:04 2024 00:44:04.153 read: IOPS=274, BW=1098KiB/s (1124kB/s)(10.8MiB/10025msec) 00:44:04.153 slat (nsec): min=6344, max=63986, avg=35123.69, stdev=8901.16 00:44:04.153 clat (msec): min=38, max=132, avg=57.98, stdev=10.85 00:44:04.153 lat (msec): min=38, max=133, avg=58.02, stdev=10.85 00:44:04.153 clat percentiles (msec): 00:44:04.153 | 1.00th=[ 45], 5.00th=[ 45], 10.00th=[ 46], 20.00th=[ 46], 00:44:04.153 | 30.00th=[ 47], 40.00th=[ 53], 50.00th=[ 65], 60.00th=[ 65], 00:44:04.153 | 70.00th=[ 66], 80.00th=[ 66], 90.00th=[ 67], 95.00th=[ 68], 00:44:04.153 | 99.00th=[ 79], 99.50th=[ 122], 99.90th=[ 122], 99.95th=[ 133], 00:44:04.153 | 99.99th=[ 133] 00:44:04.153 bw ( KiB/s): min= 768, max= 1408, per=4.15%, avg=1094.40, stdev=204.99, samples=20 00:44:04.153 iops : min= 192, max= 352, avg=273.60, stdev=51.25, samples=20 00:44:04.153 lat (msec) : 50=37.86%, 100=61.56%, 250=0.58% 00:44:04.153 cpu : usr=98.40%, sys=1.09%, ctx=18, majf=0, minf=1634 00:44:04.153 IO depths : 1=5.6%, 2=11.8%, 4=25.0%, 8=50.7%, 16=6.9%, 
32=0.0%, >=64=0.0% 00:44:04.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.153 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.153 issued rwts: total=2752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:04.153 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:04.153 filename2: (groupid=0, jobs=1): err= 0: pid=3400952: Sun Sep 29 16:52:04 2024 00:44:04.153 read: IOPS=279, BW=1119KiB/s (1146kB/s)(11.0MiB/10029msec) 00:44:04.153 slat (usec): min=17, max=124, avg=62.19, stdev=11.89 00:44:04.153 clat (msec): min=22, max=145, avg=56.80, stdev=13.04 00:44:04.153 lat (msec): min=22, max=145, avg=56.86, stdev=13.04 00:44:04.153 clat percentiles (msec): 00:44:04.153 | 1.00th=[ 35], 5.00th=[ 42], 10.00th=[ 45], 20.00th=[ 46], 00:44:04.153 | 30.00th=[ 47], 40.00th=[ 51], 50.00th=[ 59], 60.00th=[ 65], 00:44:04.153 | 70.00th=[ 65], 80.00th=[ 66], 90.00th=[ 66], 95.00th=[ 67], 00:44:04.153 | 99.00th=[ 102], 99.50th=[ 146], 99.90th=[ 146], 99.95th=[ 146], 00:44:04.153 | 99.99th=[ 146] 00:44:04.153 bw ( KiB/s): min= 768, max= 1424, per=4.24%, avg=1118.40, stdev=194.09, samples=20 00:44:04.153 iops : min= 192, max= 356, avg=279.60, stdev=48.52, samples=20 00:44:04.153 lat (msec) : 50=40.02%, 100=58.84%, 250=1.14% 00:44:04.153 cpu : usr=98.24%, sys=1.16%, ctx=13, majf=0, minf=1633 00:44:04.153 IO depths : 1=1.4%, 2=3.4%, 4=9.3%, 8=71.7%, 16=14.3%, 32=0.0%, >=64=0.0% 00:44:04.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.153 complete : 0=0.0%, 4=90.7%, 8=6.6%, 16=2.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:04.153 issued rwts: total=2806,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:04.153 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:04.153 00:44:04.153 Run status group 0 (all jobs): 00:44:04.153 READ: bw=25.8MiB/s (27.0MB/s), 1093KiB/s-1157KiB/s (1120kB/s-1185kB/s), io=259MiB (272MB), run=10006-10061msec 00:44:04.720 
----------------------------------------------------- 00:44:04.720 Suppressions used: 00:44:04.720 count bytes template 00:44:04.720 45 402 /usr/src/fio/parse.c 00:44:04.720 1 8 libtcmalloc_minimal.so 00:44:04.720 1 904 libcrypto.so 00:44:04.720 ----------------------------------------------------- 00:44:04.720 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:44:04.720 16:52:05 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:04.720 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:04.720 bdev_null0 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:04.721 16:52:05 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:04.721 [2024-09-29 16:52:05.218252] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:04.721 bdev_null1 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # 
set +x 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@82 -- # gen_fio_conf 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:44:04.721 { 00:44:04.721 "params": { 00:44:04.721 "name": "Nvme$subsystem", 00:44:04.721 "trtype": "$TEST_TRANSPORT", 00:44:04.721 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:04.721 "adrfam": "ipv4", 00:44:04.721 "trsvcid": "$NVMF_PORT", 00:44:04.721 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:04.721 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:04.721 "hdgst": ${hdgst:-false}, 00:44:04.721 "ddgst": ${ddgst:-false} 00:44:04.721 }, 00:44:04.721 "method": "bdev_nvme_attach_controller" 00:44:04.721 } 00:44:04.721 EOF 00:44:04.721 )") 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:44:04.721 { 00:44:04.721 "params": { 00:44:04.721 "name": "Nvme$subsystem", 00:44:04.721 "trtype": "$TEST_TRANSPORT", 00:44:04.721 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:04.721 "adrfam": "ipv4", 00:44:04.721 "trsvcid": "$NVMF_PORT", 00:44:04.721 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:04.721 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:04.721 "hdgst": ${hdgst:-false}, 00:44:04.721 "ddgst": ${ddgst:-false} 00:44:04.721 }, 00:44:04.721 "method": "bdev_nvme_attach_controller" 00:44:04.721 } 00:44:04.721 EOF 00:44:04.721 )") 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 
00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:44:04.721 "params": { 00:44:04.721 "name": "Nvme0", 00:44:04.721 "trtype": "tcp", 00:44:04.721 "traddr": "10.0.0.2", 00:44:04.721 "adrfam": "ipv4", 00:44:04.721 "trsvcid": "4420", 00:44:04.721 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:04.721 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:04.721 "hdgst": false, 00:44:04.721 "ddgst": false 00:44:04.721 }, 00:44:04.721 "method": "bdev_nvme_attach_controller" 00:44:04.721 },{ 00:44:04.721 "params": { 00:44:04.721 "name": "Nvme1", 00:44:04.721 "trtype": "tcp", 00:44:04.721 "traddr": "10.0.0.2", 00:44:04.721 "adrfam": "ipv4", 00:44:04.721 "trsvcid": "4420", 00:44:04.721 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:04.721 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:04.721 "hdgst": false, 00:44:04.721 "ddgst": false 00:44:04.721 }, 00:44:04.721 "method": "bdev_nvme_attach_controller" 00:44:04.721 }' 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:04.721 16:52:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:05.288 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:44:05.288 ... 
00:44:05.288 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:44:05.288 ... 00:44:05.288 fio-3.35 00:44:05.288 Starting 4 threads 00:44:11.880 00:44:11.880 filename0: (groupid=0, jobs=1): err= 0: pid=3402322: Sun Sep 29 16:52:11 2024 00:44:11.880 read: IOPS=1480, BW=11.6MiB/s (12.1MB/s)(57.9MiB/5003msec) 00:44:11.880 slat (nsec): min=6681, max=80187, avg=16600.84, stdev=6610.22 00:44:11.880 clat (usec): min=1434, max=10516, avg=5350.66, stdev=461.28 00:44:11.880 lat (usec): min=1452, max=10596, avg=5367.26, stdev=461.17 00:44:11.880 clat percentiles (usec): 00:44:11.880 | 1.00th=[ 4146], 5.00th=[ 4752], 10.00th=[ 4948], 20.00th=[ 5145], 00:44:11.880 | 30.00th=[ 5211], 40.00th=[ 5276], 50.00th=[ 5342], 60.00th=[ 5407], 00:44:11.880 | 70.00th=[ 5538], 80.00th=[ 5604], 90.00th=[ 5735], 95.00th=[ 5866], 00:44:11.880 | 99.00th=[ 6456], 99.50th=[ 7111], 99.90th=[10159], 99.95th=[10421], 00:44:11.880 | 99.99th=[10552] 00:44:11.880 bw ( KiB/s): min=11392, max=12304, per=25.15%, avg=11840.00, stdev=321.60, samples=10 00:44:11.880 iops : min= 1424, max= 1538, avg=1480.00, stdev=40.20, samples=10 00:44:11.880 lat (msec) : 2=0.09%, 4=0.67%, 10=99.12%, 20=0.11% 00:44:11.880 cpu : usr=93.18%, sys=6.16%, ctx=18, majf=0, minf=1635 00:44:11.880 IO depths : 1=0.3%, 2=8.4%, 4=63.2%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:11.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.880 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.880 issued rwts: total=7408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:11.880 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:11.880 filename0: (groupid=0, jobs=1): err= 0: pid=3402323: Sun Sep 29 16:52:11 2024 00:44:11.880 read: IOPS=1471, BW=11.5MiB/s (12.1MB/s)(57.5MiB/5001msec) 00:44:11.880 slat (nsec): min=6692, max=75449, avg=20261.29, stdev=8636.86 00:44:11.880 clat (usec): min=1017, max=11999, 
avg=5360.38, stdev=785.70 00:44:11.880 lat (usec): min=1048, max=12021, avg=5380.65, stdev=785.19 00:44:11.880 clat percentiles (usec): 00:44:11.880 | 1.00th=[ 2114], 5.00th=[ 4621], 10.00th=[ 4948], 20.00th=[ 5080], 00:44:11.880 | 30.00th=[ 5211], 40.00th=[ 5276], 50.00th=[ 5342], 60.00th=[ 5407], 00:44:11.880 | 70.00th=[ 5473], 80.00th=[ 5604], 90.00th=[ 5735], 95.00th=[ 6128], 00:44:11.880 | 99.00th=[ 8717], 99.50th=[ 9241], 99.90th=[ 9765], 99.95th=[10159], 00:44:11.880 | 99.99th=[11994] 00:44:11.880 bw ( KiB/s): min=11392, max=12128, per=24.98%, avg=11760.00, stdev=278.63, samples=9 00:44:11.880 iops : min= 1424, max= 1516, avg=1470.00, stdev=34.83, samples=9 00:44:11.880 lat (msec) : 2=0.76%, 4=1.88%, 10=97.30%, 20=0.07% 00:44:11.880 cpu : usr=93.80%, sys=5.50%, ctx=7, majf=0, minf=1632 00:44:11.880 IO depths : 1=0.1%, 2=20.0%, 4=53.0%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:11.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.880 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.880 issued rwts: total=7360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:11.880 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:11.880 filename1: (groupid=0, jobs=1): err= 0: pid=3402324: Sun Sep 29 16:52:11 2024 00:44:11.880 read: IOPS=1462, BW=11.4MiB/s (12.0MB/s)(57.1MiB/5002msec) 00:44:11.880 slat (nsec): min=6609, max=96072, avg=20137.96, stdev=7871.43 00:44:11.880 clat (usec): min=1196, max=10152, avg=5398.57, stdev=738.39 00:44:11.880 lat (usec): min=1223, max=10162, avg=5418.71, stdev=737.64 00:44:11.880 clat percentiles (usec): 00:44:11.880 | 1.00th=[ 2835], 5.00th=[ 4686], 10.00th=[ 5014], 20.00th=[ 5145], 00:44:11.880 | 30.00th=[ 5211], 40.00th=[ 5276], 50.00th=[ 5342], 60.00th=[ 5407], 00:44:11.880 | 70.00th=[ 5538], 80.00th=[ 5604], 90.00th=[ 5735], 95.00th=[ 6194], 00:44:11.880 | 99.00th=[ 8455], 99.50th=[ 9241], 99.90th=[ 9896], 99.95th=[10028], 00:44:11.880 | 99.99th=[10159] 
00:44:11.880 bw ( KiB/s): min=11232, max=12032, per=24.83%, avg=11690.30, stdev=299.67, samples=10 00:44:11.880 iops : min= 1404, max= 1504, avg=1461.20, stdev=37.38, samples=10 00:44:11.880 lat (msec) : 2=0.41%, 4=1.60%, 10=97.92%, 20=0.07% 00:44:11.880 cpu : usr=89.54%, sys=7.28%, ctx=68, majf=0, minf=1637 00:44:11.880 IO depths : 1=0.2%, 2=18.3%, 4=54.7%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:11.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.880 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.880 issued rwts: total=7313,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:11.880 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:11.880 filename1: (groupid=0, jobs=1): err= 0: pid=3402325: Sun Sep 29 16:52:11 2024 00:44:11.880 read: IOPS=1472, BW=11.5MiB/s (12.1MB/s)(57.5MiB/5001msec) 00:44:11.880 slat (nsec): min=6501, max=88695, avg=20261.97, stdev=8672.98 00:44:11.880 clat (usec): min=1011, max=11558, avg=5358.84, stdev=756.94 00:44:11.880 lat (usec): min=1031, max=11580, avg=5379.10, stdev=756.43 00:44:11.880 clat percentiles (usec): 00:44:11.880 | 1.00th=[ 2245], 5.00th=[ 4621], 10.00th=[ 4948], 20.00th=[ 5080], 00:44:11.880 | 30.00th=[ 5211], 40.00th=[ 5276], 50.00th=[ 5342], 60.00th=[ 5407], 00:44:11.880 | 70.00th=[ 5473], 80.00th=[ 5604], 90.00th=[ 5735], 95.00th=[ 6063], 00:44:11.880 | 99.00th=[ 8586], 99.50th=[ 9110], 99.90th=[ 9765], 99.95th=[ 9765], 00:44:11.880 | 99.99th=[11600] 00:44:11.880 bw ( KiB/s): min=11168, max=12240, per=25.01%, avg=11773.50, stdev=350.80, samples=10 00:44:11.880 iops : min= 1396, max= 1530, avg=1471.60, stdev=43.87, samples=10 00:44:11.880 lat (msec) : 2=0.76%, 4=1.59%, 10=97.62%, 20=0.03% 00:44:11.880 cpu : usr=94.36%, sys=5.04%, ctx=8, majf=0, minf=1634 00:44:11.880 IO depths : 1=0.2%, 2=19.7%, 4=53.4%, 8=26.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:11.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.880 complete 
: 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.880 issued rwts: total=7362,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:11.880 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:11.880 00:44:11.880 Run status group 0 (all jobs): 00:44:11.880 READ: bw=46.0MiB/s (48.2MB/s), 11.4MiB/s-11.6MiB/s (12.0MB/s-12.1MB/s), io=230MiB (241MB), run=5001-5003msec 00:44:12.467 ----------------------------------------------------- 00:44:12.467 Suppressions used: 00:44:12.467 count bytes template 00:44:12.467 6 52 /usr/src/fio/parse.c 00:44:12.467 1 8 libtcmalloc_minimal.so 00:44:12.467 1 904 libcrypto.so 00:44:12.467 ----------------------------------------------------- 00:44:12.467 00:44:12.467 16:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:44:12.467 16:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:44:12.467 16:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:12.467 16:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:12.467 16:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:44:12.467 16:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:12.467 16:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:12.467 16:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:12.467 16:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:12.467 16:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:12.467 16:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:12.467 16:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:12.467 16:52:12 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:12.467 16:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:12.467 16:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:44:12.467 16:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:44:12.467 16:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:12.467 16:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:12.467 16:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:12.467 16:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:12.467 16:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:44:12.467 16:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:12.467 16:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:12.467 16:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:12.467 00:44:12.467 real 0m28.309s 00:44:12.467 user 4m36.028s 00:44:12.467 sys 0m7.297s 00:44:12.467 16:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:12.467 16:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:12.467 ************************************ 00:44:12.467 END TEST fio_dif_rand_params 00:44:12.467 ************************************ 00:44:12.467 16:52:12 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:44:12.467 16:52:12 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:44:12.467 16:52:12 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:12.467 16:52:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:12.467 
************************************ 00:44:12.467 START TEST fio_dif_digest 00:44:12.467 ************************************ 00:44:12.467 16:52:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:44:12.467 16:52:12 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:44:12.467 16:52:12 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:44:12.467 16:52:12 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:44:12.467 16:52:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:44:12.467 16:52:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:44:12.467 16:52:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:44:12.467 16:52:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:44:12.467 16:52:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:44:12.467 16:52:12 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:44:12.467 16:52:12 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:44:12.467 16:52:12 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:44:12.467 16:52:12 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:44:12.467 16:52:12 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:44:12.467 16:52:12 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:44:12.467 16:52:12 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:44:12.467 16:52:12 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:44:12.467 16:52:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:12.467 16:52:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:12.467 bdev_null0 00:44:12.467 16:52:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:12.467 
16:52:12 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:12.467 16:52:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:12.467 16:52:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:12.467 16:52:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:12.467 16:52:12 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:12.467 16:52:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:12.467 16:52:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:12.467 16:52:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:12.467 16:52:12 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:12.467 16:52:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:12.467 16:52:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:12.467 [2024-09-29 16:52:12.870614] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:12.467 16:52:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:12.467 16:52:12 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:44:12.467 16:52:12 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:44:12.467 16:52:12 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:44:12.468 16:52:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # config=() 00:44:12.468 16:52:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # local subsystem config 00:44:12.468 16:52:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # for subsystem in 
"${@:-1}" 00:44:12.468 16:52:12 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:12.468 16:52:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:44:12.468 { 00:44:12.468 "params": { 00:44:12.468 "name": "Nvme$subsystem", 00:44:12.468 "trtype": "$TEST_TRANSPORT", 00:44:12.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:12.468 "adrfam": "ipv4", 00:44:12.468 "trsvcid": "$NVMF_PORT", 00:44:12.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:12.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:12.468 "hdgst": ${hdgst:-false}, 00:44:12.468 "ddgst": ${ddgst:-false} 00:44:12.468 }, 00:44:12.468 "method": "bdev_nvme_attach_controller" 00:44:12.468 } 00:44:12.468 EOF 00:44:12.468 )") 00:44:12.468 16:52:12 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:44:12.468 16:52:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:12.468 16:52:12 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:44:12.468 16:52:12 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:44:12.468 16:52:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:44:12.468 16:52:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:12.468 16:52:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:44:12.468 16:52:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:12.468 16:52:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:44:12.468 16:52:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # cat 00:44:12.468 16:52:12 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@1343 -- # local asan_lib= 00:44:12.468 16:52:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:44:12.468 16:52:12 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:44:12.468 16:52:12 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:44:12.468 16:52:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:12.468 16:52:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:44:12.468 16:52:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:44:12.468 16:52:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # jq . 00:44:12.468 16:52:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@581 -- # IFS=, 00:44:12.468 16:52:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:44:12.468 "params": { 00:44:12.468 "name": "Nvme0", 00:44:12.468 "trtype": "tcp", 00:44:12.468 "traddr": "10.0.0.2", 00:44:12.468 "adrfam": "ipv4", 00:44:12.468 "trsvcid": "4420", 00:44:12.468 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:12.468 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:12.468 "hdgst": true, 00:44:12.468 "ddgst": true 00:44:12.468 }, 00:44:12.468 "method": "bdev_nvme_attach_controller" 00:44:12.468 }' 00:44:12.468 16:52:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:44:12.468 16:52:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:44:12.468 16:52:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # break 00:44:12.468 16:52:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:12.468 16:52:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # 
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:12.726 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:44:12.726 ... 00:44:12.726 fio-3.35 00:44:12.726 Starting 3 threads 00:44:24.919 00:44:24.919 filename0: (groupid=0, jobs=1): err= 0: pid=3403327: Sun Sep 29 16:52:24 2024 00:44:24.919 read: IOPS=173, BW=21.7MiB/s (22.8MB/s)(219MiB/10050msec) 00:44:24.919 slat (nsec): min=5599, max=60826, avg=22132.85, stdev=4300.36 00:44:24.919 clat (usec): min=14415, max=54520, avg=17196.20, stdev=1520.04 00:44:24.919 lat (usec): min=14436, max=54553, avg=17218.34, stdev=1520.31 00:44:24.920 clat percentiles (usec): 00:44:24.920 | 1.00th=[15008], 5.00th=[15795], 10.00th=[16057], 20.00th=[16450], 00:44:24.920 | 30.00th=[16909], 40.00th=[16909], 50.00th=[17171], 60.00th=[17433], 00:44:24.920 | 70.00th=[17433], 80.00th=[17695], 90.00th=[17957], 95.00th=[18482], 00:44:24.920 | 99.00th=[19530], 99.50th=[20579], 99.90th=[50070], 99.95th=[54264], 00:44:24.920 | 99.99th=[54264] 00:44:24.920 bw ( KiB/s): min=20777, max=23040, per=34.11%, avg=22350.85, stdev=477.87, samples=20 00:44:24.920 iops : min= 162, max= 180, avg=174.60, stdev= 3.79, samples=20 00:44:24.920 lat (msec) : 20=99.37%, 50=0.57%, 100=0.06% 00:44:24.920 cpu : usr=89.10%, sys=8.17%, ctx=386, majf=0, minf=1634 00:44:24.920 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:24.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:24.920 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:24.920 issued rwts: total=1748,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:24.920 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:24.920 filename0: (groupid=0, jobs=1): err= 0: pid=3403328: Sun Sep 29 16:52:24 2024 00:44:24.920 read: IOPS=172, BW=21.6MiB/s (22.6MB/s)(217MiB/10050msec) 00:44:24.920 slat (nsec): min=5489, max=61199, 
avg=21442.43, stdev=2765.51 00:44:24.920 clat (usec): min=11145, max=55327, avg=17337.58, stdev=1590.44 00:44:24.920 lat (usec): min=11150, max=55351, avg=17359.02, stdev=1590.56 00:44:24.920 clat percentiles (usec): 00:44:24.920 | 1.00th=[14746], 5.00th=[15664], 10.00th=[16057], 20.00th=[16581], 00:44:24.920 | 30.00th=[16909], 40.00th=[17171], 50.00th=[17433], 60.00th=[17433], 00:44:24.920 | 70.00th=[17695], 80.00th=[17957], 90.00th=[18482], 95.00th=[19006], 00:44:24.920 | 99.00th=[20055], 99.50th=[20579], 99.90th=[51119], 99.95th=[55313], 00:44:24.920 | 99.99th=[55313] 00:44:24.920 bw ( KiB/s): min=21504, max=22784, per=33.81%, avg=22156.80, stdev=347.21, samples=20 00:44:24.920 iops : min= 168, max= 178, avg=173.10, stdev= 2.71, samples=20 00:44:24.920 lat (msec) : 20=98.85%, 50=1.04%, 100=0.12% 00:44:24.920 cpu : usr=93.35%, sys=6.06%, ctx=37, majf=0, minf=1635 00:44:24.920 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:24.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:24.920 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:24.920 issued rwts: total=1734,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:24.920 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:24.920 filename0: (groupid=0, jobs=1): err= 0: pid=3403329: Sun Sep 29 16:52:24 2024 00:44:24.920 read: IOPS=165, BW=20.7MiB/s (21.7MB/s)(208MiB/10050msec) 00:44:24.920 slat (nsec): min=5869, max=45064, avg=21417.33, stdev=2662.57 00:44:24.920 clat (usec): min=14011, max=55948, avg=18078.47, stdev=1728.22 00:44:24.920 lat (usec): min=14032, max=55974, avg=18099.89, stdev=1728.15 00:44:24.920 clat percentiles (usec): 00:44:24.920 | 1.00th=[15401], 5.00th=[16450], 10.00th=[16909], 20.00th=[17171], 00:44:24.920 | 30.00th=[17433], 40.00th=[17695], 50.00th=[17957], 60.00th=[18220], 00:44:24.920 | 70.00th=[18482], 80.00th=[18744], 90.00th=[19530], 95.00th=[20055], 00:44:24.920 | 99.00th=[21103], 
99.50th=[21103], 99.90th=[49546], 99.95th=[55837], 00:44:24.920 | 99.99th=[55837] 00:44:24.920 bw ( KiB/s): min=20480, max=22016, per=32.43%, avg=21248.00, stdev=389.57, samples=20 00:44:24.920 iops : min= 160, max= 172, avg=166.00, stdev= 3.04, samples=20 00:44:24.920 lat (msec) : 20=94.71%, 50=5.23%, 100=0.06% 00:44:24.920 cpu : usr=93.60%, sys=5.81%, ctx=19, majf=0, minf=1635 00:44:24.920 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:24.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:24.920 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:24.920 issued rwts: total=1663,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:24.920 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:24.920 00:44:24.920 Run status group 0 (all jobs): 00:44:24.920 READ: bw=64.0MiB/s (67.1MB/s), 20.7MiB/s-21.7MiB/s (21.7MB/s-22.8MB/s), io=643MiB (674MB), run=10050-10050msec 00:44:24.920 ----------------------------------------------------- 00:44:24.920 Suppressions used: 00:44:24.920 count bytes template 00:44:24.920 5 44 /usr/src/fio/parse.c 00:44:24.920 1 8 libtcmalloc_minimal.so 00:44:24.920 1 904 libcrypto.so 00:44:24.920 ----------------------------------------------------- 00:44:24.920 00:44:24.920 16:52:25 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:44:24.920 16:52:25 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:44:24.920 16:52:25 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:44:24.920 16:52:25 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:24.920 16:52:25 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:44:24.920 16:52:25 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:24.920 16:52:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:24.920 16:52:25 nvmf_dif.fio_dif_digest 
-- common/autotest_common.sh@10 -- # set +x 00:44:24.920 16:52:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:24.920 16:52:25 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:24.920 16:52:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:24.920 16:52:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:24.920 16:52:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:24.920 00:44:24.920 real 0m12.187s 00:44:24.920 user 0m29.712s 00:44:24.920 sys 0m2.443s 00:44:24.920 16:52:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:24.920 16:52:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:24.920 ************************************ 00:44:24.920 END TEST fio_dif_digest 00:44:24.920 ************************************ 00:44:24.920 16:52:25 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:44:24.920 16:52:25 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:44:24.920 16:52:25 nvmf_dif -- nvmf/common.sh@512 -- # nvmfcleanup 00:44:24.920 16:52:25 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:44:24.920 16:52:25 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:24.920 16:52:25 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:44:24.920 16:52:25 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:24.920 16:52:25 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:24.920 rmmod nvme_tcp 00:44:24.920 rmmod nvme_fabrics 00:44:24.920 rmmod nvme_keyring 00:44:24.920 16:52:25 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:24.920 16:52:25 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:44:24.920 16:52:25 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:44:24.920 16:52:25 nvmf_dif -- nvmf/common.sh@513 -- # '[' -n 3396534 ']' 00:44:24.920 16:52:25 nvmf_dif -- nvmf/common.sh@514 -- # killprocess 
3396534 00:44:24.920 16:52:25 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 3396534 ']' 00:44:24.920 16:52:25 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 3396534 00:44:24.920 16:52:25 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:44:24.920 16:52:25 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:44:24.920 16:52:25 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3396534 00:44:24.920 16:52:25 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:44:24.920 16:52:25 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:44:24.920 16:52:25 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3396534' 00:44:24.920 killing process with pid 3396534 00:44:24.920 16:52:25 nvmf_dif -- common/autotest_common.sh@969 -- # kill 3396534 00:44:24.920 16:52:25 nvmf_dif -- common/autotest_common.sh@974 -- # wait 3396534 00:44:26.294 16:52:26 nvmf_dif -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:44:26.294 16:52:26 nvmf_dif -- nvmf/common.sh@517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:44:27.230 Waiting for block devices as requested 00:44:27.230 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:44:27.230 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:44:27.230 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:44:27.487 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:44:27.487 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:44:27.487 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:44:27.487 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:44:27.746 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:44:27.746 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:44:27.746 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:44:27.746 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:44:28.004 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:44:28.004 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:44:28.004 
0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:44:28.262 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:44:28.262 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:44:28.262 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:44:28.520 16:52:28 nvmf_dif -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:44:28.520 16:52:28 nvmf_dif -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:44:28.520 16:52:28 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:44:28.520 16:52:28 nvmf_dif -- nvmf/common.sh@787 -- # iptables-save 00:44:28.520 16:52:28 nvmf_dif -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:44:28.520 16:52:28 nvmf_dif -- nvmf/common.sh@787 -- # iptables-restore 00:44:28.520 16:52:28 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:28.520 16:52:28 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:28.520 16:52:28 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:28.520 16:52:28 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:28.520 16:52:28 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:30.421 16:52:30 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:30.421 00:44:30.421 real 1m16.227s 00:44:30.421 user 6m45.034s 00:44:30.421 sys 0m19.062s 00:44:30.421 16:52:30 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:30.421 16:52:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:30.421 ************************************ 00:44:30.421 END TEST nvmf_dif 00:44:30.421 ************************************ 00:44:30.421 16:52:30 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:44:30.421 16:52:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:44:30.421 16:52:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:30.421 16:52:30 -- common/autotest_common.sh@10 -- # set +x 
00:44:30.421 ************************************ 00:44:30.421 START TEST nvmf_abort_qd_sizes 00:44:30.421 ************************************ 00:44:30.421 16:52:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:44:30.680 * Looking for test storage... 00:44:30.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:30.680 16:52:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:44:30.680 16:52:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lcov --version 00:44:30.680 16:52:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:44:30.680 16:52:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:44:30.680 16:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:30.680 16:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:30.680 16:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:30.680 16:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:44:30.680 16:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:44:30.680 16:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:44:30.680 16:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:44:30.680 16:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:44:30.680 16:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:44:30.680 16:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:44:30.680 16:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:30.680 16:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:44:30.680 16:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:44:30.680 16:52:31 nvmf_abort_qd_sizes -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:44:30.680 16:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:30.680 16:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:44:30.680 16:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:44:30.680 16:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:30.680 16:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:44:30.680 16:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:44:30.680 16:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:44:30.680 16:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:44:30.680 16:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:30.680 16:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:44:30.680 16:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:44:30.680 16:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:30.680 16:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:30.680 16:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:44:30.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:30.681 --rc genhtml_branch_coverage=1 00:44:30.681 --rc genhtml_function_coverage=1 00:44:30.681 --rc genhtml_legend=1 00:44:30.681 --rc geninfo_all_blocks=1 00:44:30.681 --rc geninfo_unexecuted_blocks=1 00:44:30.681 00:44:30.681 ' 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:44:30.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:44:30.681 --rc genhtml_branch_coverage=1 00:44:30.681 --rc genhtml_function_coverage=1 00:44:30.681 --rc genhtml_legend=1 00:44:30.681 --rc geninfo_all_blocks=1 00:44:30.681 --rc geninfo_unexecuted_blocks=1 00:44:30.681 00:44:30.681 ' 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:44:30.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:30.681 --rc genhtml_branch_coverage=1 00:44:30.681 --rc genhtml_function_coverage=1 00:44:30.681 --rc genhtml_legend=1 00:44:30.681 --rc geninfo_all_blocks=1 00:44:30.681 --rc geninfo_unexecuted_blocks=1 00:44:30.681 00:44:30.681 ' 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:44:30.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:30.681 --rc genhtml_branch_coverage=1 00:44:30.681 --rc genhtml_function_coverage=1 00:44:30.681 --rc genhtml_legend=1 00:44:30.681 --rc geninfo_all_blocks=1 00:44:30.681 --rc geninfo_unexecuted_blocks=1 00:44:30.681 00:44:30.681 ' 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 
00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:30.681 16:52:31 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:30.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # prepare_net_devs 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@434 -- # local -g is_hw=no 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # remove_spdk_ns 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:44:30.681 16:52:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:32.582 16:52:32 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:44:32.582 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@365 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:44:32.582 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ up == up ]] 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:44:32.582 Found net devices under 0000:0a:00.0: cvl_0_0 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ up == up ]] 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:44:32.582 
16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:44:32.582 Found net devices under 0000:0a:00.1: cvl_0_1 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # is_hw=yes 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:44:32.582 16:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:32.583 16:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:32.583 16:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:32.583 16:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:32.583 16:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:32.583 16:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:32.583 16:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:32.583 16:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:32.583 16:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:32.583 16:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:32.583 16:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:32.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:32.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:44:32.583 00:44:32.583 --- 10.0.0.2 ping statistics --- 00:44:32.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:32.583 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:44:32.583 16:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:32.583 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:44:32.583 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:44:32.583 00:44:32.583 --- 10.0.0.1 ping statistics --- 00:44:32.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:32.583 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:44:32.583 16:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:32.583 16:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # return 0 00:44:32.583 16:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:44:32.583 16:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@475 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:44:34.022 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:44:34.022 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:44:34.022 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:44:34.022 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:44:34.022 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:44:34.022 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:44:34.022 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:44:34.022 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:44:34.022 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:44:34.022 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:44:34.022 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:44:34.022 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:44:34.022 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:44:34.022 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:44:34.022 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:44:34.022 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:44:34.956 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:44:34.956 16:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:34.956 16:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:44:34.956 16:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:44:34.956 16:52:35 
nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:34.956 16:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:44:34.956 16:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:44:34.956 16:52:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:44:34.956 16:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:44:34.956 16:52:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:34.956 16:52:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:34.956 16:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # nvmfpid=3408378 00:44:34.956 16:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:44:34.956 16:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # waitforlisten 3408378 00:44:34.956 16:52:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 3408378 ']' 00:44:34.956 16:52:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:34.956 16:52:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:44:34.956 16:52:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:34.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:34.956 16:52:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:44:34.956 16:52:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:34.956 [2024-09-29 16:52:35.496946] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:44:34.956 [2024-09-29 16:52:35.497105] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:35.214 [2024-09-29 16:52:35.636438] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:35.473 [2024-09-29 16:52:35.882125] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:35.473 [2024-09-29 16:52:35.882205] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:35.473 [2024-09-29 16:52:35.882230] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:35.473 [2024-09-29 16:52:35.882254] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:35.473 [2024-09-29 16:52:35.882274] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:44:35.473 [2024-09-29 16:52:35.882400] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:44:35.473 [2024-09-29 16:52:35.882471] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:44:35.473 [2024-09-29 16:52:35.882866] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:44:35.473 [2024-09-29 16:52:35.882870] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:44:36.040 16:52:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:44:36.040 16:52:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:44:36.040 16:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:44:36.040 16:52:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:36.040 16:52:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:36.040 16:52:36 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:36.040 16:52:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:44:36.040 16:52:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:44:36.040 16:52:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:44:36.040 16:52:36 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:44:36.040 16:52:36 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:44:36.040 16:52:36 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:88:00.0 ]] 00:44:36.040 16:52:36 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:44:36.040 16:52:36 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:44:36.040 16:52:36 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 
00:44:36.040 16:52:36 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:44:36.040 16:52:36 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:44:36.040 16:52:36 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:44:36.040 16:52:36 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:44:36.040 16:52:36 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:88:00.0 00:44:36.040 16:52:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:44:36.040 16:52:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:44:36.040 16:52:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:44:36.040 16:52:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:44:36.040 16:52:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:36.040 16:52:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:36.040 ************************************ 00:44:36.040 START TEST spdk_target_abort 00:44:36.040 ************************************ 00:44:36.040 16:52:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:44:36.040 16:52:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:44:36.040 16:52:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:44:36.040 16:52:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:36.040 16:52:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:39.317 spdk_targetn1 00:44:39.317 16:52:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:39.317 16:52:39 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:39.317 16:52:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:39.317 16:52:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:39.317 [2024-09-29 16:52:39.424094] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:39.317 16:52:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:39.317 16:52:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:44:39.317 16:52:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:39.317 16:52:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:39.317 16:52:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:39.317 16:52:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:44:39.317 16:52:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:39.317 16:52:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:39.317 16:52:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:39.317 16:52:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:44:39.317 16:52:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:39.317 16:52:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:39.317 [2024-09-29 16:52:39.470765] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:39.317 16:52:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:39.317 16:52:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:44:39.317 16:52:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:44:39.317 16:52:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:44:39.317 16:52:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:44:39.317 16:52:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:44:39.317 16:52:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:44:39.317 16:52:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:44:39.317 16:52:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:44:39.317 16:52:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:44:39.317 16:52:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:39.317 16:52:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:44:39.317 16:52:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:39.317 16:52:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:44:39.317 16:52:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:39.317 16:52:39 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:44:39.317 16:52:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:39.317 16:52:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:44:39.317 16:52:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:39.317 16:52:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:39.317 16:52:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:39.317 16:52:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:42.606 Initializing NVMe Controllers 00:44:42.606 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:42.606 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:42.606 Initialization complete. Launching workers. 
00:44:42.606 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9379, failed: 0 00:44:42.606 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1202, failed to submit 8177 00:44:42.606 success 735, unsuccessful 467, failed 0 00:44:42.606 16:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:42.606 16:52:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:45.888 Initializing NVMe Controllers 00:44:45.888 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:45.888 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:45.888 Initialization complete. Launching workers. 00:44:45.888 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8472, failed: 0 00:44:45.888 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1240, failed to submit 7232 00:44:45.888 success 327, unsuccessful 913, failed 0 00:44:45.888 16:52:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:45.888 16:52:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:49.173 Initializing NVMe Controllers 00:44:49.173 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:49.173 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:49.173 Initialization complete. Launching workers. 
00:44:49.173 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 27455, failed: 0 00:44:49.173 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2605, failed to submit 24850 00:44:49.173 success 220, unsuccessful 2385, failed 0 00:44:49.173 16:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:44:49.173 16:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:49.173 16:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:49.173 16:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:49.173 16:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:44:49.173 16:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:49.173 16:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:50.548 16:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:50.548 16:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3408378 00:44:50.548 16:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 3408378 ']' 00:44:50.548 16:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 3408378 00:44:50.548 16:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:44:50.548 16:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:44:50.548 16:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3408378 00:44:50.548 16:52:50 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:44:50.548 16:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:44:50.548 16:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3408378' 00:44:50.548 killing process with pid 3408378 00:44:50.548 16:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 3408378 00:44:50.548 16:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 3408378 00:44:51.484 00:44:51.484 real 0m15.335s 00:44:51.484 user 0m59.034s 00:44:51.484 sys 0m2.908s 00:44:51.484 16:52:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:51.484 16:52:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:51.484 ************************************ 00:44:51.484 END TEST spdk_target_abort 00:44:51.484 ************************************ 00:44:51.484 16:52:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:44:51.484 16:52:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:44:51.484 16:52:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:51.484 16:52:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:51.484 ************************************ 00:44:51.484 START TEST kernel_target_abort 00:44:51.484 ************************************ 00:44:51.484 16:52:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:44:51.484 16:52:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:44:51.484 16:52:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@765 -- # local ip 00:44:51.484 16:52:51 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # ip_candidates=() 00:44:51.484 16:52:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # local -A ip_candidates 00:44:51.484 16:52:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:44:51.484 16:52:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:44:51.484 16:52:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:44:51.484 16:52:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:44:51.484 16:52:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:44:51.484 16:52:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:44:51.484 16:52:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:44:51.484 16:52:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:44:51.484 16:52:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:44:51.484 16:52:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:44:51.484 16:52:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:51.484 16:52:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:44:51.484 16:52:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:44:51.484 16:52:51 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@663 -- # local block nvme 00:44:51.484 16:52:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # [[ ! -e /sys/module/nvmet ]] 00:44:51.484 16:52:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@666 -- # modprobe nvmet 00:44:51.484 16:52:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:44:51.484 16:52:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:44:52.473 Waiting for block devices as requested 00:44:52.755 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:44:52.755 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:44:52.755 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:44:53.013 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:44:53.013 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:44:53.013 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:44:53.013 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:44:53.272 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:44:53.272 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:44:53.272 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:44:53.272 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:44:53.529 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:44:53.529 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:44:53.529 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:44:53.787 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:44:53.787 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:44:53.787 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:44:54.355 16:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:44:54.355 16:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:44:54.355 16:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:44:54.355 16:52:54 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:44:54.355 16:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:44:54.355 16:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:44:54.355 16:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:44:54.355 16:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:44:54.355 16:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:44:54.355 No valid GPT data, bailing 00:44:54.355 16:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:44:54.355 16:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:44:54.355 16:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:44:54.355 16:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:44:54.355 16:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # [[ -b /dev/nvme0n1 ]] 00:44:54.355 16:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:54.355 16:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:44:54.355 16:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:44:54.355 16:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:44:54.355 16:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@691 -- # echo 1 00:44:54.355 16:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@692 -- # echo /dev/nvme0n1 00:44:54.355 16:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:44:54.355 16:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:44:54.355 16:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo tcp 00:44:54.355 16:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 4420 00:44:54.355 16:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo ipv4 00:44:54.355 16:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:44:54.355 16:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:44:54.355 00:44:54.355 Discovery Log Number of Records 2, Generation counter 2 00:44:54.355 =====Discovery Log Entry 0====== 00:44:54.355 trtype: tcp 00:44:54.355 adrfam: ipv4 00:44:54.356 subtype: current discovery subsystem 00:44:54.356 treq: not specified, sq flow control disable supported 00:44:54.356 portid: 1 00:44:54.356 trsvcid: 4420 00:44:54.356 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:44:54.356 traddr: 10.0.0.1 00:44:54.356 eflags: none 00:44:54.356 sectype: none 00:44:54.356 =====Discovery Log Entry 1====== 00:44:54.356 trtype: tcp 00:44:54.356 adrfam: ipv4 00:44:54.356 subtype: nvme subsystem 00:44:54.356 treq: not specified, sq flow control disable supported 00:44:54.356 portid: 1 00:44:54.356 trsvcid: 4420 00:44:54.356 subnqn: nqn.2016-06.io.spdk:testnqn 00:44:54.356 traddr: 10.0.0.1 00:44:54.356 eflags: none 00:44:54.356 sectype: none 00:44:54.356 16:52:54 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:44:54.356 16:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:44:54.356 16:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:44:54.356 16:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:44:54.356 16:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:44:54.356 16:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:44:54.356 16:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:44:54.356 16:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:44:54.356 16:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:44:54.356 16:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:54.356 16:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:44:54.356 16:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:54.356 16:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:44:54.356 16:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:54.356 16:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:44:54.356 16:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:44:54.356 16:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:44:54.356 16:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:54.356 16:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:54.356 16:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:54.356 16:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:57.638 Initializing NVMe Controllers 00:44:57.638 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:44:57.638 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:57.638 Initialization complete. Launching workers. 
00:44:57.638 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33626, failed: 0 00:44:57.638 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33626, failed to submit 0 00:44:57.638 success 0, unsuccessful 33626, failed 0 00:44:57.638 16:52:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:57.638 16:52:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:00.921 Initializing NVMe Controllers 00:45:00.921 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:45:00.921 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:00.921 Initialization complete. Launching workers. 00:45:00.921 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 68513, failed: 0 00:45:00.921 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17286, failed to submit 51227 00:45:00.921 success 0, unsuccessful 17286, failed 0 00:45:00.921 16:53:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:45:00.921 16:53:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:04.198 Initializing NVMe Controllers 00:45:04.198 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:45:04.198 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:04.198 Initialization complete. Launching workers. 
00:45:04.198 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 62445, failed: 0 00:45:04.198 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15602, failed to submit 46843 00:45:04.198 success 0, unsuccessful 15602, failed 0 00:45:04.198 16:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:45:04.198 16:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:45:04.198 16:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # echo 0 00:45:04.198 16:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:45:04.198 16:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:45:04.199 16:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:45:04.199 16:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:45:04.199 16:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:45:04.199 16:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:45:04.199 16:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@722 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:45:05.130 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:45:05.130 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:45:05.130 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:45:05.130 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:45:05.130 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:45:05.130 
0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:45:05.130 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:45:05.130 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:45:05.130 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:45:05.130 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:45:05.387 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:45:05.387 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:45:05.387 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:45:05.387 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:45:05.387 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:45:05.387 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:45:06.319 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:45:06.319 00:45:06.319 real 0m14.843s 00:45:06.319 user 0m6.794s 00:45:06.319 sys 0m3.608s 00:45:06.319 16:53:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:06.319 16:53:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:06.319 ************************************ 00:45:06.319 END TEST kernel_target_abort 00:45:06.319 ************************************ 00:45:06.319 16:53:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:45:06.319 16:53:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:45:06.319 16:53:06 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # nvmfcleanup 00:45:06.319 16:53:06 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:45:06.319 16:53:06 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:06.319 16:53:06 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:45:06.319 16:53:06 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:06.319 16:53:06 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:06.319 rmmod nvme_tcp 00:45:06.319 rmmod nvme_fabrics 00:45:06.319 rmmod nvme_keyring 00:45:06.319 16:53:06 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:45:06.319 16:53:06 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:45:06.319 16:53:06 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:45:06.319 16:53:06 nvmf_abort_qd_sizes -- nvmf/common.sh@513 -- # '[' -n 3408378 ']' 00:45:06.319 16:53:06 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # killprocess 3408378 00:45:06.319 16:53:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 3408378 ']' 00:45:06.319 16:53:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 3408378 00:45:06.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3408378) - No such process 00:45:06.319 16:53:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 3408378 is not found' 00:45:06.319 Process with pid 3408378 is not found 00:45:06.319 16:53:06 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:45:06.319 16:53:06 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:45:07.694 Waiting for block devices as requested 00:45:07.694 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:45:07.694 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:45:07.694 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:45:07.952 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:45:07.952 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:45:07.952 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:45:07.952 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:45:08.211 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:45:08.211 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:45:08.211 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:45:08.211 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:45:08.469 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:45:08.469 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:45:08.469 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:45:08.469 
0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:45:08.727 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:45:08.727 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:45:08.727 16:53:09 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:45:08.727 16:53:09 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:45:08.727 16:53:09 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:45:08.727 16:53:09 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-save 00:45:08.727 16:53:09 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:45:08.727 16:53:09 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-restore 00:45:08.727 16:53:09 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:08.727 16:53:09 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:45:08.727 16:53:09 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:08.727 16:53:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:08.727 16:53:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:11.261 16:53:11 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:45:11.261 00:45:11.261 real 0m40.337s 00:45:11.261 user 1m8.307s 00:45:11.261 sys 0m9.934s 00:45:11.261 16:53:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:11.261 16:53:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:45:11.261 ************************************ 00:45:11.261 END TEST nvmf_abort_qd_sizes 00:45:11.261 ************************************ 00:45:11.261 16:53:11 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:45:11.261 16:53:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:45:11.261 16:53:11 -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:45:11.261 16:53:11 -- common/autotest_common.sh@10 -- # set +x 00:45:11.261 ************************************ 00:45:11.261 START TEST keyring_file 00:45:11.261 ************************************ 00:45:11.261 16:53:11 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:45:11.261 * Looking for test storage... 00:45:11.261 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:45:11.261 16:53:11 keyring_file -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:45:11.261 16:53:11 keyring_file -- common/autotest_common.sh@1681 -- # lcov --version 00:45:11.261 16:53:11 keyring_file -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:45:11.261 16:53:11 keyring_file -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:45:11.261 16:53:11 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:11.261 16:53:11 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:11.261 16:53:11 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:11.261 16:53:11 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:45:11.261 16:53:11 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:45:11.261 16:53:11 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:45:11.261 16:53:11 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:45:11.261 16:53:11 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:45:11.261 16:53:11 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:45:11.261 16:53:11 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:45:11.261 16:53:11 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:11.261 16:53:11 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:45:11.261 16:53:11 keyring_file -- scripts/common.sh@345 -- # : 1 00:45:11.261 16:53:11 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:11.261 16:53:11 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:45:11.261 16:53:11 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:45:11.261 16:53:11 keyring_file -- scripts/common.sh@353 -- # local d=1 00:45:11.261 16:53:11 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:11.261 16:53:11 keyring_file -- scripts/common.sh@355 -- # echo 1 00:45:11.261 16:53:11 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:45:11.261 16:53:11 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:45:11.261 16:53:11 keyring_file -- scripts/common.sh@353 -- # local d=2 00:45:11.261 16:53:11 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:11.261 16:53:11 keyring_file -- scripts/common.sh@355 -- # echo 2 00:45:11.261 16:53:11 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:45:11.261 16:53:11 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:11.261 16:53:11 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:11.261 16:53:11 keyring_file -- scripts/common.sh@368 -- # return 0 00:45:11.261 16:53:11 keyring_file -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:11.261 16:53:11 keyring_file -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:45:11.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:11.261 --rc genhtml_branch_coverage=1 00:45:11.261 --rc genhtml_function_coverage=1 00:45:11.261 --rc genhtml_legend=1 00:45:11.261 --rc geninfo_all_blocks=1 00:45:11.261 --rc geninfo_unexecuted_blocks=1 00:45:11.261 00:45:11.261 ' 00:45:11.261 16:53:11 keyring_file -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:45:11.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:11.261 --rc genhtml_branch_coverage=1 00:45:11.261 --rc genhtml_function_coverage=1 00:45:11.261 --rc genhtml_legend=1 00:45:11.261 --rc geninfo_all_blocks=1 00:45:11.261 --rc 
geninfo_unexecuted_blocks=1 00:45:11.261 00:45:11.261 ' 00:45:11.261 16:53:11 keyring_file -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:45:11.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:11.261 --rc genhtml_branch_coverage=1 00:45:11.261 --rc genhtml_function_coverage=1 00:45:11.261 --rc genhtml_legend=1 00:45:11.261 --rc geninfo_all_blocks=1 00:45:11.261 --rc geninfo_unexecuted_blocks=1 00:45:11.261 00:45:11.261 ' 00:45:11.261 16:53:11 keyring_file -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:45:11.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:11.261 --rc genhtml_branch_coverage=1 00:45:11.261 --rc genhtml_function_coverage=1 00:45:11.261 --rc genhtml_legend=1 00:45:11.261 --rc geninfo_all_blocks=1 00:45:11.261 --rc geninfo_unexecuted_blocks=1 00:45:11.261 00:45:11.261 ' 00:45:11.261 16:53:11 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:45:11.262 16:53:11 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:11.262 16:53:11 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:45:11.262 16:53:11 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:11.262 16:53:11 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:11.262 16:53:11 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:11.262 16:53:11 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:11.262 16:53:11 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:11.262 16:53:11 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:11.262 16:53:11 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:11.262 16:53:11 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:11.262 16:53:11 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:11.262 16:53:11 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:11.262 16:53:11 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:45:11.262 16:53:11 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:45:11.262 16:53:11 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:11.262 16:53:11 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:11.262 16:53:11 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:11.262 16:53:11 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:11.262 16:53:11 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:11.262 16:53:11 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:45:11.262 16:53:11 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:11.262 16:53:11 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:11.262 16:53:11 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:11.262 16:53:11 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:11.262 16:53:11 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:11.262 16:53:11 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:11.262 16:53:11 keyring_file -- paths/export.sh@5 -- # export PATH 00:45:11.262 16:53:11 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:11.262 16:53:11 keyring_file -- nvmf/common.sh@51 -- # : 0 00:45:11.262 16:53:11 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:11.262 16:53:11 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:11.262 16:53:11 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:11.262 16:53:11 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:11.262 16:53:11 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:11.262 16:53:11 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:45:11.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:11.262 16:53:11 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:11.262 16:53:11 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:11.262 16:53:11 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:11.262 16:53:11 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:45:11.262 16:53:11 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:45:11.262 16:53:11 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:45:11.262 16:53:11 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:45:11.262 16:53:11 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:45:11.262 16:53:11 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:45:11.262 16:53:11 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:45:11.262 16:53:11 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:45:11.262 16:53:11 keyring_file -- keyring/common.sh@17 -- # name=key0 00:45:11.262 16:53:11 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:45:11.262 16:53:11 keyring_file -- keyring/common.sh@17 -- # digest=0 00:45:11.262 16:53:11 keyring_file -- keyring/common.sh@18 -- # mktemp 00:45:11.262 16:53:11 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.y4sip8KetL 00:45:11.262 16:53:11 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:45:11.262 16:53:11 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:45:11.262 16:53:11 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:45:11.262 16:53:11 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:45:11.262 16:53:11 keyring_file -- nvmf/common.sh@728 
-- # key=00112233445566778899aabbccddeeff 00:45:11.262 16:53:11 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:45:11.262 16:53:11 keyring_file -- nvmf/common.sh@729 -- # python - 00:45:11.262 16:53:11 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.y4sip8KetL 00:45:11.262 16:53:11 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.y4sip8KetL 00:45:11.262 16:53:11 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.y4sip8KetL 00:45:11.262 16:53:11 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:45:11.262 16:53:11 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:45:11.262 16:53:11 keyring_file -- keyring/common.sh@17 -- # name=key1 00:45:11.262 16:53:11 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:45:11.262 16:53:11 keyring_file -- keyring/common.sh@17 -- # digest=0 00:45:11.262 16:53:11 keyring_file -- keyring/common.sh@18 -- # mktemp 00:45:11.262 16:53:11 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.zKXD9znAvq 00:45:11.262 16:53:11 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:45:11.262 16:53:11 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:45:11.262 16:53:11 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:45:11.262 16:53:11 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:45:11.262 16:53:11 keyring_file -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:45:11.262 16:53:11 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:45:11.262 16:53:11 keyring_file -- nvmf/common.sh@729 -- # python - 00:45:11.262 16:53:11 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.zKXD9znAvq 00:45:11.262 16:53:11 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.zKXD9znAvq 00:45:11.262 16:53:11 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.zKXD9znAvq 
00:45:11.262 16:53:11 keyring_file -- keyring/file.sh@30 -- # tgtpid=3414599 00:45:11.262 16:53:11 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:45:11.262 16:53:11 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3414599 00:45:11.262 16:53:11 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3414599 ']' 00:45:11.262 16:53:11 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:11.262 16:53:11 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:11.262 16:53:11 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:11.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:11.262 16:53:11 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:11.262 16:53:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:11.262 [2024-09-29 16:53:11.656044] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:45:11.262 [2024-09-29 16:53:11.656184] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3414599 ] 00:45:11.262 [2024-09-29 16:53:11.780264] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:11.521 [2024-09-29 16:53:12.001377] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:45:12.456 16:53:12 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:12.456 16:53:12 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:45:12.456 16:53:12 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:45:12.456 16:53:12 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:12.456 16:53:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:12.456 [2024-09-29 16:53:12.915997] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:12.456 null0 00:45:12.456 [2024-09-29 16:53:12.948043] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:45:12.456 [2024-09-29 16:53:12.948697] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:45:12.456 16:53:12 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:12.456 16:53:12 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:45:12.456 16:53:12 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:45:12.456 16:53:12 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:45:12.456 16:53:12 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:45:12.456 16:53:12 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:45:12.456 16:53:12 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:45:12.456 16:53:12 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:12.456 16:53:12 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:45:12.456 16:53:12 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:12.456 16:53:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:12.456 [2024-09-29 16:53:12.976093] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:45:12.456 request: 00:45:12.456 { 00:45:12.456 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:45:12.456 "secure_channel": false, 00:45:12.456 "listen_address": { 00:45:12.456 "trtype": "tcp", 00:45:12.456 "traddr": "127.0.0.1", 00:45:12.456 "trsvcid": "4420" 00:45:12.456 }, 00:45:12.456 "method": "nvmf_subsystem_add_listener", 00:45:12.456 "req_id": 1 00:45:12.456 } 00:45:12.456 Got JSON-RPC error response 00:45:12.456 response: 00:45:12.456 { 00:45:12.456 "code": -32602, 00:45:12.456 "message": "Invalid parameters" 00:45:12.456 } 00:45:12.456 16:53:12 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:45:12.456 16:53:12 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:45:12.456 16:53:12 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:45:12.456 16:53:12 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:45:12.456 16:53:12 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:45:12.456 16:53:12 keyring_file -- keyring/file.sh@47 -- # bperfpid=3414743 00:45:12.456 16:53:12 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:45:12.456 16:53:12 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3414743 /var/tmp/bperf.sock 00:45:12.456 16:53:12 
keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3414743 ']' 00:45:12.456 16:53:12 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:45:12.456 16:53:12 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:12.456 16:53:12 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:45:12.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:45:12.456 16:53:12 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:12.456 16:53:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:12.715 [2024-09-29 16:53:13.064587] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:45:12.715 [2024-09-29 16:53:13.064755] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3414743 ] 00:45:12.715 [2024-09-29 16:53:13.199965] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:12.973 [2024-09-29 16:53:13.454341] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:45:13.538 16:53:14 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:13.538 16:53:14 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:45:13.538 16:53:14 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.y4sip8KetL 00:45:13.538 16:53:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.y4sip8KetL 00:45:13.797 16:53:14 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.zKXD9znAvq 00:45:13.797 16:53:14 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.zKXD9znAvq 00:45:14.056 16:53:14 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:45:14.056 16:53:14 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:45:14.056 16:53:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:14.056 16:53:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:14.056 16:53:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:14.314 16:53:14 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.y4sip8KetL == \/\t\m\p\/\t\m\p\.\y\4\s\i\p\8\K\e\t\L ]] 00:45:14.314 16:53:14 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:45:14.314 16:53:14 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:45:14.314 16:53:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:14.314 16:53:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:14.314 16:53:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:14.573 16:53:15 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.zKXD9znAvq == \/\t\m\p\/\t\m\p\.\z\K\X\D\9\z\n\A\v\q ]] 00:45:14.573 16:53:15 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:45:14.573 16:53:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:14.573 16:53:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:14.573 16:53:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:14.573 16:53:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:14.573 16:53:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:45:14.831 16:53:15 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:45:14.831 16:53:15 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:45:14.831 16:53:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:14.831 16:53:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:14.831 16:53:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:14.831 16:53:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:14.831 16:53:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:15.090 16:53:15 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:45:15.090 16:53:15 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:15.090 16:53:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:15.348 [2024-09-29 16:53:15.889918] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:15.607 nvme0n1 00:45:15.607 16:53:15 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:45:15.607 16:53:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:15.607 16:53:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:15.607 16:53:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:15.607 16:53:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:15.607 16:53:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == 
"key0")' 00:45:15.865 16:53:16 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:45:15.865 16:53:16 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:45:15.865 16:53:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:15.865 16:53:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:15.865 16:53:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:15.865 16:53:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:15.865 16:53:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:16.124 16:53:16 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:45:16.124 16:53:16 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:45:16.124 Running I/O for 1 seconds... 00:45:17.501 5866.00 IOPS, 22.91 MiB/s 00:45:17.501 Latency(us) 00:45:17.501 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:17.501 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:45:17.501 nvme0n1 : 1.01 5914.59 23.10 0.00 0.00 21523.23 8349.77 34758.35 00:45:17.501 =================================================================================================================== 00:45:17.501 Total : 5914.59 23.10 0.00 0.00 21523.23 8349.77 34758.35 00:45:17.501 { 00:45:17.501 "results": [ 00:45:17.501 { 00:45:17.501 "job": "nvme0n1", 00:45:17.501 "core_mask": "0x2", 00:45:17.501 "workload": "randrw", 00:45:17.501 "percentage": 50, 00:45:17.501 "status": "finished", 00:45:17.501 "queue_depth": 128, 00:45:17.501 "io_size": 4096, 00:45:17.501 "runtime": 1.013426, 00:45:17.501 "iops": 5914.590705192091, 00:45:17.501 "mibps": 23.103869942156606, 00:45:17.501 "io_failed": 0, 00:45:17.501 "io_timeout": 0, 00:45:17.501 "avg_latency_us": 
21523.23351895105, 00:45:17.501 "min_latency_us": 8349.771851851852, 00:45:17.501 "max_latency_us": 34758.35259259259 00:45:17.501 } 00:45:17.501 ], 00:45:17.501 "core_count": 1 00:45:17.501 } 00:45:17.501 16:53:17 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:45:17.501 16:53:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:45:17.501 16:53:17 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:45:17.501 16:53:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:17.501 16:53:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:17.501 16:53:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:17.501 16:53:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:17.501 16:53:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:17.759 16:53:18 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:45:17.759 16:53:18 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:45:17.759 16:53:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:17.759 16:53:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:17.760 16:53:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:17.760 16:53:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:17.760 16:53:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:18.018 16:53:18 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:45:18.018 16:53:18 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk key1 00:45:18.018 16:53:18 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:45:18.018 16:53:18 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:18.018 16:53:18 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:45:18.018 16:53:18 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:18.018 16:53:18 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:45:18.018 16:53:18 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:18.018 16:53:18 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:18.018 16:53:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:18.277 [2024-09-29 16:53:18.779403] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_re[2024-09-29 16:53:18.779403] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7780 ad_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:45:18.277 (107): Transport endpoint is not connected 00:45:18.277 [2024-09-29 16:53:18.780372] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7780 (9): Bad file descriptor 00:45:18.277 [2024-09-29 16:53:18.781368] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 
00:45:18.277 [2024-09-29 16:53:18.781404] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:45:18.277 [2024-09-29 16:53:18.781429] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:45:18.277 [2024-09-29 16:53:18.781465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:45:18.277 request: 00:45:18.277 { 00:45:18.277 "name": "nvme0", 00:45:18.277 "trtype": "tcp", 00:45:18.277 "traddr": "127.0.0.1", 00:45:18.277 "adrfam": "ipv4", 00:45:18.277 "trsvcid": "4420", 00:45:18.277 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:18.277 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:18.277 "prchk_reftag": false, 00:45:18.277 "prchk_guard": false, 00:45:18.277 "hdgst": false, 00:45:18.277 "ddgst": false, 00:45:18.277 "psk": "key1", 00:45:18.277 "allow_unrecognized_csi": false, 00:45:18.277 "method": "bdev_nvme_attach_controller", 00:45:18.277 "req_id": 1 00:45:18.277 } 00:45:18.277 Got JSON-RPC error response 00:45:18.277 response: 00:45:18.277 { 00:45:18.277 "code": -5, 00:45:18.277 "message": "Input/output error" 00:45:18.277 } 00:45:18.277 16:53:18 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:45:18.277 16:53:18 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:45:18.277 16:53:18 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:45:18.277 16:53:18 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:45:18.277 16:53:18 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:45:18.277 16:53:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:18.277 16:53:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:18.277 16:53:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:18.277 16:53:18 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:18.277 16:53:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:18.535 16:53:19 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:45:18.535 16:53:19 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:45:18.535 16:53:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:18.535 16:53:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:18.535 16:53:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:18.535 16:53:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:18.535 16:53:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:19.101 16:53:19 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:45:19.101 16:53:19 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:45:19.101 16:53:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:45:19.101 16:53:19 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:45:19.101 16:53:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:45:19.359 16:53:19 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:45:19.359 16:53:19 keyring_file -- keyring/file.sh@78 -- # jq length 00:45:19.359 16:53:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:19.617 16:53:20 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:45:19.617 16:53:20 keyring_file -- keyring/file.sh@81 -- # chmod 0660 
/tmp/tmp.y4sip8KetL 00:45:19.617 16:53:20 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.y4sip8KetL 00:45:19.617 16:53:20 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:45:19.617 16:53:20 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.y4sip8KetL 00:45:19.617 16:53:20 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:45:19.617 16:53:20 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:19.617 16:53:20 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:45:19.617 16:53:20 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:19.617 16:53:20 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.y4sip8KetL 00:45:19.617 16:53:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.y4sip8KetL 00:45:19.876 [2024-09-29 16:53:20.426459] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.y4sip8KetL': 0100660 00:45:19.876 [2024-09-29 16:53:20.426513] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:45:19.876 request: 00:45:19.876 { 00:45:19.876 "name": "key0", 00:45:19.876 "path": "/tmp/tmp.y4sip8KetL", 00:45:19.876 "method": "keyring_file_add_key", 00:45:19.876 "req_id": 1 00:45:19.876 } 00:45:19.876 Got JSON-RPC error response 00:45:19.876 response: 00:45:19.876 { 00:45:19.876 "code": -1, 00:45:19.876 "message": "Operation not permitted" 00:45:19.876 } 00:45:20.134 16:53:20 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:45:20.134 16:53:20 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:45:20.134 16:53:20 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:45:20.134 
16:53:20 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:45:20.134 16:53:20 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.y4sip8KetL 00:45:20.134 16:53:20 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.y4sip8KetL 00:45:20.134 16:53:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.y4sip8KetL 00:45:20.393 16:53:20 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.y4sip8KetL 00:45:20.393 16:53:20 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:45:20.393 16:53:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:20.393 16:53:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:20.393 16:53:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:20.393 16:53:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:20.393 16:53:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:20.651 16:53:21 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:45:20.651 16:53:21 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:20.651 16:53:21 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:45:20.651 16:53:21 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:20.651 16:53:21 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:45:20.651 16:53:21 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:20.651 16:53:21 
keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:45:20.651 16:53:21 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:20.651 16:53:21 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:20.651 16:53:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:20.910 [2024-09-29 16:53:21.276931] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.y4sip8KetL': No such file or directory 00:45:20.910 [2024-09-29 16:53:21.277007] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:45:20.910 [2024-09-29 16:53:21.277040] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:45:20.910 [2024-09-29 16:53:21.277061] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:45:20.910 [2024-09-29 16:53:21.277086] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:45:20.910 [2024-09-29 16:53:21.277105] bdev_nvme.c:6447:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:45:20.910 request: 00:45:20.910 { 00:45:20.910 "name": "nvme0", 00:45:20.910 "trtype": "tcp", 00:45:20.910 "traddr": "127.0.0.1", 00:45:20.910 "adrfam": "ipv4", 00:45:20.910 "trsvcid": "4420", 00:45:20.910 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:20.910 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:20.910 "prchk_reftag": false, 00:45:20.910 "prchk_guard": 
false, 00:45:20.910 "hdgst": false, 00:45:20.910 "ddgst": false, 00:45:20.910 "psk": "key0", 00:45:20.910 "allow_unrecognized_csi": false, 00:45:20.910 "method": "bdev_nvme_attach_controller", 00:45:20.910 "req_id": 1 00:45:20.910 } 00:45:20.910 Got JSON-RPC error response 00:45:20.910 response: 00:45:20.910 { 00:45:20.910 "code": -19, 00:45:20.910 "message": "No such device" 00:45:20.910 } 00:45:20.910 16:53:21 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:45:20.910 16:53:21 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:45:20.910 16:53:21 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:45:20.910 16:53:21 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:45:20.910 16:53:21 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:45:20.910 16:53:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:45:21.168 16:53:21 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:45:21.168 16:53:21 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:45:21.168 16:53:21 keyring_file -- keyring/common.sh@17 -- # name=key0 00:45:21.168 16:53:21 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:45:21.168 16:53:21 keyring_file -- keyring/common.sh@17 -- # digest=0 00:45:21.168 16:53:21 keyring_file -- keyring/common.sh@18 -- # mktemp 00:45:21.168 16:53:21 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.kjjeqjeLcx 00:45:21.168 16:53:21 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:45:21.168 16:53:21 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:45:21.168 16:53:21 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:45:21.168 16:53:21 keyring_file -- 
nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:45:21.168 16:53:21 keyring_file -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:45:21.168 16:53:21 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:45:21.168 16:53:21 keyring_file -- nvmf/common.sh@729 -- # python - 00:45:21.168 16:53:21 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.kjjeqjeLcx 00:45:21.168 16:53:21 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.kjjeqjeLcx 00:45:21.168 16:53:21 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.kjjeqjeLcx 00:45:21.168 16:53:21 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.kjjeqjeLcx 00:45:21.168 16:53:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.kjjeqjeLcx 00:45:21.427 16:53:21 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:21.427 16:53:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:21.685 nvme0n1 00:45:21.685 16:53:22 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:45:21.685 16:53:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:21.685 16:53:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:21.685 16:53:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:21.685 16:53:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:21.685 16:53:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:45:21.980 16:53:22 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:45:21.980 16:53:22 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:45:21.980 16:53:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:45:22.264 16:53:22 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:45:22.264 16:53:22 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:45:22.264 16:53:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:22.264 16:53:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:22.264 16:53:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:22.523 16:53:23 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:45:22.523 16:53:23 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:45:22.523 16:53:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:22.523 16:53:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:22.523 16:53:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:22.523 16:53:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:22.523 16:53:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:22.781 16:53:23 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:45:22.781 16:53:23 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:45:22.781 16:53:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:45:23.347 16:53:23 keyring_file -- keyring/file.sh@105 -- # bperf_cmd 
keyring_get_keys 00:45:23.347 16:53:23 keyring_file -- keyring/file.sh@105 -- # jq length 00:45:23.347 16:53:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:23.347 16:53:23 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:45:23.347 16:53:23 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.kjjeqjeLcx 00:45:23.347 16:53:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.kjjeqjeLcx 00:45:23.616 16:53:24 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.zKXD9znAvq 00:45:23.616 16:53:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.zKXD9znAvq 00:45:23.874 16:53:24 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:23.874 16:53:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:24.441 nvme0n1 00:45:24.441 16:53:24 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:45:24.441 16:53:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:45:24.699 16:53:25 keyring_file -- keyring/file.sh@113 -- # config='{ 00:45:24.699 "subsystems": [ 00:45:24.699 { 00:45:24.699 "subsystem": "keyring", 00:45:24.699 "config": [ 00:45:24.699 { 00:45:24.699 "method": "keyring_file_add_key", 00:45:24.699 
"params": { 00:45:24.699 "name": "key0", 00:45:24.699 "path": "/tmp/tmp.kjjeqjeLcx" 00:45:24.699 } 00:45:24.699 }, 00:45:24.699 { 00:45:24.699 "method": "keyring_file_add_key", 00:45:24.699 "params": { 00:45:24.699 "name": "key1", 00:45:24.699 "path": "/tmp/tmp.zKXD9znAvq" 00:45:24.699 } 00:45:24.699 } 00:45:24.699 ] 00:45:24.699 }, 00:45:24.699 { 00:45:24.699 "subsystem": "iobuf", 00:45:24.699 "config": [ 00:45:24.700 { 00:45:24.700 "method": "iobuf_set_options", 00:45:24.700 "params": { 00:45:24.700 "small_pool_count": 8192, 00:45:24.700 "large_pool_count": 1024, 00:45:24.700 "small_bufsize": 8192, 00:45:24.700 "large_bufsize": 135168 00:45:24.700 } 00:45:24.700 } 00:45:24.700 ] 00:45:24.700 }, 00:45:24.700 { 00:45:24.700 "subsystem": "sock", 00:45:24.700 "config": [ 00:45:24.700 { 00:45:24.700 "method": "sock_set_default_impl", 00:45:24.700 "params": { 00:45:24.700 "impl_name": "posix" 00:45:24.700 } 00:45:24.700 }, 00:45:24.700 { 00:45:24.700 "method": "sock_impl_set_options", 00:45:24.700 "params": { 00:45:24.700 "impl_name": "ssl", 00:45:24.700 "recv_buf_size": 4096, 00:45:24.700 "send_buf_size": 4096, 00:45:24.700 "enable_recv_pipe": true, 00:45:24.700 "enable_quickack": false, 00:45:24.700 "enable_placement_id": 0, 00:45:24.700 "enable_zerocopy_send_server": true, 00:45:24.700 "enable_zerocopy_send_client": false, 00:45:24.700 "zerocopy_threshold": 0, 00:45:24.700 "tls_version": 0, 00:45:24.700 "enable_ktls": false 00:45:24.700 } 00:45:24.700 }, 00:45:24.700 { 00:45:24.700 "method": "sock_impl_set_options", 00:45:24.700 "params": { 00:45:24.700 "impl_name": "posix", 00:45:24.700 "recv_buf_size": 2097152, 00:45:24.700 "send_buf_size": 2097152, 00:45:24.700 "enable_recv_pipe": true, 00:45:24.700 "enable_quickack": false, 00:45:24.700 "enable_placement_id": 0, 00:45:24.700 "enable_zerocopy_send_server": true, 00:45:24.700 "enable_zerocopy_send_client": false, 00:45:24.700 "zerocopy_threshold": 0, 00:45:24.700 "tls_version": 0, 00:45:24.700 "enable_ktls": false 
00:45:24.700 } 00:45:24.700 } 00:45:24.700 ] 00:45:24.700 }, 00:45:24.700 { 00:45:24.700 "subsystem": "vmd", 00:45:24.700 "config": [] 00:45:24.700 }, 00:45:24.700 { 00:45:24.700 "subsystem": "accel", 00:45:24.700 "config": [ 00:45:24.700 { 00:45:24.700 "method": "accel_set_options", 00:45:24.700 "params": { 00:45:24.700 "small_cache_size": 128, 00:45:24.700 "large_cache_size": 16, 00:45:24.700 "task_count": 2048, 00:45:24.700 "sequence_count": 2048, 00:45:24.700 "buf_count": 2048 00:45:24.700 } 00:45:24.700 } 00:45:24.700 ] 00:45:24.700 }, 00:45:24.700 { 00:45:24.700 "subsystem": "bdev", 00:45:24.700 "config": [ 00:45:24.700 { 00:45:24.700 "method": "bdev_set_options", 00:45:24.700 "params": { 00:45:24.700 "bdev_io_pool_size": 65535, 00:45:24.700 "bdev_io_cache_size": 256, 00:45:24.700 "bdev_auto_examine": true, 00:45:24.700 "iobuf_small_cache_size": 128, 00:45:24.700 "iobuf_large_cache_size": 16 00:45:24.700 } 00:45:24.700 }, 00:45:24.700 { 00:45:24.700 "method": "bdev_raid_set_options", 00:45:24.700 "params": { 00:45:24.700 "process_window_size_kb": 1024, 00:45:24.700 "process_max_bandwidth_mb_sec": 0 00:45:24.700 } 00:45:24.700 }, 00:45:24.700 { 00:45:24.700 "method": "bdev_iscsi_set_options", 00:45:24.700 "params": { 00:45:24.700 "timeout_sec": 30 00:45:24.700 } 00:45:24.700 }, 00:45:24.700 { 00:45:24.700 "method": "bdev_nvme_set_options", 00:45:24.700 "params": { 00:45:24.700 "action_on_timeout": "none", 00:45:24.700 "timeout_us": 0, 00:45:24.700 "timeout_admin_us": 0, 00:45:24.700 "keep_alive_timeout_ms": 10000, 00:45:24.700 "arbitration_burst": 0, 00:45:24.700 "low_priority_weight": 0, 00:45:24.700 "medium_priority_weight": 0, 00:45:24.700 "high_priority_weight": 0, 00:45:24.700 "nvme_adminq_poll_period_us": 10000, 00:45:24.700 "nvme_ioq_poll_period_us": 0, 00:45:24.700 "io_queue_requests": 512, 00:45:24.700 "delay_cmd_submit": true, 00:45:24.700 "transport_retry_count": 4, 00:45:24.700 "bdev_retry_count": 3, 00:45:24.700 "transport_ack_timeout": 0, 
00:45:24.700 "ctrlr_loss_timeout_sec": 0, 00:45:24.700 "reconnect_delay_sec": 0, 00:45:24.700 "fast_io_fail_timeout_sec": 0, 00:45:24.700 "disable_auto_failback": false, 00:45:24.700 "generate_uuids": false, 00:45:24.700 "transport_tos": 0, 00:45:24.700 "nvme_error_stat": false, 00:45:24.700 "rdma_srq_size": 0, 00:45:24.700 "io_path_stat": false, 00:45:24.700 "allow_accel_sequence": false, 00:45:24.700 "rdma_max_cq_size": 0, 00:45:24.700 "rdma_cm_event_timeout_ms": 0, 00:45:24.700 "dhchap_digests": [ 00:45:24.700 "sha256", 00:45:24.700 "sha384", 00:45:24.700 "sha512" 00:45:24.700 ], 00:45:24.700 "dhchap_dhgroups": [ 00:45:24.700 "null", 00:45:24.700 "ffdhe2048", 00:45:24.700 "ffdhe3072", 00:45:24.700 "ffdhe4096", 00:45:24.700 "ffdhe6144", 00:45:24.700 "ffdhe8192" 00:45:24.700 ] 00:45:24.700 } 00:45:24.700 }, 00:45:24.700 { 00:45:24.700 "method": "bdev_nvme_attach_controller", 00:45:24.700 "params": { 00:45:24.700 "name": "nvme0", 00:45:24.700 "trtype": "TCP", 00:45:24.700 "adrfam": "IPv4", 00:45:24.700 "traddr": "127.0.0.1", 00:45:24.700 "trsvcid": "4420", 00:45:24.700 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:24.700 "prchk_reftag": false, 00:45:24.700 "prchk_guard": false, 00:45:24.700 "ctrlr_loss_timeout_sec": 0, 00:45:24.700 "reconnect_delay_sec": 0, 00:45:24.700 "fast_io_fail_timeout_sec": 0, 00:45:24.700 "psk": "key0", 00:45:24.700 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:24.700 "hdgst": false, 00:45:24.700 "ddgst": false 00:45:24.700 } 00:45:24.700 }, 00:45:24.700 { 00:45:24.700 "method": "bdev_nvme_set_hotplug", 00:45:24.700 "params": { 00:45:24.700 "period_us": 100000, 00:45:24.700 "enable": false 00:45:24.700 } 00:45:24.700 }, 00:45:24.700 { 00:45:24.700 "method": "bdev_wait_for_examine" 00:45:24.700 } 00:45:24.700 ] 00:45:24.700 }, 00:45:24.700 { 00:45:24.701 "subsystem": "nbd", 00:45:24.701 "config": [] 00:45:24.701 } 00:45:24.701 ] 00:45:24.701 }' 00:45:24.701 16:53:25 keyring_file -- keyring/file.sh@115 -- # killprocess 3414743 00:45:24.701 
16:53:25 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3414743 ']' 00:45:24.701 16:53:25 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3414743 00:45:24.701 16:53:25 keyring_file -- common/autotest_common.sh@955 -- # uname 00:45:24.701 16:53:25 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:24.701 16:53:25 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3414743 00:45:24.701 16:53:25 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:45:24.701 16:53:25 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:45:24.701 16:53:25 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3414743' 00:45:24.701 killing process with pid 3414743 00:45:24.701 16:53:25 keyring_file -- common/autotest_common.sh@969 -- # kill 3414743 00:45:24.701 Received shutdown signal, test time was about 1.000000 seconds 00:45:24.701 00:45:24.701 Latency(us) 00:45:24.701 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:24.701 =================================================================================================================== 00:45:24.701 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:45:24.701 16:53:25 keyring_file -- common/autotest_common.sh@974 -- # wait 3414743 00:45:26.076 16:53:26 keyring_file -- keyring/file.sh@118 -- # bperfpid=3416471 00:45:26.076 16:53:26 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3416471 /var/tmp/bperf.sock 00:45:26.076 16:53:26 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3416471 ']' 00:45:26.076 16:53:26 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:45:26.076 16:53:26 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:45:26.076 16:53:26 keyring_file -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:45:26.077 16:53:26 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:45:26.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:45:26.077 16:53:26 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:45:26.077 "subsystems": [ 00:45:26.077 { 00:45:26.077 "subsystem": "keyring", 00:45:26.077 "config": [ 00:45:26.077 { 00:45:26.077 "method": "keyring_file_add_key", 00:45:26.077 "params": { 00:45:26.077 "name": "key0", 00:45:26.077 "path": "/tmp/tmp.kjjeqjeLcx" 00:45:26.077 } 00:45:26.077 }, 00:45:26.077 { 00:45:26.077 "method": "keyring_file_add_key", 00:45:26.077 "params": { 00:45:26.077 "name": "key1", 00:45:26.077 "path": "/tmp/tmp.zKXD9znAvq" 00:45:26.077 } 00:45:26.077 } 00:45:26.077 ] 00:45:26.077 }, 00:45:26.077 { 00:45:26.077 "subsystem": "iobuf", 00:45:26.077 "config": [ 00:45:26.077 { 00:45:26.077 "method": "iobuf_set_options", 00:45:26.077 "params": { 00:45:26.077 "small_pool_count": 8192, 00:45:26.077 "large_pool_count": 1024, 00:45:26.077 "small_bufsize": 8192, 00:45:26.077 "large_bufsize": 135168 00:45:26.077 } 00:45:26.077 } 00:45:26.077 ] 00:45:26.077 }, 00:45:26.077 { 00:45:26.077 "subsystem": "sock", 00:45:26.077 "config": [ 00:45:26.077 { 00:45:26.077 "method": "sock_set_default_impl", 00:45:26.077 "params": { 00:45:26.077 "impl_name": "posix" 00:45:26.077 } 00:45:26.077 }, 00:45:26.077 { 00:45:26.077 "method": "sock_impl_set_options", 00:45:26.077 "params": { 00:45:26.077 "impl_name": "ssl", 00:45:26.077 "recv_buf_size": 4096, 00:45:26.077 "send_buf_size": 4096, 00:45:26.077 "enable_recv_pipe": true, 00:45:26.077 "enable_quickack": false, 00:45:26.077 "enable_placement_id": 0, 00:45:26.077 "enable_zerocopy_send_server": true, 00:45:26.077 "enable_zerocopy_send_client": false, 00:45:26.077 "zerocopy_threshold": 0, 00:45:26.077 "tls_version": 0, 
00:45:26.077 "enable_ktls": false 00:45:26.077 } 00:45:26.077 }, 00:45:26.077 { 00:45:26.077 "method": "sock_impl_set_options", 00:45:26.077 "params": { 00:45:26.077 "impl_name": "posix", 00:45:26.077 "recv_buf_size": 2097152, 00:45:26.077 "send_buf_size": 2097152, 00:45:26.077 "enable_recv_pipe": true, 00:45:26.077 "enable_quickack": false, 00:45:26.077 "enable_placement_id": 0, 00:45:26.077 "enable_zerocopy_send_server": true, 00:45:26.077 "enable_zerocopy_send_client": false, 00:45:26.077 "zerocopy_threshold": 0, 00:45:26.077 "tls_version": 0, 00:45:26.077 "enable_ktls": false 00:45:26.077 } 00:45:26.077 } 00:45:26.077 ] 00:45:26.077 }, 00:45:26.077 { 00:45:26.077 "subsystem": "vmd", 00:45:26.077 "config": [] 00:45:26.077 }, 00:45:26.077 { 00:45:26.077 "subsystem": "accel", 00:45:26.077 "config": [ 00:45:26.077 { 00:45:26.077 "method": "accel_set_options", 00:45:26.077 "params": { 00:45:26.077 "small_cache_size": 128, 00:45:26.077 "large_cache_size": 16, 00:45:26.077 "task_count": 2048, 00:45:26.077 "sequence_count": 2048, 00:45:26.077 "buf_count": 2048 00:45:26.077 } 00:45:26.077 } 00:45:26.077 ] 00:45:26.077 }, 00:45:26.077 { 00:45:26.077 "subsystem": "bdev", 00:45:26.077 "config": [ 00:45:26.077 { 00:45:26.077 "method": "bdev_set_options", 00:45:26.077 "params": { 00:45:26.077 "bdev_io_pool_size": 65535, 00:45:26.077 "bdev_io_cache_size": 256, 00:45:26.077 "bdev_auto_examine": true, 00:45:26.077 "iobuf_small_cache_size": 128, 00:45:26.077 "iobuf_large_cache_size": 16 00:45:26.077 } 00:45:26.077 }, 00:45:26.077 { 00:45:26.077 "method": "bdev_raid_set_options", 00:45:26.077 "params": { 00:45:26.077 "process_window_size_kb": 1024, 00:45:26.077 "process_max_bandwidth_mb_sec": 0 00:45:26.077 } 00:45:26.077 }, 00:45:26.077 { 00:45:26.077 "method": "bdev_iscsi_set_options", 00:45:26.077 "params": { 00:45:26.077 "timeout_sec": 30 00:45:26.077 } 00:45:26.077 }, 00:45:26.077 { 00:45:26.077 "method": "bdev_nvme_set_options", 00:45:26.077 "params": { 00:45:26.077 
"action_on_timeout": "none", 00:45:26.077 "timeout_us": 0, 00:45:26.077 "timeout_admin_us": 0, 00:45:26.077 "keep_alive_timeout_ms": 10000, 00:45:26.077 "arbitration_burst": 0, 00:45:26.077 "low_priority_weight": 0, 00:45:26.077 "medium_priority_weight": 0, 00:45:26.077 "high_priority_weight": 0, 00:45:26.077 "nvme_adminq_poll_period_us": 10000, 00:45:26.077 "nvme_ioq_poll_period_us": 0, 00:45:26.077 "io_queue_requests": 512, 00:45:26.077 "delay_cmd_submit": true, 00:45:26.077 "transport_retry_count": 4, 00:45:26.077 "bdev_retry_count": 3, 00:45:26.077 "transport_ack_timeout": 0, 00:45:26.077 "ctrlr_loss_timeout_sec": 0, 00:45:26.077 "reconnect_delay_sec": 0, 00:45:26.077 "fast_io_fail_timeout_sec": 0, 00:45:26.077 "disable_auto_failback": false, 00:45:26.077 "generate_uuids": false, 00:45:26.077 "transport_tos": 0, 00:45:26.077 "nvme_error_stat": false, 00:45:26.077 "rdma_srq_size": 0, 00:45:26.077 "io_path_stat": false, 00:45:26.077 "allow_accel_sequence": false, 00:45:26.077 "rdma_max_cq_size": 0, 00:45:26.077 "rdma_cm_event_timeout_ms": 0, 00:45:26.077 "dhchap_digests": [ 00:45:26.077 "sha256", 00:45:26.077 "sha384", 00:45:26.077 "sha512" 00:45:26.077 ], 00:45:26.077 "dhchap_dhgroups": [ 00:45:26.077 "null", 00:45:26.077 "ffdhe2048", 00:45:26.077 "ffdhe3072", 00:45:26.077 "ffdhe4096", 00:45:26.077 "ffdhe6144", 00:45:26.077 "ffdhe8192" 00:45:26.077 ] 00:45:26.077 } 00:45:26.077 }, 00:45:26.077 { 00:45:26.077 "method": "bdev_nvme_attach_controller", 00:45:26.077 "params": { 00:45:26.077 "name": "nvme0", 00:45:26.077 "trtype": "TCP", 00:45:26.077 "adrfam": "IPv4", 00:45:26.077 "traddr": "127.0.0.1", 00:45:26.077 "trsvcid": "4420", 00:45:26.077 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:26.077 "prchk_reftag": false, 00:45:26.077 "prchk_guard": false, 00:45:26.077 "ctrlr_loss_timeout_sec": 0, 00:45:26.077 "reconnect_delay_sec": 0, 00:45:26.077 "fast_io_fail_timeout_sec": 0, 00:45:26.077 "psk": "key0", 00:45:26.077 "hostnqn": "nqn.2016-06.io.spdk:host0", 
00:45:26.077 "hdgst": false, 00:45:26.077 "ddgst": false 00:45:26.077 } 00:45:26.077 }, 00:45:26.077 { 00:45:26.077 "method": "bdev_nvme_set_hotplug", 00:45:26.077 "params": { 00:45:26.077 "period_us": 100000, 00:45:26.077 "enable": false 00:45:26.077 } 00:45:26.077 }, 00:45:26.077 { 00:45:26.077 "method": "bdev_wait_for_examine" 00:45:26.077 } 00:45:26.077 ] 00:45:26.077 }, 00:45:26.077 { 00:45:26.077 "subsystem": "nbd", 00:45:26.077 "config": [] 00:45:26.077 } 00:45:26.077 ] 00:45:26.077 }' 00:45:26.077 16:53:26 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:26.077 16:53:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:26.077 [2024-09-29 16:53:26.332412] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:45:26.077 [2024-09-29 16:53:26.332559] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3416471 ] 00:45:26.077 [2024-09-29 16:53:26.462385] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:26.336 [2024-09-29 16:53:26.716267] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:45:26.900 [2024-09-29 16:53:27.168109] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:26.900 16:53:27 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:26.900 16:53:27 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:45:26.900 16:53:27 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:45:26.900 16:53:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:26.900 16:53:27 keyring_file -- keyring/file.sh@121 -- # jq length 00:45:27.157 16:53:27 keyring_file -- keyring/file.sh@121 -- # 
(( 2 == 2 )) 00:45:27.157 16:53:27 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:45:27.157 16:53:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:27.157 16:53:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:27.157 16:53:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:27.157 16:53:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:27.157 16:53:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:27.416 16:53:27 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:45:27.416 16:53:27 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:45:27.416 16:53:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:27.416 16:53:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:27.416 16:53:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:27.416 16:53:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:27.416 16:53:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:27.673 16:53:28 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:45:27.673 16:53:28 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:45:27.673 16:53:28 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:45:27.673 16:53:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:45:27.931 16:53:28 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:45:27.931 16:53:28 keyring_file -- keyring/file.sh@1 -- # cleanup 00:45:27.931 16:53:28 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.kjjeqjeLcx /tmp/tmp.zKXD9znAvq 00:45:27.931 16:53:28 
keyring_file -- keyring/file.sh@20 -- # killprocess 3416471 00:45:27.931 16:53:28 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3416471 ']' 00:45:27.931 16:53:28 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3416471 00:45:27.931 16:53:28 keyring_file -- common/autotest_common.sh@955 -- # uname 00:45:27.931 16:53:28 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:27.931 16:53:28 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3416471 00:45:27.931 16:53:28 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:45:27.931 16:53:28 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:45:27.931 16:53:28 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3416471' 00:45:27.931 killing process with pid 3416471 00:45:27.931 16:53:28 keyring_file -- common/autotest_common.sh@969 -- # kill 3416471 00:45:27.931 Received shutdown signal, test time was about 1.000000 seconds 00:45:27.931 00:45:27.931 Latency(us) 00:45:27.931 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:27.931 =================================================================================================================== 00:45:27.931 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:45:27.931 16:53:28 keyring_file -- common/autotest_common.sh@974 -- # wait 3416471 00:45:29.302 16:53:29 keyring_file -- keyring/file.sh@21 -- # killprocess 3414599 00:45:29.302 16:53:29 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3414599 ']' 00:45:29.302 16:53:29 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3414599 00:45:29.302 16:53:29 keyring_file -- common/autotest_common.sh@955 -- # uname 00:45:29.302 16:53:29 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:29.302 16:53:29 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3414599 
00:45:29.302 16:53:29 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:45:29.302 16:53:29 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:45:29.302 16:53:29 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3414599' 00:45:29.302 killing process with pid 3414599 00:45:29.302 16:53:29 keyring_file -- common/autotest_common.sh@969 -- # kill 3414599 00:45:29.302 16:53:29 keyring_file -- common/autotest_common.sh@974 -- # wait 3414599 00:45:31.831 00:45:31.831 real 0m20.667s 00:45:31.831 user 0m46.347s 00:45:31.831 sys 0m3.820s 00:45:31.831 16:53:31 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:31.831 16:53:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:31.831 ************************************ 00:45:31.831 END TEST keyring_file 00:45:31.831 ************************************ 00:45:31.831 16:53:32 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:45:31.831 16:53:32 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:45:31.831 16:53:32 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:45:31.831 16:53:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:45:31.831 16:53:32 -- common/autotest_common.sh@10 -- # set +x 00:45:31.831 ************************************ 00:45:31.831 START TEST keyring_linux 00:45:31.831 ************************************ 00:45:31.831 16:53:32 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:45:31.831 Joined session keyring: 461531107 00:45:31.831 * Looking for test storage... 
00:45:31.831 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:45:31.831 16:53:32 keyring_linux -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:45:31.831 16:53:32 keyring_linux -- common/autotest_common.sh@1681 -- # lcov --version 00:45:31.831 16:53:32 keyring_linux -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:45:31.831 16:53:32 keyring_linux -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:45:31.831 16:53:32 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:31.831 16:53:32 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:31.831 16:53:32 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:31.831 16:53:32 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:45:31.831 16:53:32 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:45:31.831 16:53:32 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:45:31.831 16:53:32 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:45:31.831 16:53:32 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:45:31.831 16:53:32 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:45:31.831 16:53:32 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:45:31.831 16:53:32 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:31.831 16:53:32 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:45:31.831 16:53:32 keyring_linux -- scripts/common.sh@345 -- # : 1 00:45:31.831 16:53:32 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:31.831 16:53:32 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:31.831 16:53:32 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:45:31.831 16:53:32 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:45:31.831 16:53:32 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:31.831 16:53:32 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:45:31.831 16:53:32 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:45:31.831 16:53:32 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:45:31.831 16:53:32 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:45:31.831 16:53:32 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:31.831 16:53:32 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:45:31.831 16:53:32 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:45:31.831 16:53:32 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:31.831 16:53:32 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:31.831 16:53:32 keyring_linux -- scripts/common.sh@368 -- # return 0 00:45:31.831 16:53:32 keyring_linux -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:31.831 16:53:32 keyring_linux -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:45:31.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:31.832 --rc genhtml_branch_coverage=1 00:45:31.832 --rc genhtml_function_coverage=1 00:45:31.832 --rc genhtml_legend=1 00:45:31.832 --rc geninfo_all_blocks=1 00:45:31.832 --rc geninfo_unexecuted_blocks=1 00:45:31.832 00:45:31.832 ' 00:45:31.832 16:53:32 keyring_linux -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:45:31.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:31.832 --rc genhtml_branch_coverage=1 00:45:31.832 --rc genhtml_function_coverage=1 00:45:31.832 --rc genhtml_legend=1 00:45:31.832 --rc geninfo_all_blocks=1 00:45:31.832 --rc geninfo_unexecuted_blocks=1 00:45:31.832 00:45:31.832 ' 
00:45:31.832 16:53:32 keyring_linux -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:45:31.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:31.832 --rc genhtml_branch_coverage=1 00:45:31.832 --rc genhtml_function_coverage=1 00:45:31.832 --rc genhtml_legend=1 00:45:31.832 --rc geninfo_all_blocks=1 00:45:31.832 --rc geninfo_unexecuted_blocks=1 00:45:31.832 00:45:31.832 ' 00:45:31.832 16:53:32 keyring_linux -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:45:31.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:31.832 --rc genhtml_branch_coverage=1 00:45:31.832 --rc genhtml_function_coverage=1 00:45:31.832 --rc genhtml_legend=1 00:45:31.832 --rc geninfo_all_blocks=1 00:45:31.832 --rc geninfo_unexecuted_blocks=1 00:45:31.832 00:45:31.832 ' 00:45:31.832 16:53:32 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:45:31.832 16:53:32 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:31.832 16:53:32 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:45:31.832 16:53:32 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:31.832 16:53:32 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:31.832 16:53:32 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:31.832 16:53:32 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:31.832 16:53:32 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:31.832 16:53:32 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:31.832 16:53:32 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:31.832 16:53:32 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:31.832 16:53:32 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:31.832 16:53:32 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:45:31.832 16:53:32 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:45:31.832 16:53:32 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:45:31.832 16:53:32 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:31.832 16:53:32 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:31.832 16:53:32 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:31.832 16:53:32 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:31.832 16:53:32 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:31.832 16:53:32 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:45:31.832 16:53:32 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:31.832 16:53:32 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:31.832 16:53:32 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:31.832 16:53:32 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:31.832 16:53:32 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:31.832 16:53:32 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:31.832 16:53:32 keyring_linux -- paths/export.sh@5 -- # export PATH 00:45:31.832 16:53:32 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:31.832 16:53:32 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:45:31.832 16:53:32 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:31.832 16:53:32 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:31.832 16:53:32 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:31.832 16:53:32 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:31.832 16:53:32 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:31.832 16:53:32 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:45:31.832 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:31.832 16:53:32 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:31.832 16:53:32 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:31.832 16:53:32 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:31.832 16:53:32 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:45:31.832 16:53:32 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:45:31.832 16:53:32 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:45:31.832 16:53:32 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:45:31.832 16:53:32 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:45:31.832 16:53:32 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:45:31.832 16:53:32 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:45:31.832 16:53:32 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:45:31.832 16:53:32 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:45:31.832 16:53:32 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:45:31.832 16:53:32 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:45:31.832 16:53:32 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:45:31.832 16:53:32 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:45:31.832 16:53:32 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:45:31.832 16:53:32 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:45:31.832 16:53:32 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:45:31.832 16:53:32 keyring_linux -- nvmf/common.sh@728 -- # 
key=00112233445566778899aabbccddeeff 00:45:31.832 16:53:32 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:45:31.832 16:53:32 keyring_linux -- nvmf/common.sh@729 -- # python - 00:45:31.832 16:53:32 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:45:31.832 16:53:32 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:45:31.832 /tmp/:spdk-test:key0 00:45:31.832 16:53:32 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:45:31.832 16:53:32 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:45:31.832 16:53:32 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:45:31.832 16:53:32 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:45:31.832 16:53:32 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:45:31.832 16:53:32 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:45:31.832 16:53:32 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:45:31.832 16:53:32 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:45:31.832 16:53:32 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:45:31.832 16:53:32 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:45:31.832 16:53:32 keyring_linux -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:45:31.832 16:53:32 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:45:31.832 16:53:32 keyring_linux -- nvmf/common.sh@729 -- # python - 00:45:31.832 16:53:32 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:45:31.832 16:53:32 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:45:31.832 /tmp/:spdk-test:key1 00:45:31.832 16:53:32 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3417234 00:45:31.832 16:53:32 keyring_linux -- keyring/linux.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:45:31.832 16:53:32 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3417234 00:45:31.832 16:53:32 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 3417234 ']' 00:45:31.832 16:53:32 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:31.832 16:53:32 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:31.832 16:53:32 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:31.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:31.832 16:53:32 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:31.832 16:53:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:45:31.832 [2024-09-29 16:53:32.349085] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:45:31.832 [2024-09-29 16:53:32.349242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3417234 ] 00:45:32.090 [2024-09-29 16:53:32.481846] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:32.349 [2024-09-29 16:53:32.737814] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:45:33.283 16:53:33 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:33.283 16:53:33 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:45:33.283 16:53:33 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:45:33.283 16:53:33 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:33.283 16:53:33 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:45:33.283 [2024-09-29 16:53:33.698060] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:33.283 null0 00:45:33.283 [2024-09-29 16:53:33.730108] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:45:33.283 [2024-09-29 16:53:33.730747] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:45:33.283 16:53:33 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:33.283 16:53:33 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:45:33.283 970978235 00:45:33.283 16:53:33 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:45:33.283 1001862087 00:45:33.283 16:53:33 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3417381 00:45:33.283 16:53:33 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k 
-w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:45:33.283 16:53:33 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3417381 /var/tmp/bperf.sock 00:45:33.283 16:53:33 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 3417381 ']' 00:45:33.283 16:53:33 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:45:33.283 16:53:33 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:33.283 16:53:33 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:45:33.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:45:33.283 16:53:33 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:33.283 16:53:33 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:45:33.283 [2024-09-29 16:53:33.835571] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:45:33.283 [2024-09-29 16:53:33.835747] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3417381 ] 00:45:33.541 [2024-09-29 16:53:33.972329] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:33.799 [2024-09-29 16:53:34.223513] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:45:34.365 16:53:34 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:34.365 16:53:34 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:45:34.365 16:53:34 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:45:34.365 16:53:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:45:34.623 16:53:35 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:45:34.623 16:53:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:45:35.189 16:53:35 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:45:35.190 16:53:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:45:35.448 [2024-09-29 16:53:35.921554] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:35.448 nvme0n1 00:45:35.706 16:53:36 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:45:35.706 16:53:36 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:45:35.706 16:53:36 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:45:35.706 16:53:36 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:45:35.706 16:53:36 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:45:35.706 16:53:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:35.964 16:53:36 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:45:35.964 16:53:36 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:45:35.964 16:53:36 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:45:35.964 16:53:36 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:45:35.964 16:53:36 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:35.964 16:53:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:35.964 16:53:36 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:45:36.223 16:53:36 keyring_linux -- keyring/linux.sh@25 -- # sn=970978235 00:45:36.223 16:53:36 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:45:36.223 16:53:36 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:45:36.223 16:53:36 keyring_linux -- keyring/linux.sh@26 -- # [[ 970978235 == \9\7\0\9\7\8\2\3\5 ]] 00:45:36.223 16:53:36 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 970978235 00:45:36.223 16:53:36 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:45:36.223 16:53:36 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:45:36.223 Running I/O for 1 seconds... 00:45:37.158 5956.00 IOPS, 23.27 MiB/s 00:45:37.158 Latency(us) 00:45:37.158 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:37.158 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:45:37.158 nvme0n1 : 1.02 5975.69 23.34 0.00 0.00 21206.07 6092.42 28932.93 00:45:37.158 =================================================================================================================== 00:45:37.158 Total : 5975.69 23.34 0.00 0.00 21206.07 6092.42 28932.93 00:45:37.158 { 00:45:37.158 "results": [ 00:45:37.158 { 00:45:37.158 "job": "nvme0n1", 00:45:37.158 "core_mask": "0x2", 00:45:37.158 "workload": "randread", 00:45:37.158 "status": "finished", 00:45:37.158 "queue_depth": 128, 00:45:37.158 "io_size": 4096, 00:45:37.158 "runtime": 1.018292, 00:45:37.158 "iops": 5975.692630404638, 00:45:37.158 "mibps": 23.34254933751812, 00:45:37.158 "io_failed": 0, 00:45:37.158 "io_timeout": 0, 00:45:37.158 "avg_latency_us": 21206.074618460698, 00:45:37.158 "min_latency_us": 6092.420740740741, 00:45:37.158 "max_latency_us": 28932.93037037037 00:45:37.158 } 00:45:37.158 ], 00:45:37.158 "core_count": 1 00:45:37.158 } 00:45:37.416 16:53:37 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:45:37.416 16:53:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:45:37.674 16:53:37 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:45:37.674 16:53:37 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:45:37.674 16:53:37 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:45:37.674 16:53:38 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:45:37.674 
16:53:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:37.674 16:53:38 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:45:37.932 16:53:38 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:45:37.932 16:53:38 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:45:37.932 16:53:38 keyring_linux -- keyring/linux.sh@23 -- # return 00:45:37.932 16:53:38 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:45:37.932 16:53:38 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:45:37.932 16:53:38 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:45:37.932 16:53:38 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:45:37.932 16:53:38 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:37.932 16:53:38 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:45:37.932 16:53:38 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:37.932 16:53:38 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:45:37.932 16:53:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:45:38.190 [2024-09-29 
16:53:38.526701] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:45:38.190 [2024-09-29 16:53:38.527647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7780 (107): Transport endpoint is not connected 00:45:38.190 [2024-09-29 16:53:38.528620] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7780 (9): Bad file descriptor 00:45:38.190 [2024-09-29 16:53:38.529613] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:45:38.190 [2024-09-29 16:53:38.529650] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:45:38.190 [2024-09-29 16:53:38.529683] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:45:38.190 [2024-09-29 16:53:38.529734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:45:38.190 request: 00:45:38.190 { 00:45:38.190 "name": "nvme0", 00:45:38.190 "trtype": "tcp", 00:45:38.190 "traddr": "127.0.0.1", 00:45:38.190 "adrfam": "ipv4", 00:45:38.190 "trsvcid": "4420", 00:45:38.190 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:38.190 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:38.190 "prchk_reftag": false, 00:45:38.190 "prchk_guard": false, 00:45:38.190 "hdgst": false, 00:45:38.190 "ddgst": false, 00:45:38.190 "psk": ":spdk-test:key1", 00:45:38.190 "allow_unrecognized_csi": false, 00:45:38.190 "method": "bdev_nvme_attach_controller", 00:45:38.190 "req_id": 1 00:45:38.190 } 00:45:38.190 Got JSON-RPC error response 00:45:38.190 response: 00:45:38.190 { 00:45:38.190 "code": -5, 00:45:38.190 "message": "Input/output error" 00:45:38.190 } 00:45:38.190 16:53:38 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:45:38.190 16:53:38 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:45:38.190 16:53:38 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:45:38.190 16:53:38 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:45:38.190 16:53:38 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:45:38.190 16:53:38 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:45:38.190 16:53:38 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:45:38.190 16:53:38 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:45:38.190 16:53:38 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:45:38.190 16:53:38 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:45:38.190 16:53:38 keyring_linux -- keyring/linux.sh@33 -- # sn=970978235 00:45:38.190 16:53:38 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 970978235 00:45:38.190 1 links removed 00:45:38.190 16:53:38 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:45:38.190 16:53:38 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:45:38.190 
16:53:38 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:45:38.190 16:53:38 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:45:38.190 16:53:38 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:45:38.190 16:53:38 keyring_linux -- keyring/linux.sh@33 -- # sn=1001862087 00:45:38.190 16:53:38 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1001862087 00:45:38.190 1 links removed 00:45:38.190 16:53:38 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3417381 00:45:38.190 16:53:38 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 3417381 ']' 00:45:38.190 16:53:38 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 3417381 00:45:38.190 16:53:38 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:45:38.191 16:53:38 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:38.191 16:53:38 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3417381 00:45:38.191 16:53:38 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:45:38.191 16:53:38 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:45:38.191 16:53:38 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3417381' 00:45:38.191 killing process with pid 3417381 00:45:38.191 16:53:38 keyring_linux -- common/autotest_common.sh@969 -- # kill 3417381 00:45:38.191 Received shutdown signal, test time was about 1.000000 seconds 00:45:38.191 00:45:38.191 Latency(us) 00:45:38.191 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:38.191 =================================================================================================================== 00:45:38.191 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:45:38.191 16:53:38 keyring_linux -- common/autotest_common.sh@974 -- # wait 3417381 00:45:39.124 16:53:39 keyring_linux -- keyring/linux.sh@42 -- # killprocess 
3417234 00:45:39.124 16:53:39 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 3417234 ']' 00:45:39.124 16:53:39 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 3417234 00:45:39.124 16:53:39 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:45:39.124 16:53:39 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:39.124 16:53:39 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3417234 00:45:39.382 16:53:39 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:45:39.382 16:53:39 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:45:39.382 16:53:39 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3417234' 00:45:39.382 killing process with pid 3417234 00:45:39.382 16:53:39 keyring_linux -- common/autotest_common.sh@969 -- # kill 3417234 00:45:39.382 16:53:39 keyring_linux -- common/autotest_common.sh@974 -- # wait 3417234 00:45:41.911 00:45:41.911 real 0m10.133s 00:45:41.911 user 0m17.134s 00:45:41.911 sys 0m2.020s 00:45:41.911 16:53:42 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:41.911 16:53:42 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:45:41.911 ************************************ 00:45:41.911 END TEST keyring_linux 00:45:41.911 ************************************ 00:45:41.911 16:53:42 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:45:41.911 16:53:42 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:45:41.911 16:53:42 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:45:41.911 16:53:42 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:45:41.911 16:53:42 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:45:41.911 16:53:42 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:45:41.911 16:53:42 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:45:41.911 16:53:42 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:45:41.911 16:53:42 -- spdk/autotest.sh@346 -- # '[' 0 
-eq 1 ']' 00:45:41.911 16:53:42 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:45:41.911 16:53:42 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:45:41.911 16:53:42 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:45:41.911 16:53:42 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:45:41.911 16:53:42 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:45:41.911 16:53:42 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:45:41.911 16:53:42 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:45:41.911 16:53:42 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:45:41.911 16:53:42 -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:41.911 16:53:42 -- common/autotest_common.sh@10 -- # set +x 00:45:41.911 16:53:42 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:45:41.911 16:53:42 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:45:41.911 16:53:42 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:45:41.911 16:53:42 -- common/autotest_common.sh@10 -- # set +x 00:45:43.810 INFO: APP EXITING 00:45:43.810 INFO: killing all VMs 00:45:43.810 INFO: killing vhost app 00:45:43.810 INFO: EXIT DONE 00:45:44.745 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:45:44.745 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:45:44.745 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:45:44.745 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:45:44.745 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:45:44.745 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:45:44.745 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:45:44.745 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:45:44.745 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:45:44.745 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:45:44.745 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:45:44.745 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:45:44.745 
0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:45:44.745 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:45:44.745 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:45:44.745 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:45:44.745 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:45:46.139 Cleaning 00:45:46.139 Removing: /var/run/dpdk/spdk0/config 00:45:46.139 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:45:46.139 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:45:46.139 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:45:46.139 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:45:46.139 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:45:46.139 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:45:46.139 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:45:46.139 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:45:46.139 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:45:46.139 Removing: /var/run/dpdk/spdk0/hugepage_info 00:45:46.139 Removing: /var/run/dpdk/spdk1/config 00:45:46.139 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:45:46.139 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:45:46.139 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:45:46.139 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:45:46.139 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:45:46.139 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:45:46.139 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:45:46.139 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:45:46.139 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:45:46.139 Removing: /var/run/dpdk/spdk1/hugepage_info 00:45:46.139 Removing: /var/run/dpdk/spdk2/config 00:45:46.139 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:45:46.139 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:45:46.140 Removing: 
/var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:45:46.140 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:45:46.140 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:45:46.140 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:45:46.140 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:45:46.140 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:45:46.140 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:45:46.140 Removing: /var/run/dpdk/spdk2/hugepage_info 00:45:46.140 Removing: /var/run/dpdk/spdk3/config 00:45:46.140 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:45:46.140 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:45:46.140 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:45:46.140 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:45:46.140 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:45:46.140 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:45:46.140 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:45:46.140 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:45:46.140 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:45:46.140 Removing: /var/run/dpdk/spdk3/hugepage_info 00:45:46.140 Removing: /var/run/dpdk/spdk4/config 00:45:46.140 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:45:46.140 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:45:46.140 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:45:46.140 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:45:46.140 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:45:46.140 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:45:46.140 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:45:46.140 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:45:46.140 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:45:46.140 Removing: /var/run/dpdk/spdk4/hugepage_info 00:45:46.140 Removing: /dev/shm/bdev_svc_trace.1 00:45:46.140 Removing: 
/dev/shm/nvmf_trace.0 00:45:46.140 Removing: /dev/shm/spdk_tgt_trace.pid3002835 00:45:46.140 Removing: /var/run/dpdk/spdk0 00:45:46.140 Removing: /var/run/dpdk/spdk1 00:45:46.140 Removing: /var/run/dpdk/spdk2 00:45:46.140 Removing: /var/run/dpdk/spdk3 00:45:46.140 Removing: /var/run/dpdk/spdk4 00:45:46.140 Removing: /var/run/dpdk/spdk_pid2999855 00:45:46.140 Removing: /var/run/dpdk/spdk_pid3001000 00:45:46.140 Removing: /var/run/dpdk/spdk_pid3002835 00:45:46.140 Removing: /var/run/dpdk/spdk_pid3003722 00:45:46.140 Removing: /var/run/dpdk/spdk_pid3005189 00:45:46.140 Removing: /var/run/dpdk/spdk_pid3005608 00:45:46.140 Removing: /var/run/dpdk/spdk_pid3006601 00:45:46.140 Removing: /var/run/dpdk/spdk_pid3006855 00:45:46.140 Removing: /var/run/dpdk/spdk_pid3007397 00:45:46.140 Removing: /var/run/dpdk/spdk_pid3008857 00:45:46.140 Removing: /var/run/dpdk/spdk_pid3010168 00:45:46.140 Removing: /var/run/dpdk/spdk_pid3010764 00:45:46.140 Removing: /var/run/dpdk/spdk_pid3011367 00:45:46.140 Removing: /var/run/dpdk/spdk_pid3011973 00:45:46.140 Removing: /var/run/dpdk/spdk_pid3012575 00:45:46.140 Removing: /var/run/dpdk/spdk_pid3012856 00:45:46.140 Removing: /var/run/dpdk/spdk_pid3013017 00:45:46.140 Removing: /var/run/dpdk/spdk_pid3013338 00:45:46.140 Removing: /var/run/dpdk/spdk_pid3014057 00:45:46.140 Removing: /var/run/dpdk/spdk_pid3016820 00:45:46.140 Removing: /var/run/dpdk/spdk_pid3017385 00:45:46.140 Removing: /var/run/dpdk/spdk_pid3017941 00:45:46.140 Removing: /var/run/dpdk/spdk_pid3018082 00:45:46.140 Removing: /var/run/dpdk/spdk_pid3019452 00:45:46.140 Removing: /var/run/dpdk/spdk_pid3019707 00:45:46.140 Removing: /var/run/dpdk/spdk_pid3021073 00:45:46.140 Removing: /var/run/dpdk/spdk_pid3021219 00:45:46.140 Removing: /var/run/dpdk/spdk_pid3021660 00:45:46.140 Removing: /var/run/dpdk/spdk_pid3021918 00:45:46.140 Removing: /var/run/dpdk/spdk_pid3022232 00:45:46.140 Removing: /var/run/dpdk/spdk_pid3022490 00:45:46.140 Removing: /var/run/dpdk/spdk_pid3023530 
00:45:46.140 Removing: /var/run/dpdk/spdk_pid3023817 00:45:46.140 Removing: /var/run/dpdk/spdk_pid3024147 00:45:46.140 Removing: /var/run/dpdk/spdk_pid3026787 00:45:46.140 Removing: /var/run/dpdk/spdk_pid3029805 00:45:46.140 Removing: /var/run/dpdk/spdk_pid3037571 00:45:46.140 Removing: /var/run/dpdk/spdk_pid3037987 00:45:46.140 Removing: /var/run/dpdk/spdk_pid3040654 00:45:46.140 Removing: /var/run/dpdk/spdk_pid3040927 00:45:46.140 Removing: /var/run/dpdk/spdk_pid3043965 00:45:46.140 Removing: /var/run/dpdk/spdk_pid3047957 00:45:46.140 Removing: /var/run/dpdk/spdk_pid3050368 00:45:46.140 Removing: /var/run/dpdk/spdk_pid3057778 00:45:46.140 Removing: /var/run/dpdk/spdk_pid3063525 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3065487 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3066293 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3077602 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3080287 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3138503 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3141936 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3146154 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3152098 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3182192 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3185387 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3186563 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3188134 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3188411 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3188772 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3189088 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3189931 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3191459 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3192904 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3193622 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3195627 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3196442 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3197275 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3199950 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3203749 00:45:46.399 Removing: 
/var/run/dpdk/spdk_pid3203750 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3203751 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3206345 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3209204 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3212849 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3237089 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3240008 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3244167 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3245673 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3247395 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3248949 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3252218 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3254929 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3259621 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3259739 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3262788 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3262925 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3263176 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3263480 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3263582 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3264785 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3266581 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3267756 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3268935 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3270181 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3271413 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3275371 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3275822 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3277217 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3278075 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3282063 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3284172 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3287994 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3291585 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3299124 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3303931 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3303958 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3317008 
00:45:46.399 Removing: /var/run/dpdk/spdk_pid3317683 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3318342 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3319005 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3319992 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3320647 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3321201 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3321862 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3324757 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3325164 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3329822 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3330009 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3333518 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3336384 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3343425 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3343823 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3346596 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3346790 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3349770 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3353719 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3356009 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3363917 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3369515 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3370886 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3371732 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3382712 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3385352 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3387500 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3393546 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3393559 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3396705 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3398219 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3399741 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3400673 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3402143 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3403147 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3408807 00:45:46.399 Removing: /var/run/dpdk/spdk_pid3409197 00:45:46.399 Removing: 
/var/run/dpdk/spdk_pid3409586 00:45:46.400 Removing: /var/run/dpdk/spdk_pid3411480 00:45:46.400 Removing: /var/run/dpdk/spdk_pid3411809 00:45:46.400 Removing: /var/run/dpdk/spdk_pid3412157 00:45:46.400 Removing: /var/run/dpdk/spdk_pid3414599 00:45:46.400 Removing: /var/run/dpdk/spdk_pid3414743 00:45:46.400 Removing: /var/run/dpdk/spdk_pid3416471 00:45:46.400 Removing: /var/run/dpdk/spdk_pid3417234 00:45:46.400 Removing: /var/run/dpdk/spdk_pid3417381 00:45:46.400 Clean 00:45:46.658 16:53:47 -- common/autotest_common.sh@1451 -- # return 0 00:45:46.658 16:53:47 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:45:46.658 16:53:47 -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:46.658 16:53:47 -- common/autotest_common.sh@10 -- # set +x 00:45:46.658 16:53:47 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:45:46.658 16:53:47 -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:46.658 16:53:47 -- common/autotest_common.sh@10 -- # set +x 00:45:46.658 16:53:47 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:45:46.658 16:53:47 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:45:46.658 16:53:47 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:45:46.658 16:53:47 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:45:46.658 16:53:47 -- spdk/autotest.sh@394 -- # hostname 00:45:46.658 16:53:47 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:45:46.916 geninfo: WARNING: invalid characters removed from testname! 
00:46:19.026 16:54:16 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:19.960 16:54:20 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:23.242 16:54:23 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:25.770 16:54:26 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:29.049 16:54:28 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc 
genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:31.576 16:54:31 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:34.860 16:54:34 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:46:34.860 16:54:34 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:46:34.860 16:54:34 -- common/autotest_common.sh@1681 -- $ lcov --version 00:46:34.860 16:54:34 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:46:34.860 16:54:34 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:46:34.860 16:54:34 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:46:34.860 16:54:34 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:46:34.860 16:54:34 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:46:34.860 16:54:34 -- scripts/common.sh@336 -- $ IFS=.-: 00:46:34.860 16:54:34 -- scripts/common.sh@336 -- $ read -ra ver1 00:46:34.860 16:54:34 -- scripts/common.sh@337 -- $ IFS=.-: 00:46:34.860 16:54:34 -- scripts/common.sh@337 -- $ read -ra ver2 00:46:34.860 16:54:34 -- scripts/common.sh@338 -- $ local 'op=<' 00:46:34.860 16:54:34 -- scripts/common.sh@340 -- $ ver1_l=2 00:46:34.860 16:54:34 -- scripts/common.sh@341 -- $ ver2_l=1 00:46:34.860 16:54:34 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:46:34.860 16:54:34 -- scripts/common.sh@344 -- $ case "$op" in 00:46:34.860 16:54:34 -- scripts/common.sh@345 -- $ : 1 00:46:34.860 16:54:34 -- scripts/common.sh@364 
-- $ (( v = 0 )) 00:46:34.860 16:54:34 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:46:34.860 16:54:34 -- scripts/common.sh@365 -- $ decimal 1 00:46:34.860 16:54:34 -- scripts/common.sh@353 -- $ local d=1 00:46:34.860 16:54:34 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:46:34.860 16:54:34 -- scripts/common.sh@355 -- $ echo 1 00:46:34.860 16:54:34 -- scripts/common.sh@365 -- $ ver1[v]=1 00:46:34.860 16:54:34 -- scripts/common.sh@366 -- $ decimal 2 00:46:34.860 16:54:34 -- scripts/common.sh@353 -- $ local d=2 00:46:34.860 16:54:34 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:46:34.860 16:54:34 -- scripts/common.sh@355 -- $ echo 2 00:46:34.860 16:54:34 -- scripts/common.sh@366 -- $ ver2[v]=2 00:46:34.860 16:54:34 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:46:34.860 16:54:34 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:46:34.860 16:54:34 -- scripts/common.sh@368 -- $ return 0 00:46:34.860 16:54:34 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:34.860 16:54:34 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:46:34.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:34.860 --rc genhtml_branch_coverage=1 00:46:34.860 --rc genhtml_function_coverage=1 00:46:34.860 --rc genhtml_legend=1 00:46:34.860 --rc geninfo_all_blocks=1 00:46:34.860 --rc geninfo_unexecuted_blocks=1 00:46:34.860 00:46:34.860 ' 00:46:34.860 16:54:34 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:46:34.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:34.860 --rc genhtml_branch_coverage=1 00:46:34.860 --rc genhtml_function_coverage=1 00:46:34.860 --rc genhtml_legend=1 00:46:34.860 --rc geninfo_all_blocks=1 00:46:34.860 --rc geninfo_unexecuted_blocks=1 00:46:34.860 00:46:34.860 ' 00:46:34.860 16:54:34 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:46:34.860 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:46:34.860 --rc genhtml_branch_coverage=1 00:46:34.860 --rc genhtml_function_coverage=1 00:46:34.860 --rc genhtml_legend=1 00:46:34.860 --rc geninfo_all_blocks=1 00:46:34.860 --rc geninfo_unexecuted_blocks=1 00:46:34.860 00:46:34.860 ' 00:46:34.860 16:54:34 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:46:34.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:34.860 --rc genhtml_branch_coverage=1 00:46:34.860 --rc genhtml_function_coverage=1 00:46:34.860 --rc genhtml_legend=1 00:46:34.860 --rc geninfo_all_blocks=1 00:46:34.860 --rc geninfo_unexecuted_blocks=1 00:46:34.860 00:46:34.860 ' 00:46:34.860 16:54:34 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:46:34.860 16:54:34 -- scripts/common.sh@15 -- $ shopt -s extglob 00:46:34.860 16:54:34 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:46:34.860 16:54:34 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:34.860 16:54:34 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:34.860 16:54:34 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:34.860 16:54:34 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:34.860 16:54:34 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:34.860 16:54:34 -- paths/export.sh@5 -- $ export PATH 00:46:34.860 16:54:34 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:34.860 16:54:34 -- common/autobuild_common.sh@478 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:46:34.860 16:54:34 -- common/autobuild_common.sh@479 -- $ date +%s 00:46:34.860 16:54:34 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727621674.XXXXXX 00:46:34.860 16:54:34 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727621674.Wz4NZe 00:46:34.860 16:54:34 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:46:34.860 16:54:34 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']' 00:46:34.861 16:54:34 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:46:34.861 16:54:34 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:46:34.861 16:54:34 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:46:34.861 16:54:34 -- common/autobuild_common.sh@495 -- $ get_config_params 00:46:34.861 16:54:34 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:46:34.861 16:54:34 -- common/autotest_common.sh@10 -- $ set +x 00:46:34.861 16:54:34 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:46:34.861 16:54:34 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:46:34.861 16:54:34 -- pm/common@17 -- $ local monitor 00:46:34.861 16:54:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:46:34.861 16:54:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:46:34.861 16:54:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:46:34.861 16:54:34 -- pm/common@21 -- $ date +%s 00:46:34.861 16:54:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:46:34.861 16:54:34 -- pm/common@21 -- $ date +%s 00:46:34.861 16:54:34 -- pm/common@25 -- $ sleep 1 00:46:34.861 16:54:34 -- pm/common@21 -- $ date +%s 00:46:34.861 16:54:34 -- pm/common@21 -- $ date +%s 00:46:34.861 16:54:34 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1727621674 00:46:34.861 16:54:34 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1727621674 00:46:34.861 16:54:34 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p 
monitor.autopackage.sh.1727621674 00:46:34.861 16:54:34 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1727621674 00:46:34.861 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1727621674_collect-cpu-load.pm.log 00:46:34.861 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1727621674_collect-vmstat.pm.log 00:46:34.861 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1727621674_collect-cpu-temp.pm.log 00:46:34.861 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1727621674_collect-bmc-pm.bmc.pm.log 00:46:35.448 16:54:35 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:46:35.448 16:54:35 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:46:35.448 16:54:35 -- spdk/autopackage.sh@14 -- $ timing_finish 00:46:35.448 16:54:35 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:46:35.448 16:54:35 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:46:35.448 16:54:35 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:46:35.448 16:54:35 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:46:35.449 16:54:35 -- pm/common@29 -- $ signal_monitor_resources TERM 00:46:35.449 16:54:35 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:46:35.449 16:54:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:46:35.449 16:54:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 
00:46:35.449 16:54:35 -- pm/common@44 -- $ pid=3431768 00:46:35.449 16:54:35 -- pm/common@50 -- $ kill -TERM 3431768 00:46:35.449 16:54:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:46:35.449 16:54:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:46:35.449 16:54:35 -- pm/common@44 -- $ pid=3431770 00:46:35.449 16:54:35 -- pm/common@50 -- $ kill -TERM 3431770 00:46:35.449 16:54:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:46:35.449 16:54:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:46:35.449 16:54:35 -- pm/common@44 -- $ pid=3431772 00:46:35.449 16:54:35 -- pm/common@50 -- $ kill -TERM 3431772 00:46:35.449 16:54:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:46:35.449 16:54:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:46:35.449 16:54:35 -- pm/common@44 -- $ pid=3431801 00:46:35.449 16:54:35 -- pm/common@50 -- $ sudo -E kill -TERM 3431801 00:46:35.449 + [[ -n 2927975 ]] 00:46:35.449 + sudo kill 2927975 00:46:35.455 [Pipeline] } 00:46:35.464 [Pipeline] // stage 00:46:35.467 [Pipeline] } 00:46:35.476 [Pipeline] // timeout 00:46:35.479 [Pipeline] } 00:46:35.488 [Pipeline] // catchError 00:46:35.491 [Pipeline] } 00:46:35.500 [Pipeline] // wrap 00:46:35.504 [Pipeline] } 00:46:35.511 [Pipeline] // catchError 00:46:35.517 [Pipeline] stage 00:46:35.518 [Pipeline] { (Epilogue) 00:46:35.526 [Pipeline] catchError 00:46:35.527 [Pipeline] { 00:46:35.534 [Pipeline] echo 00:46:35.535 Cleanup processes 00:46:35.538 [Pipeline] sh 00:46:35.815 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:46:35.815 3431967 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:46:35.815 3432079 sudo pgrep -af 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:46:35.828 [Pipeline] sh 00:46:36.110 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:46:36.111 ++ awk '{print $1}' 00:46:36.111 ++ grep -v 'sudo pgrep' 00:46:36.111 + sudo kill -9 3431967 00:46:36.123 [Pipeline] sh 00:46:36.405 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:46:48.605 [Pipeline] sh 00:46:48.880 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:46:48.880 Artifacts sizes are good 00:46:48.890 [Pipeline] archiveArtifacts 00:46:48.894 Archiving artifacts 00:46:49.073 [Pipeline] sh 00:46:49.404 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:46:49.420 [Pipeline] cleanWs 00:46:49.430 [WS-CLEANUP] Deleting project workspace... 00:46:49.430 [WS-CLEANUP] Deferred wipeout is used... 00:46:49.436 [WS-CLEANUP] done 00:46:49.440 [Pipeline] } 00:46:49.454 [Pipeline] // catchError 00:46:49.465 [Pipeline] sh 00:46:49.741 + logger -p user.info -t JENKINS-CI 00:46:49.749 [Pipeline] } 00:46:49.764 [Pipeline] // stage 00:46:49.769 [Pipeline] } 00:46:49.783 [Pipeline] // node 00:46:49.789 [Pipeline] End of Pipeline 00:46:49.835 Finished: SUCCESS